Multidimensional Chebyshev's inequality

From Wikipedia, the free encyclopedia

In probability theory, the multidimensional Chebyshev's inequality[1] is a generalization of Chebyshev's inequality, which puts a bound on the probability of the event that a random variable differs from its expected value by more than a specified amount.

Let $X$ be an $N$-dimensional random vector with expected value $\mu = \operatorname{E}[X]$ and covariance matrix

$$V = \operatorname{E}\left[(X - \mu)(X - \mu)^{T}\right].$$

If $V$ is a positive-definite matrix, then for any real number $t > 0$:

$$\Pr\left(\sqrt{(X - \mu)^{T} V^{-1} (X - \mu)} > t\right) \le \frac{N}{t^{2}}.$$
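The bound $\Pr\left(\sqrt{(X-\mu)^{T}V^{-1}(X-\mu)} > t\right) \le N/t^{2}$ can be spot-checked numerically. The sketch below draws from a multivariate Gaussian purely for illustration (the bound itself is distribution-free; the dimension, covariance, sample size, and seed are all arbitrary choices):

```python
import numpy as np

# Monte Carlo sanity check of the multidimensional Chebyshev bound:
#   Pr( sqrt((X - mu)^T V^{-1} (X - mu)) > t ) <= N / t^2
# The Gaussian distribution here is an illustrative assumption;
# the inequality holds for any distribution with covariance V.

rng = np.random.default_rng(0)

N = 3                                    # dimension (arbitrary choice)
mu = np.array([1.0, -2.0, 0.5])          # mean vector (arbitrary choice)
A = rng.standard_normal((N, N))
V = A @ A.T + N * np.eye(N)              # a positive-definite covariance matrix

samples = rng.multivariate_normal(mu, V, size=200_000)  # rows are draws of X
Vinv = np.linalg.inv(V)
d = samples - mu
# sqrt of the quadratic form (X - mu)^T V^{-1} (X - mu), per sample
mahal = np.sqrt(np.einsum("ij,jk,ik->i", d, Vinv, d))

for t in (2.0, 3.0, 4.0):
    empirical = np.mean(mahal > t)
    bound = N / t**2
    print(f"t={t}: empirical tail={empirical:.4f}  Chebyshev bound={bound:.4f}")
    assert empirical <= bound
```

For Gaussian data the empirical tail is far below the bound, which is expected: Chebyshev-type bounds are worst-case over all distributions with the given covariance.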

Proof


Since $V$ is positive-definite, so is $V^{-1}$. Define the random variable

$$y = (X - \mu)^{T} V^{-1} (X - \mu).$$

Since $y$ is nonnegative, Markov's inequality applies:

$$\Pr\left(\sqrt{(X - \mu)^{T} V^{-1} (X - \mu)} > t\right) = \Pr(\sqrt{y} > t) = \Pr(y > t^{2}) \le \frac{\operatorname{E}[y]}{t^{2}}.$$

Finally, using the identity $a^{T} B a = \operatorname{trace}(B a a^{T})$ together with linearity of expectation,

$$\operatorname{E}[y] = \operatorname{E}\left[(X - \mu)^{T} V^{-1} (X - \mu)\right] = \operatorname{E}\left[\operatorname{trace}\left(V^{-1} (X - \mu)(X - \mu)^{T}\right)\right] = \operatorname{trace}\left(V^{-1} V\right) = N.$$[1][2]
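The trace step in this computation can be verified numerically. A minimal sketch (the dimension, seed, and covariance below are arbitrary illustrative choices):

```python
import numpy as np

# Numerical check of the trace identity used in the proof:
#   (x - mu)^T V^{-1} (x - mu) == trace(V^{-1} (x - mu)(x - mu)^T)
# and, averaging over draws of X, E[y] ~ trace(V^{-1} V) = N.

rng = np.random.default_rng(1)

N = 4
mu = rng.standard_normal(N)
A = rng.standard_normal((N, N))
V = A @ A.T + N * np.eye(N)              # positive-definite covariance
Vinv = np.linalg.inv(V)

# Pointwise identity for a single fixed vector x:
x = rng.standard_normal(N)
d = x - mu
quad = d @ Vinv @ d                      # quadratic form
tr = np.trace(Vinv @ np.outer(d, d))     # trace form
assert np.isclose(quad, tr)

# Averaging over many draws of X ~ N(mu, V) recovers E[y] = N:
samples = rng.multivariate_normal(mu, V, size=500_000)
D = samples - mu
y = np.einsum("ij,jk,ik->i", D, Vinv, D)
print(y.mean())                          # close to N = 4
```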

Infinite dimensions


There is a straightforward extension of the vector version of Chebyshev's inequality to infinite-dimensional settings.[3] Let $X$ be a random variable which takes values in a Fréchet space $\mathcal{X}$ (equipped with seminorms $\|\cdot\|_{\alpha}$). This includes most common settings of vector-valued random variables, e.g., when $\mathcal{X}$ is a Banach space (equipped with a single norm), a Hilbert space, or the finite-dimensional setting described above.

Suppose that $X$ is of "strong order two", meaning that

$$\operatorname{E}\left(\|X\|_{\alpha}^{2}\right) < \infty$$

for every seminorm $\|\cdot\|_{\alpha}$. This is a generalization of the requirement that $X$ have finite variance, and is necessary for this strong form of Chebyshev's inequality in infinite dimensions. The terminology "strong order two" is due to Vakhania.[4]

Let $\mu \in \mathcal{X}$ be the Pettis integral of $X$ (i.e., the vector generalization of the mean), and let

$$\sigma_{\alpha} := \sqrt{\operatorname{E}\|X - \mu\|_{\alpha}^{2}}$$

be the standard deviation with respect to the seminorm $\|\cdot\|_{\alpha}$. In this setting we can state the following:

General version of Chebyshev's inequality. $\forall k > 0: \quad \Pr\left(\|X - \mu\|_{\alpha} \ge k\sigma_{\alpha}\right) \le \frac{1}{k^{2}}.$

Proof. The proof is straightforward, and essentially the same as the finitary version. If $\sigma_{\alpha} = 0$, then $X$ is constant (and equal to $\mu$) almost surely, so the inequality is trivial.

If

$$\|X - \mu\|_{\alpha} \ge k\sigma_{\alpha},$$

then $\|X - \mu\|_{\alpha} > 0$, so we may safely divide by $\|X - \mu\|_{\alpha}$. The crucial trick in Chebyshev's inequality is to recognize that

$$1 = \frac{\|X - \mu\|_{\alpha}^{2}}{\|X - \mu\|_{\alpha}^{2}}.$$

The following calculations complete the proof:

$$\begin{aligned}
\Pr\left(\|X - \mu\|_{\alpha} \ge k\sigma_{\alpha}\right)
&= \int_{\Omega} \mathbf{1}_{\|X - \mu\|_{\alpha} \ge k\sigma_{\alpha}} \, d\Pr \\
&= \int_{\Omega} \left(\frac{\|X - \mu\|_{\alpha}^{2}}{\|X - \mu\|_{\alpha}^{2}}\right) \mathbf{1}_{\|X - \mu\|_{\alpha} \ge k\sigma_{\alpha}} \, d\Pr \\
&\le \int_{\Omega} \left(\frac{\|X - \mu\|_{\alpha}^{2}}{(k\sigma_{\alpha})^{2}}\right) \mathbf{1}_{\|X - \mu\|_{\alpha} \ge k\sigma_{\alpha}} \, d\Pr \\
&\le \frac{1}{k^{2}\sigma_{\alpha}^{2}} \int_{\Omega} \|X - \mu\|_{\alpha}^{2} \, d\Pr
&& \left(\text{since } \mathbf{1}_{\|X - \mu\|_{\alpha} \ge k\sigma_{\alpha}} \le 1\right) \\
&= \frac{1}{k^{2}\sigma_{\alpha}^{2}} \left(\operatorname{E}\|X - \mu\|_{\alpha}^{2}\right)
= \frac{1}{k^{2}\sigma_{\alpha}^{2}} \left(\sigma_{\alpha}^{2}\right)
= \frac{1}{k^{2}}.
\end{aligned}$$
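The seminorm version can also be spot-checked numerically. In the sketch below, the distribution, seed, and the map $\|x\|_{\alpha} = |x_{1}|$ on $\mathbb{R}^{3}$ are illustrative assumptions; that map is a genuine seminorm rather than a norm, since it vanishes on a whole subspace:

```python
import numpy as np

# Monte Carlo spot-check of the seminorm version of Chebyshev's inequality:
#   Pr(||X - mu||_a >= k * sigma_a) <= 1 / k^2
# with the seminorm ||x||_a = |x_1| on R^3 (zero on the subspace x_1 = 0).
# The Gaussian marginals are an illustrative assumption; the bound is
# distribution-free.

rng = np.random.default_rng(2)

mu = np.array([0.5, -1.0, 2.0])
scales = np.array([2.0, 0.3, 1.5])                  # per-coordinate spread
X = mu + rng.standard_normal((300_000, 3)) * scales  # rows are draws of X

seminorm = np.abs((X - mu)[:, 0])                   # ||X - mu||_a = |X_1 - mu_1|
sigma_a = np.sqrt(np.mean(seminorm**2))             # std. dev. w.r.t. ||.||_a

for k in (1.5, 2.0, 3.0):
    empirical = np.mean(seminorm >= k * sigma_a)
    print(f"k={k}: empirical tail={empirical:.4f}  bound={1/k**2:.4f}")
    assert empirical <= 1 / k**2
```

Note that only the first coordinate of $X$ matters here, exactly because the chosen seminorm ignores the rest; the inequality is stated per seminorm $\|\cdot\|_{\alpha}$.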

References

  1. ^ a b [citation not recoverable]
  2. ^ [citation not recoverable]
  3. ^ [citation not recoverable]
  4. ^ Vakhania, Nikolai Nikolaevich. Probability distributions on linear spaces. New York: North Holland, 1981.