
Sub-Gaussian distribution

In probability theory, a subgaussian distribution, the distribution of a subgaussian random variable, is a probability distribution with strong tail decay. More specifically, the tails of a subgaussian distribution are dominated by (i.e. decay at least as fast as) the tails of a Gaussian. This property gives subgaussian distributions their name.

Often in analysis, we divide an object (such as a random variable) into two parts, a central bulk and a distant tail, then analyze each separately. In probability, this division usually goes like "Everything interesting happens near the center. The tail event is so rare, we may safely ignore that." Subgaussian distributions are worthy of study, because the gaussian distribution is well-understood, and so we can give sharp bounds on the rarity of the tail event. Similarly, the subexponential distributions are also worthy of study.

Formally, the probability distribution of a random variable $X$ is called subgaussian if there is a positive constant $C$ such that for every $t \ge 0$,

$\operatorname{P}(|X| \ge t) \le 2\exp(-t^2/C^2)$.

There are many equivalent definitions. For example, a random variable $X$ is sub-Gaussian iff its distribution function is bounded from above (up to a constant) by the distribution function of a Gaussian:

$\operatorname{P}(|X| \ge t) \le c\,\operatorname{P}(|Z| \ge t)$ for all $t \ge 0$,

where $c$ is a constant and $Z$ is a mean zero Gaussian random variable.[1]: Theorem 2.6 

Definitions


Subgaussian norm


The subgaussian norm of $X$, denoted as $\|X\|_{\psi_2}$, is $\|X\|_{\psi_2} = \inf\{t > 0 : \operatorname{E}[e^{X^2/t^2}] \le 2\}$. In other words, it is the Orlicz norm of $X$ generated by the Orlicz function $\Phi(u) = e^{u^2} - 1$. By condition (2) below, subgaussian random variables can be characterized as those random variables with finite subgaussian norm.
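
This defining condition can be evaluated numerically. The following minimal Python sketch (using NumPy and SciPy; the helper names are illustrative) finds the subgaussian norm by root-finding on $t \mapsto \operatorname{E}[e^{X^2/t^2}]$, recovering $\sqrt{8/3} \approx 1.633$ for the standard Gaussian and approximately $0.7727$ for the uniform distribution on $[-1, 1]$ (compare the table of examples below).

```python
# A minimal sketch: computing the subgaussian norm numerically by root-finding on
# t -> E[exp(X^2/t^2)], which is decreasing in t.  Helper names are illustrative.
import numpy as np
from scipy import integrate, optimize

def orlicz_expectation(t, pdf, lo, hi):
    """E[exp(X^2 / t^2)] for a distribution with density pdf on (lo, hi)."""
    value, _ = integrate.quad(lambda x: np.exp(x ** 2 / t ** 2) * pdf(x), lo, hi)
    return value

def psi2_norm(pdf, lo, hi, bracket):
    """Smallest t with E[exp(X^2 / t^2)] <= 2, located inside the given bracket."""
    return optimize.brentq(lambda t: orlicz_expectation(t, pdf, lo, hi) - 2.0, *bracket)

gaussian_pdf = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
# The expectation is finite only for t > sqrt(2), so the bracket starts above that.
print(psi2_norm(gaussian_pdf, -np.inf, np.inf, bracket=(1.5, 10.0)))  # ~1.633 = sqrt(8/3)

uniform_pdf = lambda x: 0.5  # density of the uniform distribution on [-1, 1]
print(psi2_norm(uniform_pdf, -1.0, 1.0, bracket=(0.1, 10.0)))         # ~0.7727
```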

Variance proxy


If there exists some $s^2$ such that $\operatorname{E}[e^{\lambda(X - \operatorname{E}[X])}] \le e^{\lambda^2 s^2/2}$ for all $\lambda \in \mathbb{R}$, then $s^2$ is called a variance proxy, and the smallest such $s^2$ is called the optimal variance proxy and denoted by $\|X\|_{\mathrm{vp}}^2$.

Since $\operatorname{E}[e^{\lambda X}] = e^{\lambda^2\sigma^2/2}$ when $X \sim N(0, \sigma^2)$ is Gaussian, we then have $\|X\|_{\mathrm{vp}}^2 = \sigma^2$, as it should.
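
By the definition, the optimal variance proxy can equivalently be written as $\sup_{\lambda \ne 0} 2\ln\operatorname{E}[e^{\lambda(X - \operatorname{E}[X])}]/\lambda^2$, which is easy to evaluate numerically. A minimal Python sketch for the uniform distribution on $[-1, 1]$, whose optimal proxy equals its variance $1/3$ (it is strictly subgaussian, see below):

```python
# A minimal sketch: the optimal variance proxy equals
# sup_{lambda != 0} 2 ln E[exp(lambda (X - E X))] / lambda^2.
# For the uniform distribution on [-1, 1] the supremum is approached as lambda -> 0
# and equals the variance 1/3 (the distribution is strictly subgaussian, see below).
import numpy as np

def cgf_uniform(lam):
    """Cumulant generating function of U[-1, 1] (already mean zero): ln(sinh(lam)/lam)."""
    return np.log(np.sinh(lam) / lam)

lams = np.linspace(1e-3, 20.0, 2000)
ratios = 2 * cgf_uniform(lams) / lams ** 2
print(ratios.max(), 1 / 3)   # both ~0.3333
```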

Equivalent definitions


Let $X$ be a random variable. The following conditions are equivalent: (Proposition 2.5.2 [2])

  1. Tail probability bound: $\operatorname{P}(|X| \ge t) \le 2\exp(-t^2/K_1^2)$ for all $t \ge 0$, where $K_1$ is a positive constant;
  2. Finite subgaussian norm: $\|X\|_{\psi_2} = K_2 < \infty$;
  3. Moment: $\operatorname{E}[|X|^p] \le 2K_3^p\, p\, \Gamma(p/2)$ for all $p \ge 1$, where $K_3$ is a positive constant and $\Gamma$ is the Gamma function;
  4. Moment: $(\operatorname{E}[|X|^p])^{1/p} \le K_4\sqrt{p}$ for all $p \ge 1$;
  5. Moment-generating function (of $X$), or variance proxy[3][4] : $\operatorname{E}[e^{\lambda(X - \operatorname{E}[X])}] \le e^{\lambda^2 K_5^2/2}$ for all $\lambda \in \mathbb{R}$, where $K_5$ is a positive constant;
  6. Moment-generating function (of $X^2$): for some $K_6 > 0$, $\operatorname{E}[e^{\lambda^2 X^2}] \le e^{K_6^2\lambda^2}$ for all $\lambda$ with $|\lambda| \le 1/K_6$;
  7. Union bound: for some c > 0, $\operatorname{E}[\max\{|X_1 - \operatorname{E}[X]|, \ldots, |X_n - \operatorname{E}[X]|\}] \le c\sqrt{\ln n}$ for all n > c, where $X_1, \ldots, X_n$ are i.i.d. copies of $X$;
  8. Subexponential: $X^2$ has a subexponential distribution.

Furthermore, the constant $K$ is the same in the definitions (1) to (5), up to an absolute constant. So for example, given a random variable satisfying (1) and (2), the minimal constants $K_1, K_2$ in the two definitions satisfy $cK_2 \le K_1 \le CK_2$, where $c, C$ are constants independent of the random variable.

Proof of equivalence


As an example, the first four definitions are equivalent by the proof below.

Proof. $(1) \Rightarrow (3)$: By the layer cake representation,
$\operatorname{E}[|X|^p] = \int_0^\infty \operatorname{P}(|X|^p \ge u)\,du = \int_0^\infty p\,t^{p-1}\operatorname{P}(|X| \ge t)\,dt \le 2p\int_0^\infty t^{p-1} e^{-t^2/K_1^2}\,dt.$

After a change of variables $s = t^2/K_1^2$, we find that $\operatorname{E}[|X|^p] \le p K_1^p \int_0^\infty s^{p/2-1}e^{-s}\,ds = p K_1^p\,\Gamma(p/2) \le 2K_1^p\, p\,\Gamma(p/2)$.

$(3) \Rightarrow (2)$: By the Taylor series $e^x = \sum_{k \ge 0} x^k/k!$,
$\operatorname{E}[e^{X^2/t^2}] = 1 + \sum_{k=1}^\infty \frac{\operatorname{E}[X^{2k}]}{t^{2k}k!} \le 1 + \sum_{k=1}^\infty \frac{2K_3^{2k}\,2k\,\Gamma(k)}{t^{2k}k!} = 1 + 4\sum_{k=1}^\infty \Big(\frac{K_3^2}{t^2}\Big)^k = 1 + \frac{4K_3^2/t^2}{1 - K_3^2/t^2},$
which is less than or equal to $2$ for $t^2 \ge 5K_3^2$. Let $t = \sqrt{5}K_3$, then $\operatorname{E}[e^{X^2/t^2}] \le 2$, so $\|X\|_{\psi_2} \le \sqrt{5}K_3$.

$(2) \Rightarrow (1)$: By Markov's inequality,
$\operatorname{P}(|X| \ge t) = \operatorname{P}\big(e^{X^2/\|X\|_{\psi_2}^2} \ge e^{t^2/\|X\|_{\psi_2}^2}\big) \le e^{-t^2/\|X\|_{\psi_2}^2}\operatorname{E}\big[e^{X^2/\|X\|_{\psi_2}^2}\big] \le 2e^{-t^2/\|X\|_{\psi_2}^2}.$

$(3) \Leftrightarrow (4)$: by the asymptotic formula for the gamma function, $\Gamma(p/2) \sim \sqrt{4\pi/p}\,\big(\tfrac{p}{2e}\big)^{p/2}$, so $(2K_3^p\,p\,\Gamma(p/2))^{1/p}$ is comparable to $K_3\sqrt{p}$ up to an absolute constant factor.

From the proof, we can extract a cycle of three inequalities:

  • If $\operatorname{P}(|X| \ge t) \le 2e^{-t^2/K^2}$ for all $t \ge 0$, then $\operatorname{E}[|X|^p] \le 2K^p\, p\,\Gamma(p/2)$ for all $p \ge 1$.
  • If $\operatorname{E}[|X|^p] \le 2K^p\, p\,\Gamma(p/2)$ for all $p \ge 1$, then $\|X\|_{\psi_2} \le \sqrt{5}K$.
  • If $\|X\|_{\psi_2} \le K$, then $\operatorname{P}(|X| \ge t) \le 2e^{-t^2/K^2}$ for all $t \ge 0$.

In particular, the constants $K$ provided by the definitions are the same up to a constant factor, so we can say that the definitions are equivalent up to a constant independent of $X$.

Similarly, because $(2p\,\Gamma(p/2))^{1/p}$ is bounded above and below by positive multiples of $\sqrt{p}$ for all $p \ge 1$, the definitions (3) and (4) are also equivalent up to a constant.

Basic properties


Proposition.

  • If $X$ is subgaussian and $c > 0$, then $cX$ is subgaussian, with $\|cX\|_{\psi_2} = c\|X\|_{\psi_2}$ and $\|cX\|_{\mathrm{vp}}^2 = c^2\|X\|_{\mathrm{vp}}^2$.
  • If $X, Y$ are subgaussian, then $X + Y$ is subgaussian, with $\|X + Y\|_{\psi_2} \le \|X\|_{\psi_2} + \|Y\|_{\psi_2}$.

Proposition. (Chernoff bound) If $X$ is subgaussian, then $\operatorname{P}(X - \operatorname{E}[X] \ge t) \le \exp\big(-t^2/(2\|X\|_{\mathrm{vp}}^2)\big)$ for all $t \ge 0$.

Definition. $X \lesssim Y$ means that $X \le CY$, where the positive constant $C$ is independent of $X$ and $Y$.

Proposition. If $X$ is subgaussian, then $\|X - \operatorname{E}[X]\|_{\psi_2} \lesssim \|X\|_{\psi_2}$.

Proof. By the triangle inequality, $\|X - \operatorname{E}[X]\|_{\psi_2} \le \|X\|_{\psi_2} + \|\operatorname{E}[X]\|_{\psi_2}$. Now we have $\|\operatorname{E}[X]\|_{\psi_2} = |\operatorname{E}[X]|\,\|1\|_{\psi_2} = |\operatorname{E}[X]|/\sqrt{\ln 2} \lesssim \operatorname{E}[|X|]$. By the equivalence of definitions (2) and (4) of subgaussianity, given above, we have $\operatorname{E}[|X|] \lesssim \|X\|_{\psi_2}$.

Proposition. If $X, Y$ are subgaussian, then $\|X + Y\|_{\mathrm{vp}} \le \|X\|_{\mathrm{vp}} + \|Y\|_{\mathrm{vp}}$; if moreover $X, Y$ are independent, then $\|X + Y\|_{\mathrm{vp}}^2 \le \|X\|_{\mathrm{vp}}^2 + \|Y\|_{\mathrm{vp}}^2$.

Proof. Assume without loss of generality that $X$ and $Y$ have mean zero. If independent, then use that the cumulant generating function of independent random variables is additive. That is, $\ln\operatorname{E}[e^{\lambda(X+Y)}] = \ln\operatorname{E}[e^{\lambda X}] + \ln\operatorname{E}[e^{\lambda Y}] \le \tfrac{1}{2}\lambda^2\big(\|X\|_{\mathrm{vp}}^2 + \|Y\|_{\mathrm{vp}}^2\big)$.

If not independent, then by Hölder's inequality, for any $1/p + 1/q = 1$ we have $\operatorname{E}[e^{\lambda(X+Y)}] \le \big(\operatorname{E}[e^{p\lambda X}]\big)^{1/p}\big(\operatorname{E}[e^{q\lambda Y}]\big)^{1/q} \le \exp\Big(\tfrac{1}{2}\lambda^2\big(p\|X\|_{\mathrm{vp}}^2 + q\|Y\|_{\mathrm{vp}}^2\big)\Big)$. Solving the optimization problem $\min\big\{p\|X\|_{\mathrm{vp}}^2 + q\|Y\|_{\mathrm{vp}}^2 : 1/p + 1/q = 1\big\} = \big(\|X\|_{\mathrm{vp}} + \|Y\|_{\mathrm{vp}}\big)^2$, we obtain the result.

Corollary. Linear sums of subgaussian random variables are subgaussian.
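
For instance, the bound for independent summands can be checked numerically. A minimal Python sketch for the sum of an independent Rademacher variable (variance proxy $1$) and a uniform variable on $[-1, 1]$ (variance proxy $1/3$):

```python
# A minimal sketch: for independent X (Rademacher, variance proxy 1) and
# Y (uniform on [-1, 1], variance proxy 1/3), the moment-generating function of the
# sum stays below exp(lambda^2 (1 + 1/3) / 2), as the proposition predicts.
import numpy as np

lam = np.linspace(-10.0, 10.0, 2001)
lam = lam[lam != 0.0]                        # avoid the removable singularity at 0
mgf_sum = np.cosh(lam) * np.sinh(lam) / lam  # E[e^{lam X}] * E[e^{lam Y}]
bound = np.exp(lam ** 2 * (1.0 + 1.0 / 3.0) / 2.0)
print(np.all(mgf_sum <= bound))              # True
```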

Strictly subgaussian


Expanding the cumulant generating function: for mean-zero $X$, $\ln\operatorname{E}[e^{\lambda X}] = \tfrac{1}{2}\operatorname{Var}[X]\,\lambda^2 + O(\lambda^3)$ as $\lambda \to 0$, we find that $\|X\|_{\mathrm{vp}}^2 \ge \operatorname{Var}[X]$. At the edge of possibility, a random variable $X$ satisfying $\|X\|_{\mathrm{vp}}^2 = \operatorname{Var}[X]$ is called strictly subgaussian.

Properties


Theorem.[5] Let $X$ be a subgaussian random variable with mean zero. If all zeros of its characteristic function are real, then $X$ is strictly subgaussian.

Corollary. If   are independent and strictly subgaussian, then any linear sum of them is strictly subgaussian.

Examples


By calculating the characteristic functions, we can show that some distributions are strictly subgaussian: symmetric uniform distribution, symmetric Bernoulli distribution.

Since a symmetric uniform distribution is strictly subgaussian, its convolution with itself is strictly subgaussian. That is, the symmetric triangular distribution is strictly subgaussian.

Since the symmetric Bernoulli distribution is strictly subgaussian, any symmetric Binomial distribution is strictly subgaussian.

Examples

| Distribution | $\|X\|_{\mathrm{vp}}^2$ | $\|X\|_{\psi_2}$ | strictly subgaussian? |
|---|---|---|---|
| gaussian distribution $N(0, \sigma^2)$ | $\sigma^2$ | $\sigma\sqrt{8/3}$ | Yes |
| mean-zero Bernoulli distribution ($q$ with probability $p$, $-p$ with probability $q = 1 - p$) | $\dfrac{p - q}{2(\ln p - \ln q)}$ | solution to $p e^{q^2/t^2} + q e^{p^2/t^2} = 2$ | Iff $p = 1/2$ |
| symmetric Bernoulli distribution | $1$ | $1/\sqrt{\ln 2}$ | Yes |
| uniform distribution on $[-1, 1]$ | $1/3$ | solution to $\sqrt{\pi}\, t \operatorname{erfi}(1/t) = 4$, approximately $0.7727$ | Yes |
| arbitrary distribution on interval $[a, b]$ | $\le (b - a)^2/4$ | | |

The optimal variance proxy $\|X\|_{\mathrm{vp}}^2$ is known for many standard probability distributions, including the beta, Bernoulli, Dirichlet[6], Kumaraswamy, triangular[7], truncated Gaussian, and truncated exponential.[8]

Bernoulli distribution


Let $p, q$ be two positive numbers with $p + q = 1$. Let $X$ follow the centered Bernoulli distribution $p\delta_q + q\delta_{-p}$ (that is, $X = q$ with probability $p$ and $X = -p$ with probability $q$), so that it has mean zero; then $\|X\|_{\mathrm{vp}}^2 = \frac{p - q}{2(\ln p - \ln q)}$.[5] Its subgaussian norm is $t$, where $t$ is the unique positive solution to $p e^{q^2/t^2} + q e^{p^2/t^2} = 2$.
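
Both quantities are straightforward to compute. A minimal Python sketch (using SciPy root-finding; the parameter value $p = 0.2$ is only an example):

```python
# A minimal sketch: variance proxy and subgaussian norm of the centered Bernoulli
# variable taking the value q with probability p and -p with probability q = 1 - p.
# The parameter value p = 0.2 is only an example.
import numpy as np
from scipy.optimize import brentq

p = 0.2
q = 1.0 - p

# Optimal variance proxy (p - q) / (2 (ln p - ln q)); its limit at p = 1/2 is 1/4.
vp = (p - q) / (2.0 * (np.log(p) - np.log(q)))
print(vp)                                   # ~0.216, larger than the variance pq = 0.16

# Subgaussian norm: unique positive root t of p e^{q^2/t^2} + q e^{p^2/t^2} = 2.
f = lambda t: p * np.exp(q ** 2 / t ** 2) + q * np.exp(p ** 2 / t ** 2) - 2.0
print(brentq(f, 0.1, 10.0))                 # ~0.61
```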

Let $X$ be a random variable with symmetric Bernoulli distribution (or Rademacher distribution). That is, $X$ takes values $-1$ and $1$ with probabilities $1/2$ each. Since $X^2 = 1$, it follows that $\|X\|_{\psi_2} = \inf\{t > 0 : \operatorname{E}[e^{X^2/t^2}] \le 2\} = \inf\{t > 0 : e^{1/t^2} \le 2\} = \frac{1}{\sqrt{\ln 2}}$, and hence $X$ is a subgaussian random variable.

Bounded distributions

[Figure: Some commonly used bounded distributions.]

Bounded distributions have no tail at all, so clearly they are subgaussian.

If $X$ is bounded within the interval $[a, b]$, Hoeffding's lemma states that $\|X\|_{\mathrm{vp}}^2 \le \frac{(b - a)^2}{4}$. Hoeffding's inequality is the Chernoff bound obtained using this fact.
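
In terms of the moment-generating function, Hoeffding's lemma reads $\operatorname{E}[e^{\lambda(X - \operatorname{E}[X])}] \le e^{\lambda^2(b-a)^2/8}$, which can be checked directly for a bounded discrete variable. A minimal Python sketch (support and probabilities are arbitrary illustrative choices):

```python
# A minimal sketch: checking Hoeffding's lemma, E[e^{lam (X - E X)}] <= e^{lam^2 (b-a)^2 / 8},
# for a discrete variable supported in [a, b] = [0, 1].  Support and weights are arbitrary.
import numpy as np

values = np.array([0.0, 0.3, 1.0])
probs = np.array([0.5, 0.3, 0.2])
mean = values @ probs

lam = np.linspace(-20.0, 20.0, 4001)
mgf_centered = np.exp(np.outer(lam, values - mean)) @ probs
bound = np.exp(lam ** 2 * (1.0 - 0.0) ** 2 / 8.0)
print(np.all(mgf_centered <= bound))        # True
```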

Convolutions

[Figure: Density of a mixture of three normal distributions (μ = 5, 10, 15, σ = 2) with equal weights. Each component is shown as a weighted density (each integrating to 1/3).]

Since the sum of subgaussian random variables is still subgaussian, the convolution of subgaussian distributions is still subgaussian. In particular, any convolution of the normal distribution with any bounded distribution is subgaussian.

Mixtures


Given subgaussian distributions $X_1, X_2, \ldots, X_n$, we can construct an additive mixture $X$ as follows: first randomly pick an index $i$ with probabilities $p_1, \ldots, p_n$ (summing to one), then pick $X = X_i$.

Since $\operatorname{E}[e^{X^2/t^2}] = \sum_i p_i \operatorname{E}[e^{X_i^2/t^2}]$, we have $\|X\|_{\psi_2} \le \max_i \|X_i\|_{\psi_2}$, and so the mixture is subgaussian.

In particular, any gaussian mixture is subgaussian.

More generally, the mixture of infinitely many subgaussian distributions is also subgaussian, if the subgaussian norm has a finite supremum: $\sup_i \|X_i\|_{\psi_2} < \infty$.
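
For the mixture in the figure above (Gaussian components with means 5, 10, 15 and σ = 2, equal weights), the bound $\|X\|_{\psi_2} \le \max_i \|X_i\|_{\psi_2}$ can be verified numerically, using the closed form $\operatorname{E}[e^{X^2/t^2}] = e^{\mu^2/(t^2 - 2\sigma^2)}/\sqrt{1 - 2\sigma^2/t^2}$ for $X \sim N(\mu, \sigma^2)$ and $t^2 > 2\sigma^2$. A minimal Python sketch:

```python
# A minimal sketch: psi_2 norms of the three Gaussian components in the figure
# (means 5, 10, 15, sigma = 2, equal weights) and of their mixture, using the
# closed form E[exp(X^2/t^2)] = exp(mu^2/(t^2 - 2 sigma^2)) / sqrt(1 - 2 sigma^2/t^2)
# valid for X ~ N(mu, sigma^2) and t^2 > 2 sigma^2.
import numpy as np
from scipy.optimize import brentq

def orlicz_gaussian(t, mu, sigma):
    return np.exp(mu ** 2 / (t ** 2 - 2 * sigma ** 2)) / np.sqrt(1 - 2 * sigma ** 2 / t ** 2)

def psi2_norm(expectation, bracket):
    return brentq(lambda t: expectation(t) - 2.0, *bracket)

mus, sigma, weights = [5.0, 10.0, 15.0], 2.0, [1 / 3] * 3
bracket = (3.0, 100.0)                       # stays above sqrt(2) * sigma ~ 2.83
component_norms = [psi2_norm(lambda t: orlicz_gaussian(t, mu, sigma), bracket) for mu in mus]
mixture_norm = psi2_norm(lambda t: sum(w * orlicz_gaussian(t, mu, sigma)
                                       for w, mu in zip(weights, mus)), bracket)
print(component_norms)                       # grows with |mu|
print(mixture_norm, max(component_norms))    # the mixture norm does not exceed the max
```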

Subgaussian random vectors


So far, we have discussed subgaussianity for real-valued random variables. We can also define subgaussianity for random vectors. The purpose of subgaussianity is to make the tails decay fast, so we generalize accordingly: a subgaussian random vector is a random vector where the tail decays fast.

Let $X$ be a random vector taking values in $\mathbb{R}^n$.

Define.

  • $\|X\|_{\psi_2} := \sup_{v \in S^{n-1}} \|\langle X, v\rangle\|_{\psi_2}$, where $S^{n-1}$ is the unit sphere in $\mathbb{R}^n$.
  • $X$ is subgaussian iff $\|X\|_{\psi_2} < \infty$.

Theorem. (Theorem 3.4.6 [2]) For any positive integer $n$, the uniformly distributed random vector $X \sim U(\sqrt{n}\, S^{n-1})$ is subgaussian, with $\|X\|_{\psi_2} \le C$ for an absolute constant $C$ that does not depend on $n$.

This is not so surprising, because as $n \to \infty$, the projection of $X$ to the first coordinate converges in distribution to the standard normal distribution.
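
A minimal Python sketch illustrating this: sample from $U(\sqrt{n}\, S^{n-1})$ by normalizing Gaussian vectors and compare the first coordinate with a standard normal (the dimensions and sample sizes are illustrative):

```python
# A minimal sketch: sample X uniformly from the sphere of radius sqrt(n) in R^n by
# normalizing Gaussian vectors, then compare the first coordinate with N(0, 1).
# The dimensions and sample size are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, samples = 300, 20_000
g = rng.standard_normal((samples, n))
x = np.sqrt(n) * g / np.linalg.norm(g, axis=1, keepdims=True)   # uniform on sqrt(n) S^{n-1}

first = x[:, 0]
print(first.std())                           # ~1.0, since E[<X, e_1>^2] = 1
for t in (1.0, 2.0, 3.0):
    print(t, (np.abs(first) > t).mean(),     # empirical tail of <X, e_1>
          2 * np.exp(-t ** 2 / 2))           # a gaussian-type envelope, for comparison
```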

Maximum inequalities


Proposition. If $X_1, \ldots, X_n$ are mean-zero subgaussians, with $\|X_i\|_{\mathrm{vp}}^2 \le \sigma^2$ for each $i$, then for any $\delta > 0$, we have $\max_i X_i \le \sigma\sqrt{2\ln(n/\delta)}$ with probability $\ge 1 - \delta$.

Proof. By the Chernoff bound, $\operatorname{P}\big(X_i \ge \sigma\sqrt{2\ln(n/\delta)}\big) \le \delta/n$ for each $i$. Now apply the union bound.

Proposition. (Exercise 2.5.10 [2]) If $X_1, \ldots, X_n$ are subgaussians, with $\|X_i\|_{\psi_2} \le K$ for each $i$, then $\operatorname{E}[\max_i |X_i|] \le CK\sqrt{\ln n}$ for an absolute constant $C$. Further, the bound is sharp, since when $X_1, \ldots, X_n$ are IID samples of $N(0, 1)$ we have $\operatorname{E}[\max_i X_i] \gtrsim \sqrt{\ln n}$.[9]
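
The $\sqrt{\ln n}$ growth is easy to observe by simulation. A minimal Python sketch for i.i.d. standard normal samples (the trial counts are illustrative):

```python
# A minimal sketch: the expected maximum of n independent standard normals grows
# like sqrt(2 ln n), matching the K sqrt(ln n) bound up to a constant factor.
import numpy as np

rng = np.random.default_rng(0)
trials = 1000
for n in (10, 100, 1000, 10_000):
    maxima = rng.standard_normal((trials, n)).max(axis=1)
    print(n, maxima.mean(), np.sqrt(2 * np.log(n)))
```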

Theorem. (over a finite set)[10] If $X_1, \ldots, X_n$ are subgaussian, with $\|X_i\|_{\mathrm{vp}}^2 \le \sigma^2$ for each $i$, then
$\operatorname{E}[\max_i X_i] \le \sigma\sqrt{2\ln n}, \qquad \operatorname{E}[\max_i |X_i|] \le \sigma\sqrt{2\ln(2n)},$
$\operatorname{P}(\max_i X_i > t) \le n e^{-t^2/(2\sigma^2)}, \qquad \operatorname{P}(\max_i |X_i| > t) \le 2n e^{-t^2/(2\sigma^2)}.$

Theorem. (over a convex polytope) Fix a finite set of vectors $v_1, \ldots, v_n$. If $X$ is a random vector, such that each $\|\langle X, v_i\rangle\|_{\mathrm{vp}}^2 \le \sigma^2$, then the above 4 inequalities hold, with $\max_{v \in \operatorname{Polytope}(v_1, \ldots, v_n)} \langle X, v\rangle$ replacing $\max_i X_i$.

Here, $\operatorname{Polytope}(v_1, \ldots, v_n)$ is the convex polytope spanned by the vectors $v_1, \ldots, v_n$.

Theorem. (over a ball) If $X$ is a random vector in $\mathbb{R}^d$, such that $\|\langle X, v\rangle\|_{\mathrm{vp}}^2 \le \sigma^2$ for all $v$ on the unit sphere $S^{d-1}$, then
$\operatorname{E}\big[\max_{\|v\|_2 \le 1} \langle X, v\rangle\big] = \operatorname{E}\big[\max_{\|v\|_2 = 1} \langle X, v\rangle\big] \le 4\sigma\sqrt{d}.$
For any $\delta > 0$, with probability at least $1 - \delta$,
$\max_{\|v\|_2 \le 1} \langle X, v\rangle \le 4\sigma\sqrt{d} + 2\sigma\sqrt{2\ln(1/\delta)}.$

Inequalities


Theorem. (Theorem 2.6.1 [2]) There exists a positive constant $C$ such that given any number of independent mean-zero subgaussian random variables $X_1, \ldots, X_n$,
$\Big\|\sum_{i=1}^n X_i\Big\|_{\psi_2}^2 \le C\sum_{i=1}^n \|X_i\|_{\psi_2}^2.$

Theorem. (Hoeffding's inequality) (Theorem 2.6.3 [2]) There exists a positive constant $c$ such that given any number of independent mean-zero subgaussian random variables $X_1, \ldots, X_n$,
$\operatorname{P}\Big(\Big|\sum_{i=1}^n X_i\Big| \ge t\Big) \le 2\exp\Big(-\frac{ct^2}{\sum_{i=1}^n \|X_i\|_{\psi_2}^2}\Big) \quad \text{for all } t \ge 0.$

Theorem. (Bernstein's inequality) (Theorem 2.8.1 [2]) There exists a positive constant $c$ such that given any number of independent mean-zero subexponential random variables $X_1, \ldots, X_n$,
$\operatorname{P}\Big(\Big|\sum_{i=1}^n X_i\Big| \ge t\Big) \le 2\exp\Big(-c\min\Big(\frac{t^2}{\sum_{i=1}^n \|X_i\|_{\psi_1}^2},\ \frac{t}{\max_i \|X_i\|_{\psi_1}}\Big)\Big) \quad \text{for all } t \ge 0.$

Theorem. (Khinchine inequality) (Exercise 2.6.5 [2]) There exists a positive constant $C$ such that given any number of independent mean-zero, variance-one subgaussian random variables $X_1, \ldots, X_n$, any $p \ge 2$, and any real coefficients $a_1, \ldots, a_n$,
$\Big(\sum_{i=1}^n a_i^2\Big)^{1/2} \le \Big(\operatorname{E}\Big|\sum_{i=1}^n a_i X_i\Big|^p\Big)^{1/p} \le CK\sqrt{p}\,\Big(\sum_{i=1}^n a_i^2\Big)^{1/2}, \quad \text{where } K = \max_i \|X_i\|_{\psi_2}.$
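
For the special case of a weighted sum of independent Rademacher signs, Hoeffding's inequality holds with explicit constants, $\operatorname{P}\big(\big|\sum_i a_i\varepsilon_i\big| \ge t\big) \le 2\exp\big(-t^2/(2\|a\|_2^2)\big)$, which a short Monte Carlo experiment can compare against. A minimal Python sketch (weights, thresholds and sample size are illustrative):

```python
# A minimal sketch: Hoeffding's inequality for a weighted sum of Rademacher signs,
# P(|sum_i a_i eps_i| >= t) <= 2 exp(-t^2 / (2 ||a||_2^2)), compared with Monte Carlo
# tail frequencies.  Weights, thresholds and sample size are illustrative.
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(0.0, 1.0, size=50)           # fixed, arbitrary weights
trials = 200_000
eps = rng.choice([-1.0, 1.0], size=(trials, len(a)))
sums = eps @ a

for t in (4.0, 8.0, 12.0):
    empirical = (np.abs(sums) >= t).mean()
    bound = 2 * np.exp(-t ** 2 / (2 * np.sum(a ** 2)))
    print(t, empirical, bound)               # the empirical tail stays below the bound
```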

Hanson-Wright inequality


The Hanson-Wright inequality states that if a random vector $X$ is subgaussian in a certain sense, then any quadratic form $X^T A X$ of this vector is also subgaussian/subexponential. Further, the upper bound on the tail of $X^T A X$ is uniform.

A weak version of the following theorem was proved in (Hanson, Wright, 1971).[11] There are many extensions and variants. Much like the central limit theorem, the Hanson-Wright inequality is more a cluster of theorems with the same purpose, than a single theorem. The purpose is to take a subgaussian vector and uniformly bound its quadratic forms.

Theorem.[12][13] There exists a constant $c > 0$, such that:

Let $n$ be a positive integer. Let $X_1, \ldots, X_n$ be independent random variables, such that each satisfies $\operatorname{E}[X_i] = 0$ and $\|X_i\|_{\psi_2} \le K$. Combine them into a random vector $X = (X_1, \ldots, X_n)$. For any $n \times n$ matrix $A$, we have
$\operatorname{P}\big(|X^T A X - \operatorname{E}[X^T A X]| > t\big) \le 2\exp\Big(-c\min\Big(\frac{t^2}{K^4\|A\|_F^2},\ \frac{t}{K^2\|A\|}\Big)\Big),$
where $\|A\|_F = \big(\sum_{i,j} A_{ij}^2\big)^{1/2}$ is the Frobenius norm of the matrix, and $\|A\| = \max_{\|x\|_2 = 1}\|Ax\|_2$ is the operator norm of the matrix.

In words, the quadratic form $X^T A X$ has its tail uniformly bounded by an exponential, or a gaussian, whichever is larger.


In the statement of the theorem, the constant $c$ is an "absolute constant", meaning that it has no dependence on $n$, $K$, or the matrix $A$. It is a mathematical constant much like pi and e.
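
A minimal Python Monte Carlo sketch of the two tail regimes for Rademacher entries (the matrix, the thresholds, and the choice $K = 1/\sqrt{\ln 2}$, the Rademacher subgaussian norm, are illustrative; the experiment only illustrates the $\min\big(t^2/(K^4\|A\|_F^2),\ t/(K^2\|A\|)\big)$ shape of the rate, not the value of the absolute constant $c$):

```python
# A minimal sketch: Monte Carlo tails of a centered quadratic form X^T A X with
# Rademacher entries, compared with the rate min(t^2/(K^4 ||A||_F^2), t/(K^2 ||A||))
# from the Hanson-Wright bound.  The matrix, thresholds and the value
# K = 1/sqrt(ln 2) (the Rademacher subgaussian norm) are illustrative; the experiment
# shows only the shape of the rate, not the absolute constant c.
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)
fro = np.linalg.norm(A, "fro")
op = np.linalg.norm(A, 2)
K = 1 / np.sqrt(np.log(2))

trials = 100_000
X = rng.choice([-1.0, 1.0], size=(trials, n))
quad = np.einsum("ij,ij->i", X @ A, X) - np.trace(A)   # E[X^T A X] = tr(A) here

for t in (5.0, 10.0, 20.0):
    tail = (np.abs(quad) > t).mean()
    rate = min(t ** 2 / (K ** 4 * fro ** 2), t / (K ** 2 * op))
    print(t, tail, rate)     # -ln(tail) grows at least like c * rate for some c > 0
```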

Consequences


Theorem (subgaussian concentration).[12] There exists a constant $c > 0$, such that:

Let $n, m$ be positive integers. Let $X_1, \ldots, X_n$ be independent random variables, such that each satisfies $\operatorname{E}[X_i] = 0$, $\operatorname{E}[X_i^2] = 1$, and $\|X_i\|_{\psi_2} \le K$. Combine them into a random vector $X = (X_1, \ldots, X_n)$. For any $m \times n$ matrix $A$, we have
$\operatorname{P}\big(\big|\,\|AX\|_2 - \|A\|_F\,\big| > t\big) \le 2\exp\Big(-\frac{ct^2}{K^4\|A\|^2}\Big).$
In words, the random vector $AX$ is concentrated on a spherical shell of radius $\|A\|_F$, in the sense that $\|AX\|_2 - \|A\|_F$ is subgaussian, with subgaussian norm $\lesssim K^2\|A\|$.
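
A minimal Python sketch of this concentration for Rademacher entries (the matrix and sample sizes are illustrative): the empirical mean of $\|AX\|_2$ sits near $\|A\|_F$, and its fluctuations are on the scale of $\|A\|$.

```python
# A minimal sketch: for Rademacher X and a fixed m x n matrix A, ||A X||_2 concentrates
# around ||A||_F, with fluctuations on the scale of the operator norm ||A||.
# Matrix and sample sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
m, n, trials = 200, 400, 20_000
A = rng.standard_normal((m, n)) / np.sqrt(n)
fro = np.linalg.norm(A, "fro")
op = np.linalg.norm(A, 2)

X = rng.choice([-1.0, 1.0], size=(trials, n))
norms = np.linalg.norm(X @ A.T, axis=1)      # ||A x||_2 for each sample x

print(fro, norms.mean())                     # the empirical mean is close to ||A||_F
print(op, norms.std())                       # the spread is of the order of ||A||
```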


Notes

  1. ^ Wainwright MJ. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge: Cambridge University Press; 2019. doi:10.1017/9781108627771, ISBN 9781108627771.
  2. ^ a b c d e f g Vershynin, R. (2018). High-dimensional probability: An introduction with applications in data science. Cambridge: Cambridge University Press.
  3. ^ Kahane, J. (1960). "Propriétés locales des fonctions à séries de Fourier aléatoires". Studia Mathematica. 19: 1–25. doi:10.4064/sm-19-1-1-25.
  4. ^ Buldygin, V. V.; Kozachenko, Yu. V. (1980). "Sub-Gaussian random variables". Ukrainian Mathematical Journal. 32 (6): 483–489. doi:10.1007/BF01087176.
  5. ^ a b Bobkov, S. G.; Chistyakov, G. P.; Götze, F. (2023-08-03). "Strictly subgaussian probability distributions". arXiv:2308.01749 [math.PR].
  6. ^ Marchal, Olivier; Arbel, Julyan (2017). "On the sub-Gaussianity of the Beta and Dirichlet distributions". Electronic Communications in Probability. 22. arXiv:1705.00048. doi:10.1214/17-ECP92.
  7. ^ Arbel, Julyan; Marchal, Olivier; Nguyen, Hien D. (2020). "On strict sub-Gaussianity, optimal proxy variance and symmetry for bounded random variables". Esaim: Probability and Statistics. 24: 39–55. arXiv:1901.09188. doi:10.1051/ps/2019018.
  8. ^ Barreto, Mathias; Marchal, Olivier; Arbel, Julyan (2024). "Optimal sub-Gaussian variance proxy for truncated Gaussian and exponential random variables". arXiv:2403.08628 [math.ST].
  9. ^ Kamath, Gautam. "Bounds on the expectation of the maximum of samples from a gaussian." (2015)
  10. ^ "MIT 18.S997 | Spring 2015 | High-Dimensional Statistics, Chapter 1. Sub-Gaussian Random Variables" (PDF). MIT OpenCourseWare. Retrieved 2024-04-03.
  11. ^ Hanson, D. L.; Wright, F. T. (1971). "A Bound on Tail Probabilities for Quadratic Forms in Independent Random Variables". The Annals of Mathematical Statistics. 42 (3): 1079–1083. doi:10.1214/aoms/1177693335. ISSN 0003-4851. JSTOR 2240253.
  12. ^ a b Rudelson, Mark; Vershynin, Roman (January 2013). "Hanson-Wright inequality and sub-gaussian concentration". Electronic Communications in Probability. 18 (none): 1–9. arXiv:1306.2872. doi:10.1214/ECP.v18-2865. ISSN 1083-589X.
  13. ^ Vershynin, Roman (2018). "6. Quadratic Forms, Symmetrization, and Contraction". High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge: Cambridge University Press. pp. 127–146. doi:10.1017/9781108231596.009. ISBN 978-1-108-41519-4.
