Half-normal distribution

From Wikipedia, the free encyclopedia
[Figure: probability density function of the half-normal distribution, $\sigma = 1$]

[Figure: cumulative distribution function of the half-normal distribution, $\sigma = 1$]

Parameters: $\sigma > 0$ (scale)
Support: $x \in [0, \infty)$
PDF: $f(x;\sigma) = \frac{\sqrt{2}}{\sigma\sqrt{\pi}} \exp\left(-\frac{x^2}{2\sigma^2}\right)$, $x > 0$
CDF: $F(x;\sigma) = \operatorname{erf}\left(\frac{x}{\sigma\sqrt{2}}\right)$
Quantile: $Q(F;\sigma) = \sigma\sqrt{2}\,\operatorname{erf}^{-1}(F)$
Mean: $\sigma\sqrt{2/\pi} \approx 0.797885\,\sigma$
Median: $\sigma\sqrt{2}\,\operatorname{erf}^{-1}(1/2) \approx 0.674490\,\sigma$
Mode: $0$
Variance: $\sigma^2\left(1 - \frac{2}{\pi}\right)$
Skewness: $\frac{\sqrt{2}(4-\pi)}{(\pi-2)^{3/2}} \approx 0.9952717$
Excess kurtosis: $\frac{8(\pi-3)}{(\pi-2)^2} \approx 0.869177$
Entropy: $\frac{1}{2}\log_2\left(2\pi e\sigma^2\right) - 1$ (bits)
MGF: $\exp\left(\frac{\sigma^2 t^2}{2}\right)\operatorname{erfc}\left(-\frac{\sigma t}{\sqrt{2}}\right)$
CF: $w\left(\frac{\sigma t}{\sqrt{2}}\right)$, where $w(z)$ is the Faddeeva function

In probability theory and statistics, the half-normal distribution is a special case of the folded normal distribution.

Let X follow an ordinary normal distribution, N(0,σ2). Then, Y=|X| follows a half-normal distribution. Thus, the half-normal distribution is a fold at the mean of an ordinary normal distribution with mean zero.
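This construction can be checked numerically. The following sketch (standard-library Python only; the function name is ours, not from any particular package) draws samples as $|X|$ with $X \sim N(0,\sigma^2)$ and compares the sample mean with the theoretical mean $\sigma\sqrt{2/\pi}$ given below:

```python
import math
import random

def sample_half_normal(sigma, n, seed=0):
    """Draw n half-normal samples as |X| with X ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    return [abs(rng.gauss(0.0, sigma)) for _ in range(n)]

sigma = 2.0
ys = sample_half_normal(sigma, 100_000)
empirical_mean = sum(ys) / len(ys)
theoretical_mean = sigma * math.sqrt(2.0 / math.pi)  # E[Y] = sigma * sqrt(2/pi)
print(empirical_mean, theoretical_mean)
```

SciPy users can obtain the same distribution directly via `scipy.stats.halfnorm(scale=sigma)`.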

Properties


Using the σ parametrization of the normal distribution, the probability density function (PDF) of the half-normal is given by

f_Y(y;\sigma) = \frac{\sqrt{2}}{\sigma\sqrt{\pi}} \exp\left(-\frac{y^2}{2\sigma^2}\right), \quad y \ge 0,

where $E[Y] = \mu = \sigma\sqrt{2/\pi}$.

Alternatively, using a scaled precision (inverse of the variance) parametrization (to avoid issues if $\sigma$ is near zero), obtained by setting $\theta = \frac{\sqrt{\pi}}{\sigma\sqrt{2}}$, the probability density function is given by

f_Y(y;\theta) = \frac{2\theta}{\pi} \exp\left(-\frac{y^2\theta^2}{\pi}\right), \quad y \ge 0,

where $E[Y] = \mu = \frac{1}{\theta}$.
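The two parametrizations describe the same density, which is easy to verify numerically. A minimal sketch (function names are ours), evaluating both forms on a few points:

```python
import math

def pdf_sigma(y, sigma):
    """Half-normal PDF in the sigma (scale) parametrization."""
    return math.sqrt(2.0) / (sigma * math.sqrt(math.pi)) * math.exp(-y**2 / (2.0 * sigma**2))

def pdf_theta(y, theta):
    """Half-normal PDF in the theta (scaled-precision) parametrization."""
    return 2.0 * theta / math.pi * math.exp(-(y**2) * theta**2 / math.pi)

sigma = 1.7
theta = math.sqrt(math.pi) / (sigma * math.sqrt(2.0))  # theta = sqrt(pi) / (sigma * sqrt(2))
for y in [0.0, 0.5, 1.0, 2.5]:
    # the two parametrizations agree to machine precision
    assert abs(pdf_sigma(y, sigma) - pdf_theta(y, theta)) < 1e-12
```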

The cumulative distribution function (CDF) is given by

F_Y(y;\sigma) = \int_0^y \frac{1}{\sigma}\sqrt{\frac{2}{\pi}} \exp\left(-\frac{x^2}{2\sigma^2}\right)\,dx.

Using the change of variables $z = \frac{x}{\sqrt{2}\sigma}$, the CDF can be written as

F_Y(y;\sigma) = \frac{2}{\sqrt{\pi}} \int_0^{y/(\sqrt{2}\sigma)} \exp\left(-z^2\right)\,dz = \operatorname{erf}\left(\frac{y}{\sqrt{2}\sigma}\right),

where erf is the error function, a standard function in many mathematical software packages.
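Python exposes the error function as `math.erf`, so the closed form can be cross-checked against direct numerical integration of the PDF. A sketch (function names are ours):

```python
import math

def half_normal_cdf(y, sigma):
    """F(y; sigma) = erf(y / (sigma * sqrt(2)))."""
    return math.erf(y / (sigma * math.sqrt(2.0)))

def cdf_by_quadrature(y, sigma, steps=200_000):
    """Midpoint-rule integration of the half-normal PDF, as a cross-check."""
    h = y / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += math.sqrt(2.0) / (sigma * math.sqrt(math.pi)) * math.exp(-x**2 / (2.0 * sigma**2))
    return total * h

sigma = 1.3
for y in [0.1, 1.0, 3.0]:
    assert abs(half_normal_cdf(y, sigma) - cdf_by_quadrature(y, sigma)) < 1e-6
```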

The quantile function (or inverse CDF) is written:

Q(F;\sigma) = \sigma\sqrt{2}\,\operatorname{erf}^{-1}(F),

where $0 \le F \le 1$ and $\operatorname{erf}^{-1}$ is the inverse error function.
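The Python standard library has no inverse error function, but because $Y = |X|$, the identity $\sigma\sqrt{2}\,\operatorname{erf}^{-1}(F) = \sigma\,\Phi^{-1}\!\left(\frac{1+F}{2}\right)$ lets one compute the quantile with `statistics.NormalDist.inv_cdf`. A sketch (the function name is ours):

```python
import math
from statistics import NormalDist

def half_normal_quantile(F, sigma):
    """Q(F; sigma) = sigma*sqrt(2)*erfinv(F), computed as sigma * Phi^{-1}((1+F)/2)."""
    return sigma * NormalDist().inv_cdf((1.0 + F) / 2.0)

sigma = 1.0
# The median sigma*sqrt(2)*erfinv(1/2) is approximately 0.674490*sigma
print(half_normal_quantile(0.5, sigma))

# Round trip through the CDF
F = 0.9
y = half_normal_quantile(F, sigma)
assert abs(math.erf(y / (sigma * math.sqrt(2.0))) - F) < 1e-9
```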

The expectation is then given by

E[Y] = \sigma\sqrt{2/\pi}.

The variance is given by

\operatorname{var}(Y) = \sigma^2\left(1 - \frac{2}{\pi}\right).

Since this is proportional to the variance σ2 of X, σ can be seen as a scale parameter of the new distribution.

The differential entropy of the half-normal distribution is exactly one bit less than the differential entropy of a zero-mean normal distribution with the same second moment about 0. This can be understood intuitively, since the magnitude operator reduces information by one bit (if the probability distribution at its input is even). Alternatively, since a half-normal distribution is always positive, the one bit it would take to record whether a standard normal random variable were positive (say, a 1) or negative (say, a 0) is no longer necessary. Thus,

h(Y) = \frac{1}{2}\log_2\left(\frac{\pi e \sigma^2}{2}\right) = \frac{1}{2}\log_2\left(2\pi e \sigma^2\right) - 1.
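The closed form can be verified by integrating $-f \log_2 f$ numerically. A sketch using only the standard library (the function name is ours):

```python
import math

def entropy_bits(sigma, steps=400_000):
    """Numerically integrate -f*log2(f) for the half-normal PDF (midpoint rule)."""
    upper = 12.0 * sigma          # the tail beyond 12 sigma is negligible
    h = upper / steps
    total = 0.0
    for i in range(steps):
        y = (i + 0.5) * h
        f = math.sqrt(2.0) / (sigma * math.sqrt(math.pi)) * math.exp(-y**2 / (2.0 * sigma**2))
        if f > 0.0:
            total -= f * math.log2(f) * h
    return total

sigma = 1.5
closed_form = 0.5 * math.log2(2.0 * math.pi * math.e * sigma**2) - 1.0
assert abs(entropy_bits(sigma) - closed_form) < 1e-6
```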

Applications


The half-normal distribution is commonly utilized as a prior probability distribution for variance parameters in Bayesian inference applications.[1][2]
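As a toy illustration of this use (not taken from the cited references; names and data are ours), the following sketch places a half-normal prior on the scale of a zero-mean normal likelihood and approximates the posterior mean of the scale on a grid, using only the standard library:

```python
import math

def half_normal_logpdf(s, tau):
    """Log-density of a half-normal prior with scale tau, evaluated at s > 0."""
    return math.log(math.sqrt(2.0) / (tau * math.sqrt(math.pi))) - s**2 / (2.0 * tau**2)

def normal_loglik(data, s):
    """Log-likelihood of zero-mean normal data with scale s."""
    n = len(data)
    return -n * math.log(s * math.sqrt(2.0 * math.pi)) - sum(x * x for x in data) / (2.0 * s**2)

data = [0.9, -1.4, 0.3, 2.1, -0.6]          # toy observations
grid = [0.05 * k for k in range(1, 200)]     # candidate scale values
logpost = [half_normal_logpdf(s, 2.0) + normal_loglik(data, s) for s in grid]
m = max(logpost)
w = [math.exp(lp - m) for lp in logpost]     # unnormalized posterior weights
post_mean = sum(s * wi for s, wi in zip(grid, w)) / sum(w)
print(post_mean)
```

Probabilistic-programming libraries provide this prior directly, e.g. `pm.HalfNormal` in PyMC.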

Parameter estimation


Given numbers {xi}i=1n drawn from a half-normal distribution, the unknown parameter σ of that distribution can be estimated by the method of maximum likelihood, giving

\hat{\sigma}_\text{mle} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}.

The bias is equal to

b \equiv E\left[\left(\hat{\sigma}_\text{mle} - \sigma\right)\right] = -\frac{\sigma}{4n},

which yields the bias-corrected maximum likelihood estimator

\hat{\sigma}^{*}_\text{mle} = \hat{\sigma}_\text{mle} - \hat{b}.
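The estimator and its bias correction can be sketched as follows (standard-library Python; function names are ours). Since $\hat{b} = -\hat{\sigma}_\text{mle}/(4n)$, subtracting it adds a small positive correction:

```python
import math
import random

def sigma_mle(xs):
    """Maximum-likelihood estimate: sqrt of the mean of squares."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def sigma_mle_corrected(xs):
    """Bias-corrected estimate: sigma_hat - b_hat, with b_hat = -sigma_hat/(4n)."""
    n = len(xs)
    s = sigma_mle(xs)
    return s + s / (4.0 * n)

rng = random.Random(42)
true_sigma = 3.0
xs = [abs(rng.gauss(0.0, true_sigma)) for _ in range(10_000)]
print(sigma_mle(xs), sigma_mle_corrected(xs))
```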



Further reading

  • "Half-Normal Distribution" at MathWorld (note that MathWorld uses the parameter $\theta = \frac{1}{\sigma}\sqrt{\frac{\pi}{2}}$).