Gamma Function and Gamma Distribution

The gamma function crops up almost everywhere in mathematics, with applications in many branches including probability and statistics. This post gives a small demonstration of its importance through the gamma distribution, a probability distribution that arises naturally from the gamma function.

A short classic book on the gamma function was written by Emil Artin (a copy can be found here).

The gamma function dates back to Euler (1707-1783), who, in a letter to Christian Goldbach (1690-1764) dated January 8, 1730, discussed the following integral.

    \displaystyle \int_0^1 \ \biggl[\text{ln} \biggl(\frac{1}{t} \biggr) \biggr]^{x-1} \ dt

The integral converges for all positive real numbers x. Thus the integral can be regarded as a function defined on the positive real numbers. In 1809, Adrien-Marie Legendre (1752-1833) named the function Gamma and gave it the symbol \Gamma(\cdot). Thus

    \displaystyle \Gamma(x)=\int_0^1 \ \biggl[\text{ln} \biggl(\frac{1}{t} \biggr) \biggr]^{x-1} \ dt=\int_0^1 \ (- \text{ln} \ t)^{x-1} \ dt \ \ \ \ \ \ x>0

The simple substitution t=e^{-s} (equivalently s=-\text{ln} \ t) results in the following useful alternative formulation.

    \displaystyle \Gamma(x)=\int_0^\infty t^{x-1} \ e^{-t} \ dt
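As a quick numerical sanity check (not part of the original post), this integral form can be evaluated with SciPy's quadrature routine and compared against the built-in `math.gamma`:

```python
import math
from scipy.integrate import quad

def gamma_integral(x):
    """Evaluate Gamma(x) = integral of t^(x-1) e^(-t) over (0, infinity) numerically."""
    value, _err = quad(lambda t: t ** (x - 1) * math.exp(-t), 0, math.inf)
    return value

# The numerical integral agrees with the built-in gamma function,
# including at x = 0.5 where Gamma(1/2) = sqrt(pi).
for x in (0.5, 1.0, 2.5, 5.0):
    assert abs(gamma_integral(x) - math.gamma(x)) < 1e-6
```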

Clearly, \Gamma(1)=1. Integration by parts yields the following functional relationship.

    \displaystyle \Gamma(x+1)=x \ \Gamma(x)
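To see why, integrate by parts with u=t^x and dv=e^{-t} \ dt; the boundary term vanishes at both limits since t^x e^{-t} \rightarrow 0 as t \rightarrow 0^+ and as t \rightarrow \infty.

    \displaystyle \Gamma(x+1)=\int_0^\infty t^{x} \ e^{-t} \ dt=\biggl[-t^{x} \ e^{-t} \biggr]_0^\infty+x \int_0^\infty t^{x-1} \ e^{-t} \ dt=x \ \Gamma(x)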

It follows from this functional relationship that \Gamma(n)=(n-1)! for all positive integers n. Thus the gamma function extends the factorial function. In fact, this functional relationship is used to extend the gamma function beyond the domain x>0.
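The factorial identity is easy to check in Python with the standard library alone (this check is an illustration, not part of the original post):

```python
import math

# Gamma(n) = (n-1)! for positive integers n.
for n in range(1, 10):
    assert math.isclose(math.gamma(n), math.factorial(n - 1), rel_tol=1e-12)
```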

Gamma Distribution

One important consequence of the gamma function is that a probability distribution arises naturally from it. Dividing the alternative integral form of the gamma function (the one obtained by substitution) by the value \Gamma(\alpha) yields an integral equal to 1.

    \displaystyle \int_0^\infty \frac{1}{\Gamma(\alpha)} \ t^{\alpha-1} \ e^{-t} \ dt=1

Then the integrand can be regarded as a probability density function (PDF).

    \displaystyle f(x)=\frac{1}{\Gamma(\alpha)} \ x^{\alpha-1} \ e^{-x} \ \ \ \ x>0

Replacing the upper limit of the integral with a variable x produces its cumulative distribution function (CDF).

    \displaystyle F(x)=\int_0^x \frac{1}{\Gamma(\alpha)} \ t^{\alpha-1} \ e^{-t} \ dt

The mathematical properties of the gamma function are discussed here in a companion blog. The gamma distribution is defined in this blog post in the same companion blog.

The PDF f(x) and the CDF F(x) shown above have only one parameter \alpha, which is a positive constant that determines the shape of the distribution (called the shape parameter). Another parameter \theta, called the scale parameter, can be added to make this a two-parameter distribution, hence making it more versatile as a probability model.

    \displaystyle f(x)=\frac{1}{\Gamma(\alpha)} \ \frac{1}{\theta^\alpha} \ x^{\alpha-1} \ e^{-x/\theta} \ \ \ \ x>0

    \displaystyle F(x)=\int_0^x \frac{1}{\Gamma(\alpha)} \ \frac{1}{\theta^\alpha} \ t^{\alpha-1} \ e^{-t/\theta} \ dt \ \ \ \ x>0
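As an illustration (the parameter values below are arbitrary examples, not from the original post), the two-parameter density above matches SciPy's gamma distribution when \theta is passed as the `scale` argument:

```python
import math
from scipy.stats import gamma

alpha, theta = 2.5, 3.0  # example shape and scale parameters

def pdf(x):
    """The two-parameter gamma density written out from the formula above."""
    return x ** (alpha - 1) * math.exp(-x / theta) / (math.gamma(alpha) * theta ** alpha)

dist = gamma(a=alpha, scale=theta)

# The hand-written density agrees with scipy.stats.gamma.
for x in (0.5, 1.0, 4.0, 10.0):
    assert math.isclose(pdf(x), dist.pdf(x), rel_tol=1e-9)
```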

The gamma distribution is useful in actuarial modeling, e.g. modeling insurance losses. Due to its mathematical properties, there is considerable flexibility in the modeling process. For example, since it has two parameters (a scale parameter and a shape parameter), the gamma distribution is capable of representing a variety of distribution shapes and dispersion patterns.

The exponential distribution is a special case of the gamma distribution and it arises naturally as the waiting time between two events in a Poisson process (see here and here).
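The special case \alpha=1 can be checked directly: the two-parameter gamma density reduces to (1/\theta) \ e^{-x/\theta}, the exponential density. A quick SciPy comparison (the scale value is an arbitrary example):

```python
import math
from scipy.stats import expon, gamma

theta = 2.0  # example scale parameter

# A gamma distribution with shape 1 is an exponential distribution.
for x in (0.1, 1.0, 5.0):
    assert math.isclose(gamma(a=1, scale=theta).pdf(x),
                        expon(scale=theta).pdf(x), rel_tol=1e-9)
```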

The chi-squared distribution is also a subfamily of the gamma family of distributions. Mathematically speaking, a chi-squared distribution is a gamma distribution with shape parameter k/2 and scale parameter 2, with k being a positive integer (called the degrees of freedom). Though the definition is simple mathematically, the chi-squared family plays an outsize role in statistics.
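This identification can also be verified numerically: the chi-squared density with k degrees of freedom coincides with the gamma density with shape k/2 and scale 2 (k = 5 below is an arbitrary example):

```python
import math
from scipy.stats import chi2, gamma

k = 5  # example degrees of freedom

# Chi-squared with k degrees of freedom = gamma with shape k/2, scale 2.
for x in (0.5, 2.0, 7.0):
    assert math.isclose(chi2(k).pdf(x), gamma(a=k / 2, scale=2).pdf(x), rel_tol=1e-9)
```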

This blog post discusses the chi-squared distribution from a mathematical standpoint. The chi-squared distribution also plays an important role in inferential statistics for the population mean and population variance of normal populations (discussed here).

The chi-squared distribution also figures prominently in inference on categorical data. The chi-squared test, based on the chi-squared distribution, is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in one or more categories. The test is based on the chi-squared statistic and comes in three different forms – the goodness-of-fit test, the test of homogeneity and the test of independence. Further discussion of the chi-squared test is found here.

Transformed Gamma Distribution

Another set of distributions derived from the gamma family is obtained by raising a gamma distribution to a power. Raising a gamma distribution to a positive power results in a transformed gamma distribution. Raising a gamma distribution to the power -1 results in an inverse gamma distribution. Raising a gamma distribution to a negative power other than -1 results in an inverse transformed gamma distribution. These derived distributions greatly expand the tool kit for actuarial modeling. These distributions are discussed here.
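For example, if X has a gamma distribution with shape \alpha and scale 1, then 1/X has an inverse gamma distribution with shape \alpha; by the change-of-variable formula, the densities are related by f_{1/X}(y) = f_X(1/y)/y^2. A sketch of this check with SciPy (the shape value is an arbitrary example):

```python
import math
from scipy.stats import gamma, invgamma

alpha = 3.0  # example shape parameter

# If X ~ gamma(alpha, scale=1), then 1/X ~ invgamma(alpha, scale=1):
# the invgamma density equals the gamma density at 1/y divided by y^2.
for y in (0.2, 0.5, 1.0, 2.0):
    lhs = invgamma(a=alpha).pdf(y)
    rhs = gamma(a=alpha).pdf(1 / y) / y ** 2
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
```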


\copyright 2017 – Dan Ma