Problem 5: Unbiased and consistent estimators


What does it mean for an estimator to be unbiased? What about consistent? Give examples of an unbiased but not consistent estimator, as well as a biased but consistent estimator.

If you like my content, consider following my LinkedIn page to stay updated.

Concise answer:

An estimator is unbiased if its expected value equals the true value of the population parameter. It is consistent if it converges in probability to the true value of the parameter as we gather more samples.

Long answer:

When we have a population $S$ with a parameter $\theta$ we want to estimate, we propose estimators $\hat\theta$. These estimators are random variables and, as such, have a distribution. If the expected value of the estimator equals the true value of the parameter, that is, $\mathbb{E}(\hat\theta)=\theta$, we say the estimator is unbiased; otherwise we say it is biased. Independently of whether its expected value equals the true parameter, the estimator may still converge in probability to the true parameter as the sample size grows, meaning $\mathbb{P}(|\hat\theta - \theta| > \epsilon) \to 0$ for every $\epsilon > 0$. When this is the case, we say the estimator is consistent.
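
To make the first definition concrete, here is a minimal Monte Carlo sketch (the normal distribution, the parameter values, and the helper `empirical_bias` are illustrative assumptions, not part of the problem): it approximates the bias of an estimator by averaging it over many simulated samples of size $n$.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_bias(estimator, n, mu=2.0, sigma=1.0, reps=10_000):
    """Monte Carlo approximation of E(mu_hat) - mu, using N(mu, sigma^2) samples."""
    samples = rng.normal(mu, sigma, size=(reps, n))  # reps independent samples of size n
    return estimator(samples).mean() - mu

# The sample mean is unbiased: its empirical bias hovers near zero for any n.
print(empirical_bias(lambda x: x.mean(axis=1), n=50))
```

Note that bias is a statement about the average of the estimator at a fixed $n$, while consistency is a statement about its concentration as $n$ grows; the two examples below probe these properties separately.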

Examples:

An unbiased estimator which is not consistent: suppose $X_1,\dots,X_n$ are i.i.d. samples with mean $\mu$ and consider the estimator of $\mu$ given by $\hat\mu = X_1$. Then $\mathbb{E}(\hat\mu) = \mathbb{E}(X_1) = \mu$, so the estimator is unbiased. On the other hand, $\hat\mu$ is not consistent: as long as the distribution of $X_1$ is not degenerate, there is some $\epsilon > 0$ for which

\[\mathbb{P}(|X_1 - \mu| > \epsilon) > 0\]

and this probability does not depend on $n$, so it cannot converge to zero.
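
As a quick numerical sanity check, here is a sketch (assuming $X_1 \sim N(\mu, 1)$ and $\epsilon = 0.5$, both arbitrary choices for illustration): since $\hat\mu = X_1$ ignores every observation but the first, the sample size never enters, and the miss probability stays fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, eps, reps = 2.0, 0.5, 100_000

# hat_mu = X_1 uses only the first observation, so n plays no role at all:
x1 = rng.normal(mu, 1.0, size=reps)    # 100k independent draws of X_1 ~ N(mu, 1)
print(np.mean(np.abs(x1 - mu) > eps))  # ~0.617 = 2(1 - Phi(0.5)), whatever n is
```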

A biased estimator which is consistent: suppose $X_1,\dots,X_n$ are i.i.d. samples with mean $\mu$ and variance $\sigma^2$, and consider the estimator of $\mu$ given by

\[\hat\mu = \dfrac{1}{n} + \dfrac{1}{n}\sum_{i=1}^n X_i .\]

Then $\mathbb{E}(\hat\mu)=\mu + \frac{1}{n}\neq\mu$ for all $n$, so the estimator is biased. To prove consistency, note that by Markov's inequality

\[\mathbb{P}\left\{\left( \dfrac{1}{n} + \dfrac{1}{n}\sum_{i=1}^n X_i - \mu\right)^2 > \epsilon^2\right\}\leq \dfrac{\mathbb{E}( \frac{1}{n} + \frac{1}{n}\sum_{i=1}^n X_i - \mu)^2}{\epsilon^2},\]

so it suffices to show that the expectation of the square on the right-hand side tends to zero. Expanding the square gives

\[\mathbb{E}\left( \dfrac{1}{n} + \dfrac{1}{n}\sum_{i=1}^n X_i - \mu\right)^2 = \dfrac{1}{n^2}+\dfrac{2}{n}\mathbb{E} \left(\dfrac{1}{n}\sum_{i=1}^n X_i - \mu\right) + \mathbb{E}\left(\dfrac{1}{n}\sum_{i=1}^n X_i - \mu \right)^2.\]

The first term clearly converges to zero, while the second vanishes because $\mathbb{E}(X_i)=\mu$ for every $i$, so $\mathbb{E}\left(\frac{1}{n}\sum_{i=1}^n X_i - \mu\right)=0$. For the last term, recognizing it as a variance (subtracting the constant $\mu$ does not change the variance), we get

\[\mathbb{E}\left(\dfrac{1}{n}\sum_{i=1}^n X_i - \mu \right)^2= \operatorname{Var}\left(\dfrac{1}{n}\sum_{i=1}^n X_i - \mu \right)=\operatorname{Var}\left(\dfrac{1}{n}\sum_{i=1}^n X_i \right) =\dfrac{\sigma^2}{n},\]

so all three terms vanish as $n\to\infty$, and consistency follows.
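
A short simulation illustrates both facts at once (the normal distribution and the particular values of $\mu$, $\sigma$, and $\epsilon$ are assumptions made for illustration): the empirical bias tracks $1/n$, while the probability of missing $\mu$ by more than $\epsilon$ drops to zero as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, eps, reps = 2.0, 1.0, 0.1, 1_000

for n in (10, 100, 1_000, 10_000):
    samples = rng.normal(mu, sigma, size=(reps, n))
    mu_hat = 1.0 / n + samples.mean(axis=1)    # the biased but consistent estimator
    bias = mu_hat.mean() - mu                  # should be close to 1/n for every n
    miss = np.mean(np.abs(mu_hat - mu) > eps)  # should shrink toward zero with n
    print(f"n={n:>6}  bias~{bias:+.4f}  P(|error|>{eps})~{miss:.3f}")
```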

