Tuesday, June 6, 2017

Exploring Maximum Likelihood Estimation (MLE)

Maximizing the Likelihood

Maximum likelihood estimation chooses, as the estimate of a parameter, the value that maximizes the likelihood (probability) of the observed sample. The method rests on the calculus of maxima and minima: the maximum likelihood estimator (MLE) is found by taking the derivative of the log of the likelihood function with respect to the parameter, setting it equal to zero, and solving. This yields an estimator that is a function of the sample. To confirm that the solution is a maximum rather than a minimum, the second derivative of the log-likelihood evaluated at the MLE should be negative. The likelihood of the sample depends on a population parameter; the population, and hence the parameter, is unknown, but the sample is known, so we construct an estimator from the sample that maximizes the likelihood of observing that sample. For example,
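As a quick sketch of the procedure described above, the snippet below uses a small, made-up Bernoulli sample (the data values are assumed purely for illustration). The MLE of the success probability p is the sample proportion; we then check numerically that the first derivative of the log-likelihood is (approximately) zero at the MLE and that the second derivative is negative, i.e., that we are at a maximum:

```python
import numpy as np

# Hypothetical Bernoulli sample: 7 successes in 10 trials (illustrative data)
x = np.array([1, 1, 0, 1, 1, 0, 1, 0, 1, 1])
p_hat = x.mean()  # MLE of p is the sample proportion: 7/10 = 0.7

def log_likelihood(p, x):
    # log L(p) = sum over the sample of x*log(p) + (1 - x)*log(1 - p)
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# Numerical first and second derivatives at the MLE (central differences)
h = 1e-5
d1 = (log_likelihood(p_hat + h, x) - log_likelihood(p_hat - h, x)) / (2 * h)
d2 = (log_likelihood(p_hat + h, x) - 2 * log_likelihood(p_hat, x)
      + log_likelihood(p_hat - h, x)) / h**2

print(p_hat)     # 0.7
print(abs(d1) < 1e-6)  # True: the derivative vanishes at the MLE
print(d2 < 0)          # True: second derivative negative, so a maximum
```

The same check works for any one-parameter likelihood: solve for the point where the score (first derivative of the log-likelihood) is zero, then verify concavity there.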
let Xi ~ B(n, P), that is, each Xi follows a Binomial distribution with parameters n and P, for i = 1, 2, ..., m.
Then the MLE of P is the sum of the Xi over all m observations divided by nm:

P_hat = (X1 + X2 + ... + Xm) / (nm).

This will be illustrated with a worked example in the next blog.
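The binomial MLE above can be sketched in a few lines. The values of n, m, and the true P below are assumed for illustration: we simulate m binomial observations, compute the closed-form MLE sum(Xi)/(nm), and confirm it matches a direct grid search over the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (values assumed for this sketch)
n, m, p_true = 20, 500, 0.35           # each Xi ~ Binomial(n, p_true), i = 1..m
x = rng.binomial(n, p_true, size=m)    # the observed sample X1, ..., Xm

# Closed-form MLE of P: sum of the Xi divided by n*m
p_hat = x.sum() / (n * m)

# Cross-check: maximize the log-likelihood (up to a constant) over a fine grid
grid = np.linspace(0.01, 0.99, 9801)
ll = x.sum() * np.log(grid) + (n * m - x.sum()) * np.log(1 - grid)
p_grid = grid[np.argmax(ll)]

print(p_hat)   # close to 0.35 for a sample this large
print(abs(p_grid - p_hat) < 2e-4)  # grid maximizer agrees with the formula
```

The term independent of P (the product of binomial coefficients) is dropped from the log-likelihood, since it does not affect where the maximum occurs.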
