Likelihood Inference with Missing Data
Daowen Zhang
Likelihood inference plays a central role in the practice of
statistics. In the presence of missing data, likelihood inference is
often hampered by intractable integration in the evaluation of the
likelihood function for the observed data. The EM
(expectation-maximization) algorithm (Dempster, Laird and Rubin, 1977)
was proposed to obtain the maximum likelihood estimates of the model
parameters in a very attractive way: it updates the parameter
estimates in two usually much simpler steps, the E (expectation) step
and the M (maximization) step, and it guarantees that the
observed-data likelihood never decreases. In this
seminar, I will describe likelihood inference when data are missing,
and I will go through the original EM algorithm and present some of
its variations: one-step EM, ECM, and SEM. One-step EM uses a single
Newton-Raphson iteration in the M step, which is useful when the full
maximization in the M step is difficult. ECM replaces the M step with
a sequence of conditional maximization (CM) steps. SEM (supplemented
EM) uses the EM iterates themselves to obtain the asymptotic variance
of the MLEs. Some examples will be used to illustrate the EM
algorithm, and a list of references will be provided.
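To give a flavor of the kind of example the talk refers to, here is a
minimal sketch (not taken from the abstract; all function and variable
names are my own) of EM for a two-component Gaussian mixture with unit
variances, where the unobserved component labels play the role of the
missing data:

```python
import numpy as np

def em_two_gaussians(x, n_iter=200, tol=1e-8):
    """EM for a two-component Gaussian mixture with unit variances.

    The unobserved component labels are the 'missing data': the E step
    computes their posterior probabilities given the current parameters,
    and the M step updates the mixing weight and the two means in
    closed form.
    """
    x = np.asarray(x, dtype=float)
    # crude starting values for the mixing weight and the two means
    pi, mu1, mu2 = 0.5, x.min(), x.max()
    loglik_old = -np.inf
    for _ in range(n_iter):
        # E step: posterior probability each point came from component 1
        d1 = pi * np.exp(-0.5 * (x - mu1) ** 2)
        d2 = (1 - pi) * np.exp(-0.5 * (x - mu2) ** 2)
        w = d1 / (d1 + d2)
        # M step: weighted closed-form updates of the parameters
        pi = w.mean()
        mu1 = np.sum(w * x) / np.sum(w)
        mu2 = np.sum((1 - w) * x) / np.sum(1 - w)
        # observed-data log-likelihood at the previous parameter values;
        # EM guarantees this sequence never decreases across iterations
        loglik = np.sum(np.log(d1 + d2)) - 0.5 * len(x) * np.log(2 * np.pi)
        if loglik - loglik_old < tol:
            break
        loglik_old = loglik
    return pi, mu1, mu2

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])
pi, mu1, mu2 = em_two_gaussians(data)
```

Each iteration here is far simpler than maximizing the observed-data
likelihood directly, which involves a sum inside the logarithm; this is
the attraction of EM the abstract alludes to.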