## General algorithms based on least squares calculations for maximum likelihood estimation in multiparameter models : a thesis presented in partial fulfilment of the requirements for the degree of Ph.D. in Statistics at Massey University


##### Date

1985

##### Publisher

Massey University

##### Abstract

This thesis develops algorithms for maximum likelihood estimation that can be implemented as a sequence of weighted least squares computations, and examines their properties. Standard least squares algorithms are first described and compared in execution time, storage requirements and accuracy. The Givens QR algorithm uses less storage than other algorithms of comparable accuracy and, in good implementations, is virtually as fast when there are several explanatory variables. A version suitable for constrained least squares is described; it is used for the least squares calculations in the remainder of the thesis.

In many maximum likelihood problems, the log-likelihood can be written as a sum of functions called log-likelihood components, and these often depend on the unknown parameters only through one or two quantities called systematic parts. For such models, a class of algorithms called NRL algorithms approaches the maximum likelihood estimate through a sequence of least squares calculations. For many common models, the Newton-Raphson algorithm and Fisher's scoring technique are particular NRL algorithms. Implementation of NRL algorithms is described in detail and the relative merits of the various NRL algorithms are discussed. If the Newton-Raphson algorithm is in the class, it converges best near the maximum likelihood estimate, but other NRL algorithms may perform better in the first few iterations. Several examples are analysed to illustrate the possible methods.

When the maximum likelihood estimates of some parameters can be written as explicit functions of the rest, the convergence of the NR and NRL algorithms can often be improved by adjusting these parameters between iterations. The relationship of this technique to elimination of these parameters from the likelihood is investigated. In several types of model, including nonlinear least squares, the adjustment can be performed without slowing the NRL iterations.
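The two central ideas above — triangularising the least squares problem with Givens rotations, and reducing each scoring iteration to a weighted least squares solve — can be sketched together. The following is a minimal illustration, not the thesis's Fortran routines: the function names are invented, and the Poisson regression with a log link is one standard case in which Fisher's scoring technique is an NRL-type algorithm.

```python
import numpy as np

def givens_ls(A, b):
    """Solve min ||Ax - b|| by triangularising [A | b] with Givens rotations."""
    R = np.column_stack([A, b]).astype(float)
    m, n = A.shape
    for j in range(n):
        for i in range(j + 1, m):
            r = np.hypot(R[j, j], R[i, j])
            if r == 0.0:
                continue
            c, s = R[j, j] / r, R[i, j] / r
            # Rotate rows j and i to zero out R[i, j]
            R[[j, i]] = np.array([c * R[j] + s * R[i],
                                  -s * R[j] + c * R[i]])
    # Back-substitute against the upper-triangular factor
    return np.linalg.solve(R[:n, :n], R[:n, n])

def fisher_scoring_poisson(X, y, n_iter=100, tol=1e-10):
    """Fisher scoring for Poisson regression with a log link.
    Each iteration is one weighted least squares solve, done here by Givens QR."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta              # systematic part
        mu = np.exp(eta)            # fitted means
        w = np.sqrt(mu)             # root Fisher weights (Var(y) = mu)
        z = eta + (y - mu) / mu     # working response
        beta_new = givens_ls(w[:, None] * X, w * z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

Because the inner solve only ever sees a weighted design matrix and working response, swapping in a constrained Givens QR solver, as the thesis does, leaves the outer iteration unchanged.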
A related, more general method is also described for improving NRL iterations when some parameters enter the systematic parts linearly and others nonlinearly. Another general algorithm, the EM algorithm, is described; it can be applied to several types of model for which NRL algorithms cannot be used. In some models it too can be implemented as a sequence of least squares calculations, but in applications where both EM and NRL algorithms are available, the latter usually converge faster. Finally, two appendices describe and list Fortran subroutines that implement the algorithms in the thesis.
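To show what an EM iteration built on least squares calculations can look like, here is a minimal sketch for one textbook case, a two-component normal mixture with a common variance; this example and the function name are illustrative assumptions, not drawn from the thesis. The M-step updates are weighted means, i.e. trivial weighted least squares fits.

```python
import numpy as np

def em_two_normals(y, n_iter=300):
    """EM for a two-component normal mixture with a common variance.
    Each M-step reduces to weighted least squares (here, weighted means)."""
    mu1, mu2 = np.min(y), np.max(y)      # crude starting values
    pi, sigma2 = 0.5, np.var(y)
    for _ in range(n_iter):
        # E-step: posterior probability that each point comes from component 1
        d1 = pi * np.exp(-(y - mu1) ** 2 / (2.0 * sigma2))
        d2 = (1.0 - pi) * np.exp(-(y - mu2) ** 2 / (2.0 * sigma2))
        r = d1 / (d1 + d2)
        # M-step: weighted least squares fits of the two means
        mu1 = np.sum(r * y) / np.sum(r)
        mu2 = np.sum((1.0 - r) * y) / np.sum(1.0 - r)
        sigma2 = np.sum(r * (y - mu1) ** 2
                        + (1.0 - r) * (y - mu2) ** 2) / y.size
        pi = np.mean(r)
    return mu1, mu2, pi, sigma2
```

Each sweep increases the observed-data log-likelihood, but only linearly near the maximum, which is one way to see why NRL algorithms, when applicable, usually converge faster.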

##### Keywords

Least squares, Algorithms