"A New Look at the Statistical Model Identification"

A New Look at the Statistical Model Identification. IEEE Transactions on Automatic Control, Vol. 19, No. 6 (December 1974), pp. 716-723, doi:10.1109/tac.1974.1100705, by Hirotugu Akaike.
@article{akaike1974newlook,
    abstract = {The history of the development of statistical hypothesis testing in time series analysis is reviewed briefly and it is pointed out that the hypothesis testing procedure is not adequately defined as the procedure for statistical model identification. The classical maximum likelihood estimation procedure is reviewed and a new estimate minimum information theoretical criterion ({AIC}) estimate ({MAICE}) which is designed for the purpose of statistical identification is introduced. When there are several competing models the {MAICE} is defined by the model and the maximum likelihood estimates of the parameters which give the minimum of {AIC} defined by {AIC} = (-2)log-(maximum likelihood) + 2(number of independently adjusted parameters within the model). {MAICE} provides a versatile procedure for statistical model identification which is free from the ambiguities inherent in the application of conventional hypothesis testing procedure. The practical utility of {MAICE} in time series analysis is demonstrated with some numerical examples.},
    author = {Akaike, Hirotugu},
    doi = {10.1109/tac.1974.1100705},
    institution = {Institute of Statistical Mathematics, Minato-ku, Tokyo, Japan},
    issn = {0018-9286},
    journal = {IEEE Transactions on Automatic Control},
    keywords = {math, model-selection, statistics},
    month = dec,
    number = {6},
    pages = {716--723},
    publisher = {IEEE},
    title = {A New Look at the Statistical Model Identification},
    volume = {19},
    year = {1974}
}

See the CiteULike entry for more info, PDF links, BibTeX, etc.

In hypothesis testing, we usually know that neither the null hypothesis nor the alternative hypothesis can be fully true. Each is at best an approximation to, i.e. different from, reality. Yet the procedure of hypothesis testing asks which of the two is true given a sample, not which of the two is the better approximation. Strictly speaking, then, we are usually applying hypothesis testing to problems the theory was not designed for.

Akaike's information criterion is strongly linked to information theory and the maximum likelihood principle.
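The abstract defines AIC = (-2)log(maximum likelihood) + 2(number of independently adjusted parameters), and MAICE picks the model minimizing it. A minimal sketch of that selection rule, using a made-up polynomial-regression example with Gaussian errors (the data, model family, and helper functions here are illustrative assumptions, not from the paper):

```python
import math
import random

# Illustrative data: a noisy line, so degree 1 is the "true" model.
random.seed(0)
xs = [i / 20 for i in range(40)]
ys = [1.0 + 2.0 * x + random.gauss(0, 0.1) for x in xs]

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def aic(xs, ys, degree):
    """AIC = -2 log(maximum likelihood) + 2 * (number of adjusted parameters)."""
    coef = fit_poly(xs, ys, degree)
    resid = [y - sum(c * x ** i for i, c in enumerate(coef)) for x, y in zip(xs, ys)]
    m = len(xs)
    sigma2 = sum(r * r for r in resid) / m  # ML estimate of the noise variance
    log_l = -m / 2 * (math.log(2 * math.pi * sigma2) + 1)  # Gaussian log-likelihood at the MLE
    k = (degree + 1) + 1  # polynomial coefficients plus the variance parameter
    return -2 * log_l + 2 * k

# MAICE: among the competing models, take the one with minimum AIC.
scores = {d: aic(xs, ys, d) for d in range(4)}
best = min(scores, key=scores.get)
```

Note how the 2k penalty does the work hypothesis testing cannot: higher-degree fits always raise the likelihood, but the penalty makes the extra parameters pay for themselves, so no significance-level choice is needed.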