DOI: 10.4324/9780415249126-Q101-1
Version: v1,  Published online: 1998
Retrieved March 17, 2018.

Article Summary

The discipline of statistics encompasses an extremely broad and heterogeneous set of problems and techniques. We deal here with the problems of statistical inference, which have to do with inferring from a body of sample data (for example, the observed results of tossing a coin or of drawing a number of balls at random from an urn containing balls of different colours) to some feature of the underlying distribution from which the sample is drawn (for example, the probability of heads when the coin is tossed, or the relative proportion of red balls in the urn).

There are two conflicting approaches to the foundations of statistical inference. The classical tradition derives from ideas of Ronald Fisher, Jerzy Neyman and Egon Pearson and embodies the standard treatments of hypothesis testing, confidence intervals and estimation found in many statistics textbooks. Classicists adopt a relative-frequency conception of probability and, except in special circumstances, eschew the assignment of probabilities to hypotheses, seeking instead a rationale for statistical inference in facts about the error characteristics of testing procedures. By contrast, the Bayesian tradition, so-called because of the central role it assigns to Bayes’ theorem, adopts a subjective or degree-of-belief conception of probability and represents the upshot of a statistical inference as a claim about how probable a statistical hypothesis is in the light of the evidence.
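To make the contrast concrete, the coin-tossing example from the summary can be worked through in a short sketch (not from the article itself; the candidate hypotheses, prior weights, and observed counts below are illustrative assumptions). A classical estimator would report the observed relative frequency of heads, while a Bayesian applies Bayes' theorem to turn prior degrees of belief over hypotheses into posterior probabilities given the evidence:

```python
from math import comb

def posterior(hypotheses, prior, heads, tosses):
    """Bayes' theorem over a discrete set of hypotheses about the
    coin's bias: P(h | data) is proportional to P(data | h) * P(h)."""
    # Binomial likelihood of the observed data under each hypothesis
    likelihood = [comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)
                  for p in hypotheses]
    unnorm = [lk * pr for lk, pr in zip(likelihood, prior)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Illustrative values (assumptions, not from the article):
hypotheses = [0.25, 0.5, 0.75]   # candidate probabilities of heads
prior = [1/3, 1/3, 1/3]          # uniform prior degrees of belief
post = posterior(hypotheses, prior, heads=7, tosses=10)

classical_estimate = 7 / 10      # classical point estimate: relative frequency
```

Note the difference in output: the classical estimate is a single number with no probability attached to any hypothesis, whereas the Bayesian result is a probability distribution over the hypotheses themselves, here favouring the 0.75 bias after seven heads in ten tosses.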

Citing this article:
Woodward, James. "Statistics." Routledge Encyclopedia of Philosophy, Taylor and Francis, 1998, doi:10.4324/9780415249126-Q101-1.
Copyright © 1998-2018 Routledge.
