DOI: 10.4324/9780415249126-Y026-1
Version: v1,  Published online: 1998

Article Summary

Numbers are, in general, mathematical entities whose function is to express the size, order or magnitude of something or other. Historically, starting from the most basic kind of number, the positive integers (1, 2, 3,…), which appear in the earliest written records, the notion of number has been generalized and extended in several different directions – often in the face of considerable opposition.

Other than the positive integers, the most venerable are the rational numbers (fractions), which were known to the Egyptians and Mesopotamians. The discovery, by Pythagorean mathematicians, that there are lengths that cannot be expressed as fractions occasioned the introduction of irrational numbers, such as the square root of 2, though the Greeks managed only a geometric understanding of these. The number zero was recognized, first in Indian mathematics, by the seventh century; the use of negative numbers evolved after this time; and complex numbers, such as the square root of −1, appeared first at the end of the Middle Ages. Infinitesimal numbers were developed by the founders of the calculus, Newton and Leibniz, in the seventeenth century (and were later to disappear from mathematics – for a time); and infinite numbers (ordinals and cardinals) were introduced by the founder of modern set theory, Cantor, in the nineteenth century.
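The Pythagorean discovery that the diagonal of a unit square has no fractional measure can be made precise by the standard reductio argument; the following is a sketch (not part of the original article), using the usual lowest-terms assumption:

```latex
% Sketch: irrationality of \sqrt{2}.
% Suppose, for contradiction, that \sqrt{2} = p/q with p, q positive
% integers and the fraction p/q in lowest terms.
\sqrt{2} = \frac{p}{q} \;\implies\; p^2 = 2q^2.
% Then p^2 is even, hence p is even; write p = 2r.
p = 2r \;\implies\; 4r^2 = 2q^2 \;\implies\; q^2 = 2r^2.
% Then q^2 is even, hence q is even too. But then p and q share the
% factor 2, contradicting the lowest-terms assumption. So no such
% fraction exists, and \sqrt{2} is irrational.
```

The same descent argument generalizes to the square root of any non-square integer.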

The introductions of three of these kinds of number, in particular, occasioned crises in the foundations of mathematics. The first (concerning irrational numbers) was finally resolved in the nineteenth century by the work of Cauchy and Weierstrass. The second (concerning infinitesimals) was also resolved then, by the work of Weierstrass and Dedekind. The third (concerning infinite numbers), which involves paradoxes such as Russell’s, still awaits a convincing solution.

It is seemingly impossible to give a rigorous definition of what it is to be a number. The closest one can get is a family-resemblance notion, with very ill-defined boundaries.

Citing this article:
Priest, Graham. "Numbers." Routledge Encyclopedia of Philosophy, Taylor and Francis, 1998, doi:10.4324/9780415249126-Y026-1.
Copyright © 1998-2024 Routledge.
