Philosophy Dictionary of Arguments

Probability theory: Probability theory is the branch of mathematics that deals with the analysis of random phenomena. It is used to model the uncertainty of random events. The probability of any event lies between 0 and 1, and the probabilities of all outcomes in a sample space sum to 1. Probability theory is used in mathematics, statistics, physics, and engineering. See also Probability, Probability distribution, Probability functions, Predictions, Method, Knowledge.
_____________
Annotation: The above characterizations of concepts are neither definitions nor exhaustive presentations of the problems related to them. Instead, they are intended to give a short introduction to the contributions below. – Lexicon of Arguments.

 

Peter Norvig on Probability Theory - Dictionary of Arguments

Norvig I 503
Probability theory/Norvig/Russell: Probability theory was invented as a way of analyzing games of chance. In about 850 A.D. the Indian mathematician Mahaviracarya described how to arrange a set of bets that can’t lose (what we now call a Dutch book). In Europe, the first significant systematic analyses were produced by Girolamo Cardano around 1565, although publication was posthumous (1663). By that time, probability had been established as a mathematical discipline due to a series of
Norvig I 504
results established in a famous correspondence between Blaise Pascal and Pierre de Fermat in 1654. As with probability itself, the results were initially motivated by gambling problems (…).
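The Dutch-book idea mentioned above can be made concrete with a small numerical sketch (the prices below are illustrative assumptions, not taken from the source): if an agent's degrees of belief in an event and its complement sum to more than 1, a bookmaker who sells the agent both bets profits no matter what happens.

```python
# A Dutch book, sketched numerically (illustrative prices, not from the source).
# An agent whose degrees of belief in an event A and its complement sum to
# more than 1 will pay those beliefs as prices for bets paying 1 unit.
# Selling the agent both bets then guarantees the bookmaker a profit.

belief_A, belief_not_A = 0.6, 0.6           # incoherent: 0.6 + 0.6 > 1

stake = 1.0                                  # each bet pays 1 if its event occurs
income = stake * (belief_A + belief_not_A)   # total price the agent pays: 1.2

for A_occurs in (True, False):
    payout = stake                           # exactly one of A, not-A occurs
    profit = income - payout
    print(f"A occurs: {A_occurs}, bookmaker profit: {profit:.2f}")
```

The same construction works in reverse (buying both bets) when the beliefs sum to less than 1; coherence, i.e. obeying the probability axioms, is exactly what rules such books out.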
The first published textbook on probability was De Ratiociniis in Ludo Aleae (Huygens, 1657)(1). The “laziness and ignorance” view of uncertainty was described by John Arbuthnot in the preface of his translation of Huygens (Arbuthnot, 1692)(2): “It is impossible for a Die, with such determin’d force and direction, not to fall on such determin’d side, only I don’t know the force and direction which makes it fall on such determin’d side, and therefore I call it Chance, which is nothing but the want of art...”
Laplace (1816)(3) gave an exceptionally accurate and modern overview of probability; he was the first to use the example “take two urns, A and B, the first containing four white and two black balls, . . . ” The Rev. Thomas Bayes (1702–1761) introduced the rule for reasoning about conditional probabilities that was named after him (Bayes, 1763)(4). Bayes only considered the case of uniform priors; it was Laplace who independently developed the general case.
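In modern notation, Bayes' rule computes P(hypothesis | evidence) from the reverse conditional and the prior. The sketch below applies it to Laplace's urn setup; note that the quoted passage gives only urn A's contents (four white, two black), so urn B's composition here is a purely hypothetical assumption made for illustration.

```python
# Bayes' rule applied to Laplace's two-urn example. Urn A's contents
# (4 white, 2 black) come from the quoted passage; urn B's contents
# are NOT given there, so 2 white / 4 black is assumed for illustration.

priors = {"A": 0.5, "B": 0.5}            # an urn is chosen at random
p_white = {"A": 4 / 6, "B": 2 / 6}       # P(white ball drawn | urn)

# Bayes' rule: P(urn | white) = P(white | urn) * P(urn) / P(white)
p_evidence = sum(p_white[u] * priors[u] for u in priors)
posterior = {u: p_white[u] * priors[u] / p_evidence for u in priors}

print(posterior)   # drawing a white ball makes urn A twice as probable as B
```

With these assumed contents the posterior for urn A is 2/3: the white draw is twice as likely under A, so the uniform prior shifts by exactly that likelihood ratio.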
Kolmogorov (1950(5), first published in German in 1933) presented probability theory in a
rigorously axiomatic framework for the first time. Rényi (1970)(6) later gave an axiomatic presentation that took conditional probability, rather than absolute probability, as primitive.
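In modern notation, Kolmogorov's axioms can be sketched as follows (with Ω the sample space; this standard formulation is a gloss, not a quotation from the source):

```latex
0 \le P(a) \le 1, \qquad P(\Omega) = 1, \qquad
P(a \lor b) = P(a) + P(b) \quad \text{for mutually exclusive } a, b.
```

Rényi's axiomatization instead takes the two-place conditional probability P(a | b) as the primitive notion, recovering the unconditional case as P(a | Ω).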
Objectivism: Pascal used probability in ways that required both the objective interpretation, as a property
of the world based on symmetry or relative frequency, and the subjective interpretation, based on degree of belief—the former in his analyses of probabilities in games of chance, the latter in the famous “Pascal’s wager” argument about the possible existence of God. However, Pascal did not clearly realize the distinction between these two interpretations. The distinction was first drawn clearly by James Bernoulli (1654–1705).
Subjectivism: Leibniz introduced the “classical” notion of probability as a proportion of enumerated, equally probable cases, which was also used by Bernoulli, although it was brought to prominence by Laplace (1749–1827). This notion is ambiguous between the frequency interpretation and the subjective interpretation. The cases can be thought to be equally probable either because of a natural, physical symmetry between them, or simply because we do not have any knowledge that would lead us to consider one more probable than another.
Principle of indifference: The use of this latter, subjective consideration to justify assigning equal probabilities is known as the principle of indifference. The principle is often attributed to Laplace, but he never isolated the principle explicitly.
Principle of insufficient reason: George Boole and John Venn both referred to [the principle of indifference] as the principle of insufficient reason; the modern name is due to Keynes (1921)(7).
Objectivism/Subjectivism: The debate between objectivists and subjectivists became sharper in the 20th century. Kolmogorov (1963)(8), R. A. Fisher (1922)(9), and Richard von Mises (1928)(10) were advocates of the relative frequency interpretation.
Propensity: Karl Popper’s (1959(11), first published in German in 1934) “propensity” interpretation traces relative frequencies to an underlying physical symmetry.
Belief degree: Frank Ramsey (1931)(12), Bruno de Finetti (1937)(13), R. T. Cox (1946)(14), Leonard Savage (1954)(15), Richard Jeffrey (1983)(16), and E. T. Jaynes (2003)(17) interpreted probabilities as the degrees of belief of specific individuals. Their analyses of degree of belief were closely tied to utilities and to behavior - specifically, to the willingness to place bets.
Subjectivism: Rudolf Carnap, following Leibniz and Laplace, offered a different kind of subjective interpretation of probability - not as any actual individual’s degree of belief, but as the degree of belief that an idealized individual should have in a particular proposition a, given a particular body of evidence e.
Norvig I 505
Confirmation degree: Carnap attempted to go further than Leibniz or Laplace by making this notion of degree of confirmation mathematically precise, as a logical relation between a and e.
Induction/inductive Logic: The study of this relation was intended to constitute a mathematical discipline called inductive logic, analogous to ordinary deductive logic (Carnap, 1948(18), 1950(19)). Carnap was not able to extend his inductive logic much beyond the propositional case, and Putnam (1963)(20) showed by adversarial arguments that some fundamental difficulties would prevent a strict extension to languages capable of expressing arithmetic.
Uncertainty: Cox’s theorem (1946)(14) shows that any system for uncertain reasoning that meets his set of assumptions is equivalent to probability theory. This gave renewed confidence to those who already favored probability, but others were not convinced, pointing to the assumptions (primarily that belief must be represented by a single number, and thus the belief in ¬p must be a function of the belief in p). Halpern (1999)(21) describes the assumptions and shows some gaps in Cox’s original formulation. Horn (2003)(22) shows how to patch up the difficulties. Jaynes (2003)(17) has a similar argument that is easier to read. The question of reference classes is closely tied to the attempt to find an inductive logic.
Reference class problem: The approach of choosing the “most specific” reference class of sufficient size was formally proposed by Reichenbach (1949)(23). Various attempts have been made, notably by Henry Kyburg (1977(24), 1983(25)), to formulate more sophisticated policies in order to avoid some obvious fallacies that arise with Reichenbach’s rule, but such approaches remain somewhat ad hoc. More recent work by Bacchus, Grove, Halpern, and Koller (1992)(26) extends Carnap’s methods to first-order theories, thereby avoiding many of the difficulties associated with the straightforward reference-class method. Kyburg and Teng (2006)(27) contrast probabilistic inference with nonmonotonic logic. >Uncertainty/AI research.


1. Huygens, C. (1657). De ratiociniis in ludo aleae. In van Schooten, F. (Ed.), Exercitionum Mathematicorum. Elsevirii, Amsterdam. Translated into English by John Arbuthnot (1692).
2. Arbuthnot, J. (1692). Of the Laws of Chance. Motte, London. Translation into English, with additions, of Huygens (1657).
3. Laplace, P. (1816). Essai philosophique sur les probabilités (3rd edition). Courcier Imprimeur, Paris.
4. Bayes, T. (1763). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 53, 370–418.
5. Kolmogorov, A. N. (1950). Foundations of the Theory of Probability. Chelsea.
6. Rényi, A. (1970). Probability Theory. Elsevier/North-Holland.
7. Keynes, J. M. (1921). A Treatise on Probability. Macmillan.
8. Kolmogorov, A. N. (1963). On tables of random numbers. Sankhya, the Indian Journal of Statistics, Series A 25.
9. Fisher, R. A. (1922). On the mathematical foundations of theoretical statistics. Philosophical Transactions of the Royal Society of London, Series A 222, 309–368.
10. von Mises, R. (1928). Wahrscheinlichkeit, Statistik und Wahrheit. J. Springer.
11. Popper, K. R. (1959). The Logic of Scientific Discovery. Basic Books.
12. Ramsey, F. P. (1931). Truth and probability. In Braithwaite, R. B. (Ed.), The Foundations of Mathematics and Other Logical Essays. Harcourt Brace Jovanovich.
13. de Finetti, B. (1937). La prévision: ses lois logiques, ses sources subjectives. Annales de l'Institut Henri Poincaré, 7, 1–68.
14. Cox, R. T. (1946). Probability, frequency, and reasonable expectation. American Journal of Physics, 14(1), 1–13.
15. Savage, L. J. (1954). The Foundations of Statistics. Wiley.
16. Jeffrey, R. C. (1983). The Logic of Decision (2nd edition). University of Chicago Press.
17. Jaynes, E. T. (2003). Probability Theory: The Logic of Science. Cambridge Univ. Press.
18. Carnap, R. (1948). On the application of inductive logic. Philosophy and Phenomenological Research, 8, 133-148.
19. Carnap, R. (1950). Logical Foundations of Probability. University of Chicago Press.
20. Putnam, H. (1963). ‘Degree of confirmation’ and inductive logic. In Schilpp, P. A. (Ed.), The Philosophy of Rudolf Carnap, pp. 270–292. Open Court.
21. Halpern, J. Y. (1999). Technical addendum, Cox’s theorem revisited. JAIR, 11, 429–435.
22. Horn, K. V. (2003). Constructing a logic of plausible inference: A guide to Cox's theorem. IJAR, 34, 3–24.
23. Reichenbach, H. (1949). The Theory of Probability: An Inquiry into the Logical and Mathematical Foundations of the Calculus of Probability (second edition). University of California Press.
24. Kyburg, H. E. (1977). Randomness and the right reference class. J. Philosophy, 74(9), 501-521.
25. Kyburg, H. E. (1983). The reference class. Philosophy of Science, 50, 374–397.
26. Bacchus, F., Grove, A., Halpern, J. Y., and Koller, D. (1992). From statistics to beliefs. In AAAI-92, pp. 602–608.
27. Kyburg, H. E. and Teng, C.-M. (2006). Nonmonotonic logic and statistical inference. Computational Intelligence, 22(1), 26–51.


_____________
Explanation of symbols: Roman numerals indicate the source, Arabic numerals indicate the page number. The corresponding books are indicated on the right-hand side. ((s)…): Comment by the sender of the contribution. Translations: Dictionary of Arguments
The notes [Concept/Author], [Author1]Vs[Author2], or [Author]Vs[term], as well as "problem:"/"solution:", "old:"/"new:", and "thesis:", are additions by the Dictionary of Arguments. If a German edition is specified, the page numbers refer to that edition.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ 2010


Ed. Martin Schulz, access date 2024-04-25