Economics Dictionary of Arguments
Preferences: Preferences are our relative likings for different things. They are shaped by our individual experiences, values, and goals. See also Actions, Action theory, Goals, Purposes, Experience, Values, Rationality.
_____________
Annotation: The above characterizations of concepts are neither definitions nor exhaustive presentations of the problems related to them. Instead, they are intended to give a short introduction to the contributions below. – Lexicon of Arguments.
Peter Norvig on Preferences - Dictionary of Arguments
Norvig I 613
Preferences/utility/decisions/decision theory/AI research/Norvig/Russell: (…) the axioms of utility theory are really axioms about preferences - they say nothing about a utility function. >Utility theory/Norvig.
But in fact from the axioms of utility we can derive the following consequences (for the proof, see von Neumann and Morgenstern, 1944)(1):
Existence of Utility Function: If an agent’s preferences obey the axioms of utility, then there exists a function U such that U(A) > U(B) if and only if A is preferred to B, and U(A) = U(B) if and only if the agent is indifferent between A and B.
Expected Utility of a Lottery: The utility of a lottery is the sum of the probability of each outcome times the utility of that outcome.
Norvig I 614
Maximize the expected utility: In other words, once the probabilities and utilities of the possible outcome states are specified, the utility of a compound lottery involving those states is completely determined. Because the outcome of a nondeterministic action is a lottery, it follows that an agent can act rationally - that is, consistently with its preferences - only by choosing an action that maximizes expected utility (…).
Value function: As in game-playing, in a deterministic environment an agent just needs a preference ranking on states - the numbers don’t matter. This is called a value function or ordinal utility function.
Rationality: the existence of a utility function that describes an agent’s preference behavior does not necessarily mean that the agent is explicitly maximizing that utility function in its own deliberations. (…) rational behavior can be generated in any number of ways. By observing a rational agent’s preferences, however, an observer can construct the utility function that represents what the agent is actually trying to achieve (even if the agent doesn’t know it). >Utility/AI Research.
Norvig I 615
[A preference] might be unusual, but we can’t call it irrational.
Norvig I 619
Irrationality/Norvig/Russell: Decision theory is a normative theory: it describes how a rational agent should act. A descriptive theory, on the other hand, describes how actual agents (…) really do act. The application of economic theory would be greatly enhanced if the two coincided, but there appears to be some experimental evidence to the contrary. The evidence suggests that humans are “predictably irrational” (Ariely, 2009)(2). >Rationality/AI research, >Certainty effect/Kahneman/Tversky, >Ambiguity/Kahneman/Tversky, >Utility/AI research.
Norvig I 637
The derivation of numerical utilities from preferences was first carried out by Ramsey (1931)(3); the axioms for preference in the present text are closer in form to those rediscovered in Theory of Games and Economic Behavior (von Neumann and Morgenstern, 1944)(4). A good presentation of these axioms, in the course of a discussion on risk preference, is given by Howard (1977)(5). Ramsey had derived subjective probabilities (not just utilities) from an agent’s preferences; Savage (1954)(6) and Jeffrey (1983)(7) carry out more recent constructions of this kind. Von Winterfeldt and Edwards (1986)(8) provide a modern perspective on decision analysis and its relationship to human preference structures. The micromort utility measure is discussed by Howard (1989)(9).
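The expected-utility and maximum-expected-utility principles quoted above (Norvig I 613-614) say that the utility of a lottery [p1, S1; …; pn, Sn] is the probability-weighted sum of the utilities of its outcomes, U = p1·U(S1) + … + pn·U(Sn), and that a rational agent chooses the action whose resulting lottery has the highest such value. The following Python sketch only illustrates this calculation; it is not code from Russell and Norvig, and the outcomes, probabilities, and utility numbers are invented for the example.

```python
def expected_utility(lottery, utility):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

# Hypothetical utilities over outcome states (illustrative values only).
utility = {"win_big": 10.0, "win_small": 4.0, "lose": 0.0}

# Each nondeterministic action induces a lottery over outcome states.
actions = {
    "safe_bet":  {"win_small": 0.9, "lose": 0.1},
    "risky_bet": {"win_big": 0.4, "lose": 0.6},
}

# Principle of maximum expected utility: pick the action whose lottery
# has the highest expected utility.
best_action = max(actions, key=lambda a: expected_utility(actions[a], utility))

for name, lottery in actions.items():
    print(name, expected_utility(lottery, utility))   # safe_bet 3.6, risky_bet 4.0
print("MEU choice:", best_action)                     # risky_bet
```

As the quoted remark on value functions notes, in a deterministic environment only the ordering of these numbers would matter: any order-preserving rescaling of the utilities leaves the chosen action unchanged.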
1. von Neumann, J. and Morgenstern, O. (1944). Theory of Games and Economic Behavior (first edition). Princeton University Press.
2. Ariely, D. (2009). Predictably Irrational (revised edition). Harper.
3. Ramsey, F. P. (1931). Truth and probability. In Braithwaite, R. B. (Ed.), The Foundations of Mathematics and Other Logical Essays. Harcourt Brace Jovanovich.
4. von Neumann, J. and Morgenstern, O. (1944). Theory of Games and Economic Behavior (first edition). Princeton University Press.
5. Howard, R. A. (1977). Risk preference. In Howard, R. A. and Matheson, J. E. (Eds.), Readings in Decision Analysis, pp. 429-465. Decision Analysis Group, SRI International.
6. Savage, L. J. (1954). The Foundations of Statistics. Wiley.
7. Jeffrey, R. C. (1983). The Logic of Decision (2nd edition). University of Chicago Press.
8. von Winterfeldt, D. and Edwards, W. (1986). Decision Analysis and Behavioral Research. Cambridge University Press.
9. Howard, R. A. (1989). Microrisks for medical decision analysis. Int. J. Technology Assessment in Health Care, 5, 357-370.
_____________
Explanation of symbols: Roman numerals indicate the source, Arabic numerals indicate the page number. The corresponding books are listed on the right-hand side. ((s)…): Comment by the sender of the contribution.
The notes [Concept/Author], [Author1]Vs[Author2] or [Author]Vs[term], as well as "problem:"/"solution:", "old:"/"new:" and "thesis:", are additions from the Dictionary of Arguments. If a German edition is specified, the page numbers refer to that edition.
Norvig I: Peter Norvig and Stuart J. Russell, Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ 2010.