Philosophy Dictionary of Arguments


 
Multi-attribute utility: Multi-attribute utility (MAU) is a framework for making decisions when multiple attributes have to be considered at once. It is based on the idea that a utility value can be assigned to each attribute and that these values can then be weighted and combined to determine the best decision.
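As an illustration, a minimal sketch of the simplest (additive) form of such a weighting; the attribute names, weights, and per-attribute scores below are hypothetical:

```python
# Minimal sketch of an additive multi-attribute utility function.
# Attribute names, weights, and scores are invented for illustration.
weights = {"cost": 0.5, "noise": 0.3, "safety": 0.2}  # importance weights, summing to 1

def additive_utility(outcome, weights):
    """Combine per-attribute utilities (each scaled to [0, 1]) into a single score."""
    return sum(weights[attr] * u for attr, u in outcome.items())

# Two candidate decisions, described by their per-attribute utilities.
site_a = {"cost": 0.7, "noise": 0.4, "safety": 0.9}
site_b = {"cost": 0.5, "noise": 0.8, "safety": 0.7}

best = max((site_a, site_b), key=lambda o: additive_utility(o, weights))  # site_a here
```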
_____________
Annotation: The above characterizations of concepts are neither definitions nor exhaustive presentations of problems related to them. Instead, they are intended to give a short introduction to the contributions below. – Lexicon of Arguments.

 

AI Research on Multi-attribute Utility - Dictionary of Arguments

Norvig I 622
Multi-attribute Utility/AI research/Norvig/Russell: Decision making in the field of public policy involves high stakes, in both money and lives. For example (…) [s]iting a new airport requires consideration
of the disruption caused by construction; the cost of land; the distance from centers of population; the noise of flight operations; safety issues arising from local topography and weather conditions; and so on. Problems like these, in which outcomes are characterized by two or more attributes, are handled by multi-attribute utility theory.
Norvig I 624
Preferences: Suppose we have n attributes, each of which has d distinct possible values. To specify the complete utility function U(x1, . . . , xn), we need d^n values in the worst case. Now, the worst case corresponds to a situation in which the agent's preferences have no regularity at all. Multi-attribute utility theory is based on the supposition that the preferences of typical agents have much more structure than that. The basic regularity that arises in deterministic preference structures is called preference independence. Two attributes X1 and X2 are preferentially independent of a third attribute X3 if the preference between outcomes (x1, x2, x3) and (x'1, x'2, x3) does not depend on the particular value x3 for attribute X3. E.g. one may propose that Noise and Cost are preferentially independent of
Norvig I 625
Deaths. We say that the set of attributes {Noise, Cost, Deaths} exhibits mutual preferential independence (MPI). MPI says that, whereas each attribute may be important, it does not affect the way in which one trades off the other attributes against each other.
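As an illustration of preference independence, a small sketch that checks the property against a hypothetical additive value function over Noise, Cost and Deaths; the attribute values and coefficients are invented:

```python
from itertools import product

# Hypothetical value function over Noise, Cost, Deaths; coefficients are invented.
def value(noise, cost, deaths):
    return -(2 * noise + cost + 10 * deaths)

def pref_independent(value, X1, X2, X3):
    """True if the ordering of (x1, x2) pairs never depends on the shared value x3."""
    sign = lambda d: (d > 0) - (d < 0)
    for (a1, a2), (b1, b2) in product(product(X1, X2), repeat=2):
        orderings = {sign(value(a1, a2, x3) - value(b1, b2, x3)) for x3 in X3}
        if len(orderings) > 1:  # the preference flipped as x3 changed
            return False
    return True

# For an additive value function this holds; n attributes with d values each
# would need d**n table entries if no such structure were exploited.
print(pref_independent(value, X1=[0, 1, 2], X2=[0, 5, 10], X3=[0, 1]))  # True
```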
Uncertainty (see Keeney and Raiffa (1976)(1)): When uncertainty is present in the domain, we also need to consider the structure of preferences between lotteries and to understand the resulting properties of utility functions, rather than just value functions.
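A minimal sketch of how lotteries are compared by expected utility, with invented probabilities and outcome utilities:

```python
# A lottery as (probability, utility-of-outcome) pairs; numbers are invented.
lottery_a = [(0.8, 0.9), (0.2, 0.1)]  # risky: usually good, sometimes bad
lottery_b = [(1.0, 0.7)]              # a sure, middling outcome

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

preferred = max((lottery_a, lottery_b), key=expected_utility)  # lottery_a (0.74 vs 0.70)
```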
Norvig I 626
The basic notion of utility independence extends preference independence to cover lotteries: a set of attributes X is utility independent of a set of attributes Y if preferences between lotteries on the attributes in X are independent of the particular values of the attributes in Y. A set of attributes is mutually utility independent (MUI) if each of its subsets is utility independent of the remaining attributes. Again, it seems reasonable to propose that the airport attributes are MUI. MUI implies that the agent's behavior can be described using a multiplicative utility function (Keeney, 1974)(2). >Decision Networks/Norvig, >Information value/Norvig.
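A sketch of what such a multiplicative utility function looks like for three attributes; the single-attribute utilities and the constants k1, k2, k3 are invented here and would normally be elicited from the decision maker:

```python
# Sketch of a three-attribute multiplicative utility function built from
# single-attribute utilities u1..u3 (scaled to [0, 1]) and constants k1..k3.
def multiplicative_utility(u1, u2, u3, k1, k2, k3):
    return (k1 * u1 + k2 * u2 + k3 * u3
            + k1 * k2 * u1 * u2 + k2 * k3 * u2 * u3 + k3 * k1 * u3 * u1
            + k1 * k2 * k3 * u1 * u2 * u3)

# Example call with invented numbers for noise, cost, and safety utilities.
print(multiplicative_utility(0.6, 0.8, 0.9, k1=0.3, k2=0.4, k3=0.2))
```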
Norvig I 638
Keeney and Raiffa (1976)(1) give a thorough introduction to multi-attribute utility theory. They describe early computer implementations of methods for eliciting the necessary parameters for a multi-attribute utility function and include extensive accounts of real applications of the theory. In AI, the principal reference for MAUT is Wellman’s (1985)(3) paper, which includes a system called URP (Utility Reasoning Package) that can use a collection of statements about preference independence and conditional independence to analyze the structure of decision problems.

1. Keeney, R. L. and Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley.
2. Keeney, R. L. (1974). Multiplicative utility functions. Operations Research, 22, 22–34.
3. Wellman, M. P. (1985). Reasoning about preference models. Technical report MIT/LCS/TR-340, Laboratory for Computer Science, MIT.

_____________
Explanation of symbols: Roman numerals indicate the source, Arabic numerals indicate the page number. The corresponding books are indicated on the right-hand side. ((s)…): Comment by the sender of the contribution. Translations: Dictionary of Arguments
The notes [Concept/Author], [Author1]Vs[Author2] or [Author]Vs[term], as well as "problem:"/"solution:", "old:"/"new:" and "thesis:", are additions from the Dictionary of Arguments. If a German edition is specified, the page numbers refer to that edition.
AI Research
Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach, Upper Saddle River, NJ 2010




Ed. Martin Schulz, access date 2024-04-26