AI Research on Game Theory - Dictionary of Arguments
Norvig I 666
Game theory/AI research/Norvig/Russell: problem: uncertainty that is due not only to the environment but also to other agents and the decisions they make.
Information: basic concepts here are perfect and imperfect information.
Agent design: Game theory can analyze the agent’s decisions and compute the expected utility for each decision (under the assumption that other agents are acting optimally according to game theory).
Norvig I 667
Mechanism design: When an environment is inhabited by many agents, it might be possible to define the rules of the environment (i.e., the game that the agents must play) so that the collective good of all agents is maximized when each agent adopts the game-theoretic solution that maximizes its own utility.
Single-move games: A single-move game is defined by three components:
Players or agents who will be making decisions.
Actions that the players can choose.
A payoff function that gives the utility to each player for each combination of actions by all the players. For single-move games the payoff function can be represented by a matrix, a representation known as the strategic form (also called normal form).
Strategy: Each player in a game must adopt and then execute a strategy (which is the name used in game theory for a policy). A pure strategy is a deterministic policy; for a single-move game, a pure strategy is just a single action. For many games an agent can do better with a mixed strategy, which is a randomized policy that selects actions according to a probability distribution.
Strategy profile: A strategy profile is an assignment of a strategy to each player; given the strategy profile, the game's outcome is a numeric value for each player.
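The strategic-form representation and the profile-to-outcome mapping can be sketched in a few lines of Python. The action names and payoff numbers below are illustrative placeholders (a standard prisoner's-dilemma-style matrix), not values from the text:

```python
# A two-player single-move game in strategic (normal) form:
# a table mapping each strategy profile (one action per player)
# to a payoff for every player. Actions and numbers are
# illustrative placeholders, not taken from the text.
payoff = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-10, 0),
    ("defect",    "cooperate"): (0, -10),
    ("defect",    "defect"):    (-5, -5),
}

def outcome(profile):
    """Given a strategy profile, return the numeric outcome per player."""
    return payoff[profile]

print(outcome(("defect", "defect")))  # (-5, -5)
```

Here each key is a pure-strategy profile; a mixed strategy would instead assign a probability distribution over each player's actions.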
Norvig I 668
Solution: A solution to a game is a strategy profile in which each player adopts a rational strategy. It is important to realize that outcomes are actual results of playing a game, while solutions are theoretical constructs used to analyze a game.
Rationality: (…) some games have a solution only in mixed strategies. But that does not mean that a player must literally be adopting a mixed strategy to be rational. Cf. >Prisoner’s dilemma.
Dominant strategy: We say that a strategy s for player p strongly dominates strategy s' if the outcome for s is better for p than the outcome for s', for every choice of strategies by the other player(s). Strategy s weakly dominates s' if s is better than s' on at least one strategy profile and no worse on any other. A dominant strategy is a strategy that dominates all others. It is irrational to play a dominated strategy.
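These dominance definitions translate directly into code. The payoff matrix below is a hypothetical example for the row player (payoffs[s][t] is her utility when she plays s against opponent action t):

```python
# Dominance checks for the row player in a two-player matrix game.
# The strategy names and payoff values are illustrative placeholders.
payoffs = {
    "s1": {"t1": 3, "t2": 2},
    "s2": {"t1": 1, "t2": 0},
}

def strongly_dominates(s, s2, payoffs):
    """s strongly dominates s2: strictly better for EVERY opponent choice."""
    return all(payoffs[s][t] > payoffs[s2][t] for t in payoffs[s])

def weakly_dominates(s, s2, payoffs):
    """s weakly dominates s2: never worse, and strictly better at least once."""
    ts = payoffs[s]
    return (all(payoffs[s][t] >= payoffs[s2][t] for t in ts)
            and any(payoffs[s][t] > payoffs[s2][t] for t in ts))

print(strongly_dominates("s1", "s2", payoffs))  # True
```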
Pareto optimality: We say that an outcome is Pareto optimal if there is no other outcome that all players would prefer. An outcome is Pareto dominated by another outcome if all players would prefer the other outcome.
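The Pareto definitions can be checked mechanically over a list of outcomes; the outcome vectors below are illustrative placeholders:

```python
# Per the definitions above: an outcome is Pareto dominated if some
# other outcome is strictly preferred by ALL players; it is Pareto
# optimal if no such outcome exists. Outcome values are illustrative.
def pareto_dominated(o, outcomes):
    return any(all(b > a for a, b in zip(o, alt))
               for alt in outcomes if alt != o)

def pareto_optimal(o, outcomes):
    return not pareto_dominated(o, outcomes)

outcomes = [(-1, -1), (-10, 0), (0, -10), (-5, -5)]
print(pareto_optimal((-5, -5), outcomes))  # False: both prefer (-1, -1)
print(pareto_optimal((-1, -1), outcomes))  # True
```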
Equilibrium: When each player has a dominant strategy, the combination of those strategies is called a dominant strategy equilibrium. In general, a strategy profile forms an equilibrium if no player can benefit by switching strategies, given that every other player sticks with the same
Norvig I 669
strategy. An equilibrium is essentially a local optimum in the space of policies; it is the top of a peak that slopes downward along every dimension, where a dimension corresponds to a player’s strategy choices.
Nash equilibrium: The mathematician John Nash (1928-2015) proved that every game has at least one equilibrium. Clearly, a dominant strategy equilibrium is a Nash equilibrium (…) but some games have Nash equilibria but no dominant strategies.
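Pure-strategy Nash equilibria can be found by brute force: a profile is an equilibrium if no player gains by a unilateral switch. The game below is a hypothetical coordination-style game with two equilibria and no dominant strategy; all names and numbers are illustrative:

```python
from itertools import product

# Brute-force search for pure-strategy Nash equilibria in a
# two-player game. payoffs[(a1, a2)] = (u1, u2). Illustrative values.
payoffs = {
    ("a", "x"): (2, 2), ("a", "y"): (0, 0),
    ("b", "x"): (0, 0), ("b", "y"): (1, 1),
}
acts1 = ["a", "b"]
acts2 = ["x", "y"]

def is_nash(p1, p2):
    """True if no player benefits by unilaterally switching strategies."""
    u1, u2 = payoffs[(p1, p2)]
    if any(payoffs[(alt, p2)][0] > u1 for alt in acts1):
        return False
    if any(payoffs[(p1, alt)][1] > u2 for alt in acts2):
        return False
    return True

equilibria = [p for p in product(acts1, acts2) if is_nash(*p)]
print(equilibria)  # [('a', 'x'), ('b', 'y')]
```

Neither player has a dominant strategy here, yet two Nash equilibria exist, matching the point above.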
Norvig I 670
Coordination games: games in which players need to communicate.
Maximin technique/von Neumann: first E picks her strategy and reveals it to O. Then O picks his strategy, with knowledge of E’s strategy. Finally, we evaluate the expected payoff of the game based on the chosen strategies. Let’s suppose this gives an outcome UE,O. Clearly, this game favors O, so the true utility U of the original game (from E’s point of view) is at least UE,O. For example, if we just look at pure strategies, the minimax game tree has a root value of −3 (…) so we know that U ≥ −3. Now suppose we change the rules to force O to reveal his strategy first, followed by E. Then the minimax value of this game is UO,E, and because this game favors E we know that U is at most UO,E. With pure strategies, the value is +2 (…) so we know U ≤ +2.
Combining these two arguments, we see that the true utility U of the solution to the original game must satisfy UE,O ≤ U ≤ UO,E, or in this case, −3 ≤ U ≤ 2.
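These pure-strategy bounds can be reproduced in a few lines of Python. The payoff table is an assumption: it is the standard two-finger Morra matrix (E wins the sum of fingers shown if the total is even, O wins it if odd), which yields exactly the −3 and +2 values cited:

```python
# Pure-strategy maximin/minimax bounds. U_E[(e, o)] is E's payoff
# when E shows e fingers and O shows o. The matrix is an assumed
# two-finger Morra payoff table, not copied from the text.
U_E = {(1, 1): 2, (1, 2): -3, (2, 1): -3, (2, 2): 4}

# E reveals first: O then minimizes, so E picks the best worst case.
maximin = max(min(U_E[(e, o)] for o in (1, 2)) for e in (1, 2))

# O reveals first: E then maximizes, so O picks the smallest best case.
minimax = min(max(U_E[(e, o)] for e in (1, 2)) for o in (1, 2))

print(maximin, minimax)  # -3 2, so -3 <= U <= 2
```

The true value U of the game lies between these two numbers; finding it exactly requires mixed strategies.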
Norvig I 679
Inverse game theory: design[ing] a game whose solutions, consisting of each agent pursuing its own rational strategy, result in the maximization of some global utility function. This problem is called mechanism design, or sometimes inverse game theory. Mechanism design is a staple of economics and political science. Capitalism 101 says that if everyone tries to get rich, the total wealth of society will increase. But (…) examples (…) show that proper mechanism design is necessary to keep the invisible hand on track. For collections of agents, mechanism design allows us to construct smart systems out of a collection of more limited systems—even uncooperative systems—in much the same way that teams of humans can achieve goals beyond the reach of any individual.
Examples of mechanism design include auctioning off cheap airline tickets, routing TCP packets between computers, deciding how medical interns will be assigned to hospitals, and deciding how robotic soccer players will cooperate with their teammates. Mechanism design became more than an academic subject in the 1990s when several nations, faced with the problem of auctioning off licenses to broadcast in various frequency bands, lost hundreds of millions of dollars in potential revenue as a result of poor mechanism design. Formally, a mechanism consists of (1) a language for describing the set of allowable strategies that agents may adopt, (2) a distinguished agent, called the center, that collects reports of strategy choices from the agents in the game, and (3) an outcome rule, known to all agents, that the center uses to determine the payoffs to each agent, given their strategy choices.
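The three components (strategy language, center, outcome rule) can be illustrated with one classic mechanism, the sealed-bid second-price (Vickrey) auction, in which the highest bidder wins but pays the second-highest bid, making truthful bidding a dominant strategy. This sketch is an illustration, not code from the text; names and values are hypothetical:

```python
# The "center" collects each agent's reported strategy (here, a bid)
# and applies an outcome rule known to all agents: highest bidder
# wins, but pays the second-highest bid (Vickrey / second-price rule).
def vickrey_auction(bids):
    """bids: dict agent -> reported bid. Returns (winner, price paid)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]]  # winner pays the second-highest bid
    return winner, price

print(vickrey_auction({"A": 10, "B": 7, "C": 4}))  # ('A', 7)
```

Because the price a winner pays does not depend on her own bid, reporting her true valuation weakly dominates any other bid.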
Norvig I 687
The roots of game theory can be traced back to proposals made in the 17th century by Christiaan Huygens and Gottfried Leibniz to study competitive and cooperative human interactions scientifically and mathematically. The first formal results in game theory are due to Zermelo (1913)(1) (who had, the year before, suggested a form of minimax search for games, albeit an incorrect one).
Emile Borel (1921)(2) introduced the notion of a mixed strategy. John von Neumann (1928)(3) proved that every two-person, zero-sum game has a maximin equilibrium in mixed strategies and a well-defined value. Von Neumann’s collaboration with the economist Oskar Morgenstern led to the publication in 1944 of the Theory of Games and Economic Behavior(4), the defining book for game theory.
1. Zermelo, E. (1913). Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels. In Proc. Fifth International Congress of Mathematicians, Vol. 2, pp. 501-504.
2. Borel, E. (1921). La théorie du jeu et les équations intégrales à noyau symétrique. Comptes Rendus Hebdomadaires des Séances de l’Académie des Sciences, 173, 1304-1308.
3. von Neumann, J. (1928). Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100, 295-320.
4. von Neumann, J. and Morgenstern, O. (1944). Theory of Games and Economic Behavior (first edition). Princeton University Press.

Explanation of symbols: Roman numerals indicate the source, Arabic numerals indicate the page number. The notes "problem:"/"solution:", "old:"/"new:" and "thesis:" are additions from the Dictionary of Arguments. If a German edition is specified, the page numbers refer to that edition.
Stuart J. Russell & Peter Norvig
Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ 2010