Dictionary of Arguments


Philosophical and Scientific Issues in Dispute
 
The author or concept searched is found in the following 11 entries.

Backtracking / Norvig
Norvig I 87
Backtracking search/search algorithms/artificial intelligence/Norvig/Russell: Backtracking uses less memory [than depth-first search]. (…) only one successor is generated at a time rather than all successors; each partially expanded node remembers which successor to generate next. In this way, only O(m) memory is needed rather than O(bm) (where m is the maximum depth of any node). Backtracking search facilitates yet another memory-saving (and time-saving) trick: the idea of generating a successor by modifying the current state description directly rather than copying it first. This reduces the memory requirements to just one state description and O(m) actions. For this to work, we must be able to undo each modification when we go back to generate the next successor.
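To make the memory argument concrete, here is a minimal Python sketch of depth-limited backtracking, assuming problem-supplied callbacks goal_test, legal_actions, apply and undo (these names are illustrative, not the book's API). Because the single state object is modified in place and restored on the way back, memory stays at one state description plus the O(m) actions on the recursion stack.

```python
# Sketch: backtracking search that mutates one state in place.
# goal_test, legal_actions, apply, undo are assumed problem callbacks.
def backtracking_search(state, goal_test, legal_actions, apply, undo, depth_limit):
    if goal_test(state):
        return []                        # empty plan: already at the goal
    if depth_limit == 0:
        return None                      # cutoff reached
    for action in legal_actions(state):
        apply(state, action)             # generate ONE successor, in place
        plan = backtracking_search(state, goal_test, legal_actions,
                                   apply, undo, depth_limit - 1)
        undo(state, action)              # restore before trying the next successor
        if plan is not None:
            return [action] + plan
    return None
```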
Norvig I 228
The idea of backtracking search goes back to Golomb and Baumert (1965)(1), and its application to constraint satisfaction is due to Bitner and Reingold (1975)(2), although they trace the basic algorithm back to the 19th century. Bitner and Reingold also introduced the MRV heuristic, which they called the most-constrained-variable heuristic. Brelaz (1979)(3) used the degree heuristic as a tiebreaker after applying the MRV heuristic. The resulting algorithm, despite its simplicity, is still the best method for k-coloring arbitrary graphs. Haralick and Elliot (1980)(4) proposed the least-constraining-value heuristic.
Norvig I 229
The basic backjumping method is due to John Gaschnig (1977(5), 1979(6)). Kondrak and van Beek (1997)(7) showed that this algorithm is essentially subsumed by forward checking. Conflict-directed backjumping was devised by Prosser (1993)(8). The most general and powerful form of intelligent backtracking was actually developed very early on by Stallman and Sussman (1977)(9). Their technique of dependency-directed backtracking led to the development of truth maintenance systems (Doyle, 1979)(10) (…). The connection between the two areas is analyzed by de Kleer (1989)(11).
For forward chaining, backward chaining: see >Software agents/Norvig.



1. Golomb, S. and Baumert, L. (1965). Backtrack programming. JACM, 14, 516–524.
2. Bitner, J. R. and Reingold, E. M. (1975). Backtrack programming techniques. CACM, 18(11), 651–656.
3. Brelaz, D. (1979). New methods to color the vertices of a graph. CACM, 22(4), 251–256.
4. Haralick, R. M. and Elliot, G. L. (1980). Increasing tree search efficiency for constraint satisfaction problems. AIJ, 14(3), 263–313.
5. Gaschnig, J. (1977). A general backtrack algorithm that eliminates most redundant tests. In IJCAI-77, p. 457.
6. Gaschnig, J. (1979). Performance measurement and analysis of certain search algorithms. Technical report CMU-CS-79-124, Computer Science Department, Carnegie-Mellon University.
7. Kondrak, G. and van Beek, P. (1997). A theoretical evaluation of selected backtracking algorithms. AIJ, 89, 365–387.
8. Prosser, P. (1993). Hybrid algorithms for constraint satisfaction problems. Computational Intelligence, 9, 268–299.
9. Stallman, R. M. and Sussman, G. J. (1977). Forward reasoning and dependency-directed backtracking in a system for computer-aided circuit analysis. AIJ, 9(2), 135–196.
10. Doyle, J. (1979). A truth maintenance system. AIJ, 12(3), 231–272.
11. de Kleer, J. (1989). A comparison of ATMS and CSP techniques. In IJCAI-89, Vol. 1, pp. 290–296.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010

Backtracking / Russell
Norvig I 87
Entry identical to >Backtracking/Norvig above.

Planning / Norvig
Norvig I 156
Planning/artificial intelligence/Norvig/Russell: The unpredictability and partial observability of real environments were recognized early on in robotics projects that used planning techniques, including Shakey (Fikes et al., 1972)(1) and the Edinburgh work described by Michie (1974)(2). The problems received more attention after the publication of McDermott’s (1978a) influential article, “Planning and acting”(3). >Belief states/Norvig.
Norvig I 366
Problems: [a simple] problem-solving agent (…) can find sequences of actions that result in a goal state. But it deals with atomic representations of states and thus needs good domain-specific heuristics to perform well. [A] hybrid propositional logical agent (…) can find plans without domain-specific heuristics because it uses domain-independent heuristics based on the logical structure of the problem. But it relies on ground (variable-free) propositional inference, which means that it may be swamped when there are many actions and states.
Norvig I 367
planning researchers have settled on a factored representation - one in which a state of the world is represented by a collection of variables. We use a language called PDDL, the Planning Domain Definition Language, that allows us to express all 4Tn² actions with one action schema. Each state is represented as a conjunction of fluents that are ground, functionless atoms. Database semantics is used: the closed-world assumption means that any fluents that are not mentioned are false, and the unique names assumption means that [x]1 and [x]2 are distinct. Actions are described by a set of action schemas that implicitly define the ACTIONS(s) and RESULT(s, a) functions needed to do a problem-solving search. >Frame Problem.
Classical planning concentrates on problems where most actions leave most things unchanged.
Actions: A set of ground (variable-free) actions can be represented by a single action schema.
The schema is a lifted representation—it lifts the level of reasoning from propositional logic to a restricted subset of first-order logic.
Action schema: The schema consists of the action name, a list of all the variables used in the schema, a precondition and an effect.
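To illustrate how a single schema stands for many ground actions, here is a small Python sketch. The Fly schema follows the book's air-cargo style example; the dataclass layout and the ground() helper are assumptions made for this sketch, not PDDL syntax or the book's implementation.

```python
# An action schema as plain data: name, variables, precondition, add/delete effects.
# Grounding substitutes constants for variables; preconditions filter out
# nonsensical combinations at planning time.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Schema:
    name: str
    variables: tuple
    precond: frozenset     # positive literals as (predicate, *args) tuples
    add: frozenset
    delete: frozenset

fly = Schema("Fly", ("p", "frm", "to"),
             precond=frozenset({("At", "p", "frm"), ("Plane", "p"),
                                ("Airport", "frm"), ("Airport", "to")}),
             add=frozenset({("At", "p", "to")}),
             delete=frozenset({("At", "p", "frm")}))

def ground(schema, objects):
    """Yield every ground action the schema stands for."""
    def subst(lit, b):
        return (lit[0],) + tuple(b.get(a, a) for a in lit[1:])
    for combo in product(objects, repeat=len(schema.variables)):
        b = dict(zip(schema.variables, combo))
        yield (schema.name, combo,
               {subst(l, b) for l in schema.precond},
               {subst(l, b) for l in schema.add},
               {subst(l, b) for l in schema.delete})
```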
Norvig I 367
Forward/backward (progression/regression) state-space search: Cf. >Forward chaining, >backward chaining.
Norvig I 376
Heuristics for planning: a heuristic function h(s) estimates the distance from a state s to the goal; if we can derive an admissible heuristic for this distance - one that does not overestimate - then we can use A∗ search to find optimal solutions.
Representation: Planning uses a factored representation for states and action schemas. That makes it possible to define good domain-independent heuristics and for programs to automatically apply a good domain-independent heuristic for a given problem. Think of a search problem as a graph where the nodes are states and the edges are actions. The problem is to find a path connecting the initial state to a goal state. There are two ways we can relax this problem to make it easier: by adding more edges to the graph, making it strictly easier to find a path, or by grouping multiple nodes together, forming an abstraction of the state space that has fewer states, and thus is easier to search.
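Ignoring action preconditions adds edges to the graph in exactly this sense. A sketch under stated assumptions: if delete lists and preconditions are dropped, and each action adds at most one goal fluent, then the number of unsatisfied goal fluents never overestimates the true cost and can drive A∗. States are frozensets of fluents; the callback names are illustrative.

```python
import heapq, itertools

def h_unsatisfied_goals(state, goal):
    # Admissible under the stated relaxation (one goal fluent per action).
    return len(goal - state)

def astar(start, goal, successors, h):
    # successors(s) yields (action, next_state, cost); goal is a frozenset.
    tie = itertools.count()              # tie-breaker so the heap never compares states
    frontier = [(h(start, goal), 0, next(tie), start, [])]
    best_g = {start: 0}
    while frontier:
        _, g, _, s, plan = heapq.heappop(frontier)
        if goal <= s:                    # every goal fluent holds
            return plan
        for action, s2, cost in successors(s):
            g2 = g + cost
            if g2 < best_g.get(s2, float("inf")):
                best_g[s2] = g2
                heapq.heappush(frontier,
                               (g2 + h(s2, goal), g2, next(tie), s2, plan + [action]))
    return None
```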
Norvig I 377
State abstraction: Many planning problems have 10^100 states or more, and relaxing the actions does nothing to reduce the number of states. Therefore, we now look at relaxations that decrease the number of states by forming a state abstraction - a many-to-one mapping from states in the ground representation of the problem to the abstract representation. The easiest form of state abstraction is to ignore some fluents.
Norvig I 378
Heuristics: A key idea in defining heuristics is decomposition: dividing a problem into parts, solving each part independently, and then combining the parts. The subgoal independence assumption is that the cost of solving a conjunction of subgoals is approximated by the sum of the costs of solving each subgoal independently.
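A one-function sketch of the subgoal independence assumption; solve_cost is an assumed per-subgoal estimator (for instance a pattern-database lookup). The sum can overestimate when subplans share actions, so it is not guaranteed admissible; taking the maximum instead would be.

```python
def h_subgoal_independence(state, subgoals, solve_cost):
    # Cost of the conjunctive goal approximated as the sum of the parts,
    # each subgoal costed as if it were independent of the others.
    return sum(solve_cost(state, g) for g in subgoals)
```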
Norvig I 390
Planning as constraint satisfaction: >Constraint satisfaction problems.
Norvig I 393
History of AI planning: AI planning arose from investigations into state-space search, theorem proving, and control theory and from the practical needs of robotics, scheduling, and other domains. STRIPS (Fikes and Nilsson, 1971)(4), the first major planning system, illustrates the interaction of these influences.
General problem solver/GPS: the General Problem Solver (Newell and Simon, 1961)(5), [was] a state-space search system that used means–ends analysis. The control structure of STRIPS was modeled on that of GPS.
Norvig I 394
Language: The Problem Domain Description Language, or PDDL (Ghallab et al., 1998)(6), was introduced as a computer-parsable, standardized syntax for representing planning problems and has been used as the standard language for the International Planning Competition since 1998. There have been several extensions; the most recent version, PDDL 3.0, includes plan constraints and preferences (Gerevini and Long, 2005)(7).
Subproblems: Problem decomposition was achieved by computing a subplan for each subgoal and then stringing the subplans together in some order. This approach, called linear planning by Sacerdoti (1975)(8), was soon discovered to be incomplete. It cannot solve some very simple problems (…). A complete planner must allow for interleaving of actions from different subplans within a single sequence. The notion of serializable subgoals (Korf, 1987)(9) corresponds exactly to the set of problems for which noninterleaved planners are complete. One solution to the interleaving problem was goal-regression planning, a technique in which steps in a totally ordered plan are reordered so as to avoid conflict between subgoals. This was introduced by Waldinger (1975)(10) and also used by Warren’s (1974)(11) WARPLAN.
Partial ordering: The ideas underlying partial-order planning include the detection of conflicts (Tate, 1975a)(12) and the protection of achieved conditions from interference (Sussman, 1975)(13). The construction of partially ordered plans (then called task networks) was pioneered by the NOAH planner (Sacerdoti, 1975(8), 1977(14)) and by Tate’s (1975b(15), 1977(16)) NONLIN system. Partial-order planning dominated the next 20 years of research (…).
State-space planning: The resurgence of interest in state-space planning was pioneered by Drew McDermott’s UNPOP program (1996)(17), which was the first to suggest the ignore-delete-list heuristic (…). Bonet and Geffner’s Heuristic Search Planner (HSP) and its later derivatives (Bonet and Geffner, 1999(18); Haslum et al., 2005(19); Haslum, 2006(20)) were the first to make
Norvig I 395
state-space search practical for large planning problems. The most successful state-space searcher to date is FF (Hoffmann, 2001(21); Hoffmann and Nebel, 2001(22); Hoffmann, 2005(23)), winner of the AIPS 2000 planning competition. LAMA (Richter and Westphal, 2008)(24), a planner based on FASTDOWNWARD with improved heuristics, won the 2008 competition. >Environment/world/planning/Norvig. See also McDermott (1985)(25).

1. Fikes, R. E., Hart, P. E., and Nilsson, N. J. (1972). Learning and executing generalized robot plans. AIJ, 3(4), 251–288.
2. Michie, D. (1974). Machine intelligence at Edinburgh. In On Intelligence, pp. 143–155. Edinburgh University Press.
3. McDermott, D. (1978a). Planning and acting. Cognitive Science, 2(2), 71–109.
4. Fikes, R. E. and Nilsson, N. J. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. AIJ, 2(3–4), 189–208.
5. Newell, A. and Simon, H. A. (1961). GPS, a program that simulates human thought. In Billing, H. (Ed.), Lernende Automaten, pp. 109–124. R. Oldenbourg.
6. Ghallab, M., Howe, A., Knoblock, C. A., and McDermott, D. (1998). PDDL—The planning domain definition language. Tech. rep. DCS TR-1165, Yale Center for Computational Vision and Control.
7. Gerevini, A. and Long, D. (2005). Plan constraints and preferences in PDDL3. Tech. rep., Dept. of Electronics for Automation, University of Brescia, Italy.
8. Sacerdoti, E. D. (1975). The nonlinear nature of plans. In IJCAI-75, pp. 206–214.
9. Korf, R. E. (1987). Planning as search: A quantitative approach. AIJ, 33(1), 65–88.
10. Waldinger, R. (1975). Achieving several goals simultaneously. In Elcock, E. W. and Michie, D. (Eds.), Machine Intelligence 8, pp. 94–138. Ellis Horwood.
11. Warren, D. H. D. (1974). WARPLAN: A System for Generating Plans. Department of Computational Logic Memo 76, University of Edinburgh.
12. Tate, A. (1975a). Interacting goals and their use. In IJCAI-75, pp. 215–218.
13. Sussman, G. J. (1975). A Computer Model of Skill Acquisition. Elsevier/North-Holland.
14. Sacerdoti, E. D. (1977). A Structure for Plans and Behavior. Elsevier/North-Holland.
15. Tate, A. (1975b). Using Goal Structure to Direct Search in a Problem Solver. Ph.D. thesis, University of Edinburgh.
16. Tate, A. (1977). Generating project networks. In IJCAI-77, pp. 888–893.
17. McDermott, D. (1996). A heuristic estimator for means-ends analysis in planning. In ICAPS-96, pp. 142–149.
18. Bonet, B. and Geffner, H. (1999). Planning as heuristic search: New results. In ECP-99, pp. 360–372.
19. Haslum, P., Bonet, B., and Geffner, H. (2005). New admissible heuristics for domain-independent planning. In AAAI-05.
20. Haslum, P. (2006). Improving heuristics through relaxed search – An analysis of TP4 and HSP*a in the 2004 planning competition. JAIR, 25, 233–267.
21. Hoffmann, J. (2001). FF: The fast-forward planning system. AIMag, 22(3), 57–62.
22. Hoffmann, J. and Nebel, B. (2001). The FF planning system: Fast plan generation through heuristic search. JAIR, 14, 253–302.
23. Hoffmann, J. (2005). Where “ignoring delete lists” works: Local search topology in planning benchmarks. JAIR, 24, 685–758.
24. Richter, S. and Westphal, M. (2008). The LAMA planner. In Proc. International Planning Competition at ICAPS.
25. McDermott, D. (1985). Reasoning about plans. In Hobbs, J. and Moore, R. (Eds.), Formal Theories of the Commonsense World. Intellect Books.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010

Planning / Russell
Norvig I 156
Entry identical to >Planning/Norvig above.

Search Algorithms / Norvig
Norvig I 64
Search Algorithms/Russell/Norvig: uninformed search algorithms [are] algorithms that are given no information about the problem other than its definition. Although some of these algorithms can solve any solvable problem, none of them can do so efficiently. Informed search algorithms, on the other hand, can do quite well given some guidance on where to look for solutions.
A. Uninformed search.
Norvig I 81
Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on. In general, all the nodes are expanded at a given depth in the search tree before any nodes at the next level are expanded.
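A minimal Python sketch of the strategy just described; goal_test and successors are assumed callbacks. The FIFO queue is what guarantees that every node at depth d is expanded before any node at depth d + 1, and keeping whole paths plus the explored set in memory is what makes space, not time, the binding constraint noted below.

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    if goal_test(start):
        return [start]
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()        # shallowest path first
        for child in successors(path[-1]):
            if child not in explored:
                if goal_test(child):
                    return path + [child]
                explored.add(child)
                frontier.append(path + [child])
    return None
```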
Norvig I 83
The memory requirements are a bigger problem for breadth-first search than is the execution time. (…) exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances.
Norvig I 85
Depth-first search always expands the deepest node in the current frontier of the search tree. Both versions are nonoptimal.
Norvig I 87
Backtracking search: uses less memory. (…) only one successor is generated at a time rather than all successors; each partially expanded node remembers which successor to generate next. In this way, only O(m) memory is needed rather than O(bm). Backtracking search facilitates yet another memory-saving (and time-saving) trick: the idea of generating a successor by modifying the current state description directly rather than copying it first. This reduces the memory requirements to just one state description and O(m) actions. For this to work, we must be able to undo each modification when we go back to generate the next successor.

B. Informed (heuristic) search.
Norvig I 92
Best-first search is an instance of the general tree-search or graph-search algorithm in which a node is selected for expansion based on an evaluation function, f(n). The evaluation function is construed as a cost estimate, so the node with the lowest evaluation is expanded first. Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just the heuristic function; that is, f(n) = h(n).
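A sketch of greedy best-first search under the same assumptions: a priority queue ordered by h alone, so the node that looks closest to the goal is always expanded first. The counter is only a tie-breaker so the heap never has to compare states; fast in practice, but neither complete on infinite graphs nor optimal.

```python
import heapq, itertools

def greedy_best_first(start, goal_test, successors, h):
    tie = itertools.count()
    frontier = [(h(start), next(tie), start, [start])]   # ordered by f(n) = h(n)
    explored = set()
    while frontier:
        _, _, node, path = heapq.heappop(frontier)
        if goal_test(node):
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in successors(node):
            if child not in explored:
                heapq.heappush(frontier, (h(child), next(tie), child, path + [child]))
    return None
```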
Norvig I 108
A general tree search algorithm considers all possible paths to find a solution, whereas a graph search algorithm avoids consideration of redundant paths.
Norvig I 120
Online search: here, the agent is faced with a state space that is initially unknown and must be explored.
Norvig I 121
Local search algorithms: If the path to the goal does not matter, we might consider a different class of algorithms, ones that do not worry about paths at all. Local search algorithms operate using a single current node (rather than multiple paths) and generally move only to neighbors of that node. These algorithms have two key advantages: (1) they use very little memory - usually a constant amount; and (2) they can often find reasonable solutions in large or infinite (continuous) state spaces for which systematic algorithms are unsuitable. ((s) For problems cf. >Local minima (local maxima); for a solution: >Simulated annealing.)
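Steepest-ascent hill climbing is the simplest such local search; a sketch, with neighbors and value as assumed callbacks. It keeps a single current node, uses constant memory, and stops at the first local maximum, which is exactly the failure mode flagged above.

```python
def hill_climbing(state, neighbors, value):
    # Move to the best neighbor until no neighbor improves on the current state.
    while True:
        candidates = list(neighbors(state))
        if not candidates:
            return state
        best = max(candidates, key=value)
        if value(best) <= value(state):
            return state                 # local maximum (not necessarily global)
        state = best
```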
Norvig I 125
Local beam search: (beam search is a path-based algorithm). The local beam search algorithm keeps track of k states rather than
Norvig I 126
just one. It begins with k randomly generated states. At each step, all the successors of all k states are generated. If any one is a goal, the algorithm halts. Otherwise, it selects the k best successors from the complete list and repeats. >Genetic algorithms.
And-Or search problem: see >Terminology/Norvig.
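A sketch of the loop just described, with successors, value and goal_test as assumed callbacks. Pooling all successors before selecting the k best is what distinguishes this from k independent hill climbs: a state whose successors are all poor is quietly dropped from the beam.

```python
def local_beam_search(starts, successors, value, goal_test, k, max_steps=1000):
    states = sorted(starts, key=value, reverse=True)[:k]     # k initial states
    for _ in range(max_steps):
        pool = {s2 for s in states for s2 in successors(s)}  # successors of ALL k states
        for s2 in pool:
            if goal_test(s2):
                return s2
        if not pool:
            return None
        states = sorted(pool, key=value, reverse=True)[:k]   # keep the k best overall
    return None
```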
Norvig I 147
Online search: >Online search/Norvig.
Norvig I 154
Literature for local search: the Newton–Raphson method (Newton, 1671(1); Raphson, 1690(2)) can be seen as a very efficient local search method for continuous spaces in which gradient information is available.
Brent (1973)(3) is a classic reference for optimization algorithms that do not require such information.
Beam search, which we have presented as a local search algorithm, originated as a bounded-width variant of dynamic programming for speech recognition in the HARPY system (Lowerre, 1976)(4). A related algorithm is analyzed in depth by Pearl (1984(5)).
The topic of local search was reinvigorated in the early 1990s by surprisingly good results for large constraint-satisfaction problems such as n-queens (Minton et al., 1992)(6) and logical reasoning (Selman et al., 1992)(7) and by the incorporation of randomness, multiple simultaneous searches, and other improvements.
Tabu search: a variant of hill climbing called tabu search has gained popularity (Glover and Laguna, 1997)(8). This algorithm maintains a tabu list of k previously visited states that cannot be revisited; as well as improving efficiency when searching graphs, this list can allow the algorithm to escape from some local minima.
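A sketch of that scheme, assuming neighbors and value callbacks; the deque with maxlen=k is the tabu list of the last k visited states. Because the best non-tabu neighbor is taken even when it is worse than the current state, the search can walk out of some local optima.

```python
from collections import deque

def tabu_search(state, neighbors, value, k=50, max_steps=1000):
    tabu = deque([state], maxlen=k)      # last k visited states, not revisitable
    best = state
    for _ in range(max_steps):
        candidates = [n for n in neighbors(state) if n not in tabu]
        if not candidates:
            break
        state = max(candidates, key=value)   # best non-tabu move, even if downhill
        tabu.append(state)
        if value(state) > value(best):
            best = state
    return best
```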
Stage algorithm: Another useful improvement on hill climbing is the stage algorithm (Boyan and Moore, 1998)(9). The idea is to use the local maxima found by random-restart hill climbing to get an idea of the overall shape of the landscape. >Constraint Satisfaction Problems/Norvig.
Norvig I 227
Constraint satisfaction problems (CSPs) represent a state with a set of variable/value pairs and represent the conditions for a solution by a set of constraints on the variables. Many important real-world problems can be described as CSPs. A number of inference techniques use the constraints to infer which variable/value pairs are consistent and which are not. These include node, arc, path, and k-consistency.
Backtracking search, a form of depth-first search, is commonly used for solving CSPs.
Inference can be interwoven with search.
The minimum-remaining-values and degree heuristics are domain-independent methods for deciding which variable to choose next in a backtracking search. The least-constraining-value heuristic helps in deciding which value to try first for a given variable. Backtracking occurs when no legal assignment can be found for a variable. Conflict-directed backjumping backtracks directly to the source of the problem.
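A compact sketch of backtracking search for CSPs with the MRV heuristic; domains maps each variable to its value set and consistent(var, value, assignment) is an assumed constraint check. Counting the values still consistent with the partial assignment is a simple stand-in for the domain pruning a real solver would do with forward checking.

```python
def backtrack(assignment, domains, consistent):
    unassigned = [v for v in domains if v not in assignment]
    if not unassigned:
        return assignment                # every variable assigned: solution
    def remaining(v):                    # values still consistent with the assignment
        return [x for x in domains[v] if consistent(v, x, assignment)]
    var = min(unassigned, key=lambda v: len(remaining(v)))   # MRV
    for value in remaining(var):
        assignment[var] = value
        result = backtrack(assignment, domains, consistent)
        if result is not None:
            return result
        del assignment[var]              # undo and try the next value
    return None                          # dead end: backtrack one level up

# Usage: backtrack({}, {"A": {1, 2}, "B": {1, 2}}, lambda v, x, a: x not in a.values())
```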
Local search using the min-conflicts heuristic has also been applied to constraint satisfaction problems with great success.
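For contrast, a sketch of min-conflicts over complete assignments; conflicts(var, value, assignment) is an assumed function counting violated constraints. Starting from a full (usually inconsistent) assignment and repairing one conflicted variable at a time is what makes the method so effective on problems such as n-queens.

```python
import random

def min_conflicts(assignment, domains, conflicts, max_steps=10_000):
    for _ in range(max_steps):
        conflicted = [v for v in assignment
                      if conflicts(v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment            # no constraint violated: solution
        var = random.choice(conflicted)
        assignment[var] = min(domains[var],
                              key=lambda x: conflicts(var, x, assignment))
    return None
```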
For forward chaining, backward chaining: see >Software agents/Norvig.


1. Newton, I. (1664–1671). Methodus fluxionum et serierum infinitarum. Unpublished notes.
2. Raphson, J. (1690). Analysis aequationum universalis. Apud Abelem Swalle, London.
3. Brent, R. P. (1973). Algorithms for minimization without derivatives. Prentice-Hall.
4. Lowerre, B. T. (1976). The HARPY Speech Recognition System. Ph.D. thesis, Computer Science Department, Carnegie-Mellon University.
5. Pearl, J. (1984). Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley.
6. Minton, S., Johnston, M. D., Philips, A. B., and Laird, P. (1992). Minimizing conflicts: A heuristic repair method for constraint satisfaction and scheduling problems. AIJ, 58(1–3), 161–205.
7. Selman, B., Levesque, H. J., and Mitchell, D. (1992). A new method for solving hard satisfiability problems. In AAAI-92, pp. 440–446.
8. Glover, F. and Laguna, M. (Eds.). (1997). Tabu Search. Kluwer.
9. Boyan, J. A. and Moore, A. W. (1998). Learning evaluation functions for global optimization and Boolean satisfiability. In AAAI-98.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010

Search Algorithms / Russell
Norvig I 64
Entry identical to >Search Algorithms/Norvig above.

Software Agents / AI Research
Norvig I 64
Software Agents/artificial intelligence/Russell/Norvig: [Reflex] agents base their actions on a direct mapping from states to actions. Such agents cannot operate well in environments for which this mapping would be too large to store and would take too long to learn. Goal-based agents, on the other hand, consider future actions and the desirability of their outcomes.
Problem-solving agents use atomic representations, (…) that is, states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms.
Norvig I 4
Computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals. A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
Norvig I 235
Knowledge based Agents/logical agents: The central component of a knowledge-based agent is its knowledge base, or KB. A knowledge base is a set of sentences. (Here “sentence” is used as a technical term. It is related but not identical to the sentences of English and other natural languages.) Each sentence is expressed in a language called a knowledge representation language and represents some assertion about the world.
Norvig I 240
Semantics: The semantics defines the truth of each sentence with respect to each possible world. Model: instead of “possible world” we need to be more precise and use the term model. Whereas possible worlds might be thought of as (potentially) real environments that the agent might or might not be in, models are mathematical abstractions, each of which simply fixes the truth or falsehood of every relevant sentence.
Norvig I 241
Knowledge Base: The KB can be thought of as a set of sentences or as a single sentence that asserts all the individual sentences. The KB is false in models that contradict what the agent knows (…).
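This semantics suggests a brute-force test of entailment: KB entails α iff α holds in every model in which all KB sentences hold. A tiny sketch that represents sentences as Boolean functions of a model; the representation is an assumption of this sketch, though the enumeration idea matches the truth-table method.

```python
from itertools import product

def entails(kb, alpha, symbols):
    # kb: list of sentences; each sentence maps a model (dict symbol -> bool) to bool.
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(s(model) for s in kb) and not alpha(model):
            return False                 # found a model of KB in which alpha fails
    return True

# Usage: KB = {P, P -> Q}; does KB entail Q?
kb = [lambda m: m["P"], lambda m: (not m["P"]) or m["Q"]]
print(entails(kb, lambda m: m["Q"], ["P", "Q"]))     # True
```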
Norvig I 242
Completeness: an inference algorithm is complete if it can derive any sentence that is entailed. Fortunately, there are complete inference procedures for logics that are sufficiently expressive to handle many knowledge bases. Real world: if [the knowledge base] KB is true in the real world, then any sentence α derived from KB by a sound inference procedure is also true in the real world. So, while an inference process operates on “syntax”—internal physical configurations such as bits in registers or patterns of electrical blips in brains - the process corresponds
Norvig I 243
to the real-world relationship whereby some aspect of the real world is the case by virtue of other aspects of the real world being the case. Grounding: grounding [is] the connection between logical reasoning processes and the real environment in which the agent exists. In particular, how do we know that KB is true in the real world? A simple answer is that the agent’s sensors create the connection. Cf. >Semantics, >Syntax; for the philosophical discussion see also >Facts/Wittgenstein, >States of Affairs/Wittgenstein, >Foundation/Philosophical theories.
Norvig I 257
Forward chaining: The forward-chaining algorithm (…) determines if a single proposition symbol q - the query - is entailed by a knowledge base of definite clauses. It begins from known facts (positive literals) in the knowledge base. If all the premises of an implication are known, then its conclusion is added to the set of known facts.
Norvig I 258
It is easy to see that forward chaining is sound: every inference is essentially an application of Modus Ponens. Forward chaining is also complete: every entailed atomic sentence will be derived. The easiest way to see this is to consider the final state of the inferred table (after the algorithm reaches a fixed point where no new inferences are possible). Cf. >Fixed points. Forward chaining is an example of the general concept of data-driven reasoning – that is, reasoning in which the focus of attention starts with the known data. It can be used within an agent to derive conclusions from incoming percepts, often without a specific query in mind.
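A naive fixed-point sketch of forward chaining for definite clauses; rules is an assumed list of (premises, conclusion) pairs. The book's algorithm gains efficiency with an agenda and premise counts; this version trades that for brevity while showing the same behavior: fire every rule whose premises are all known until nothing new can be derived.

```python
def forward_chaining(facts, rules, query):
    known = set(facts)                   # known positive literals
    changed = True
    while changed:                       # loop until the fixed point
        changed = False
        for premises, conclusion in rules:   # rule: p1 ∧ ... ∧ pn -> conclusion
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
                if conclusion == query:
                    return True
    return query in known
```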
Backward chaining: works backward from the query. If the query q is known to be true, then no work is needed. Otherwise, the algorithm finds those implications in the knowledge base whose conclusion is q. If all the premises of one of those implications can be proved true (by backward chaining), then q is true. Backward chaining is a form of goal-directed reasoning. It is useful for answering specific questions such as “What shall I do now?” and “Where are my keys?” Often, the cost of backward chaining is much less than linear in the size of the knowledge base, because the process touches only relevant facts.
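The goal-directed counterpart, again as a naive sketch over the same (premises, conclusion) rule format; the seen set guards against trying to prove q by way of q.

```python
def backward_chaining(facts, rules, query, seen=frozenset()):
    if query in facts:
        return True                      # known fact: nothing to prove
    if query in seen:
        return False                     # avoid infinite regress on cyclic rules
    for premises, conclusion in rules:
        if conclusion == query and all(
                backward_chaining(facts, rules, p, seen | {query})
                for p in premises):
            return True
    return False
```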
Norvig I 275
History: John McCarthy’s paper “Programs with Common Sense” (McCarthy, 1958(1), 1968(2)) promulgated the notion of agents that use logical reasoning to mediate between percepts and actions. Allen Newell’s (1982)(3) article “The Knowledge Level” makes the case that rational agents can be described and analyzed at an abstract level defined by the knowledge they possess rather than the programs they run. The declarative and procedural approaches to AI are analyzed in depth by Boden (1977)(4). The debate was revived by, among others, Brooks (1991)(5) and Nilsson (1991)(6), and continues to this day (Shaparau et al., 2008)(7). Meanwhile, the declarative approach has spread into other areas of computer science such as networking (Loo et al., 2006)(8).
Norvig I 278
Current state: The current state of theoretical understanding is summarized by Achlioptas (2009)(9). The satisfiability threshold conjecture states that, for each k, there is a sharp satisfiability threshold r_k, such that as the number of variables n → ∞, instances below the threshold are satisfiable with probability 1, while those above the threshold are unsatisfiable with probability 1. The conjecture was not quite proved by Friedgut (1999)(10): a sharp threshold exists but its location might depend on n even as n → ∞. Despite significant progress in asymptotic analysis of the threshold location for large k (Achlioptas and Peres, 2004(11); Achlioptas et al., 2007(12)), all that can be proved for k = 3 is that it lies in the range [3.52, 4.51]. Current theory suggests that a peak in the run time of a SAT solver is not necessarily related to the satisfiability threshold, but instead to a phase transition in the solution distribution and structure of SAT instances. Empirical results due to Coarfa et al. (2003)(13) support this view. In fact, algorithms such as survey propagation (Mézard et al., 2002(14); Maneva et al., 2007(15)) take advantage of special properties of random SAT instances near the satisfiability threshold and greatly outperform general SAT solvers on such instances.
Neural networks: The idea of building agents with propositional logic can be traced back to the seminal paper of McCulloch and Pitts (1943)(16), which initiated the field of neural networks. >Frame problem, >Environment/AI research, >Universe/AI research, >Decisions/AI research, >Uncertainty/AI research.

1. McCarthy, J. (1958). Programs with common sense. In Proc. Symposium on Mechanisation of Thought Processes, Vol. 1, pp. 77–84.
2. McCarthy, J. (1968). Programs with common sense. In Minsky, M. L. (Ed.), Semantic Information Processing, pp. 403–418. MIT Press.
3. Newell, A. (1982). The knowledge level. AIJ, 18(1), 82–127.
4. Boden, M. A. (1977). Artificial Intelligence and Natural Man. Basic Books.
5. Brooks, R. A. (1991). Intelligence without representation. AIJ, 47(1–3), 139–159.
6. Nilsson, N. J. (1991). Logic and artificial intelligence. AIJ, 47(1–3), 31–56.
7. Shaparau, D., Pistore, M., and Traverso, P. (2008). Fusing procedural and declarative planning goals for nondeterministic domains. In AAAI-08.
8. Loo, B. T., Condie, T., Garofalakis, M., Gay, D. E., Hellerstein, J. M., Maniatis, P., Ramakrishnan, R., Roscoe, T., and Stoica, I. (2006). Declarative networking: Language, execution and optimization. In SIGMOD-06.
9. Achlioptas, D. (2009). Random satisfiability. In Biere, A., Heule, M., van Maaren, H., and Walsh, T. (Eds.), Handbook of Satisfiability. IOS Press.
10. Friedgut, E. (1999). Necessary and sufficient conditions for sharp thresholds of graph properties, and the k-SAT problem. J. American Mathematical Society, 12, 1017–1054.
11. Achlioptas, D. and Peres, Y. (2004). The threshold for random k-SAT is 2^k log 2 − o(k). J. American Mathematical Society, 17(4), 947–973.
12. Achlioptas, D., Naor, A., and Peres, Y. (2007). On the maximum satisfiability of random formulas. JACM, 54(2).
13. Coarfa, C., Demopoulos, D., Aguirre, A., Subramanian, D., and Vardi, M. (2003). Random 3-SAT: The plot thickens. Constraints, 8(3), 243–261.
14. Mézard, M., Parisi, G., and Zecchina, R. (2002). Analytic and algorithmic solution of random satisfiability problems. Science, 297, 812–815.
15. Maneva, E., Mossel, E., and Wainwright, M. J. (2007). A new look at survey propagation and its generalizations. JACM, 54(4).
16. McCulloch, W. S. and Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115–137.


Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010
Software Agents Norvig Norvig I 64
Software Agents/artificial intelligence/Russell/Norvig: agents base their actions on a direct mapping from states to actions. Such agents cannot operate well in environments for which this mapping would be too large to store and would take too long to learn. Goal-based agents, on the other hand, consider future actions and the desirability of their outcomes.
Problem-solving agents use atomic representations, (…) that is, states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms.
Norvig I 4
Computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals. A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
Norvig I 235
Knowledge based Agents/logical agents: The central component of a knowledge-based agent is its knowledge base, or KB. A knowledge base is a set of sentences. (Here “sentence” is used as a technical term. It is related but not identical to the sentences of English and other natural languages.) Each sentence is expressed in a language called a knowledge representation language and represents some assertion about the world.
Norvig I 240
Semantics: The semantics defines the truth of each sentence with respect to each possible world. Model: instead of “possible world” we need to be more precise and use the term model. Whereas possible worlds might be thought of as (potentially) real environments that the agent might or might not be in, models are mathematical abstractions, each of which simply fixes the truth or falsehood of every relevant sentence.
Norvig I 241
Knowledge Base: The KB can be thought of as a set of sentences or as a single sentence that asserts all the individual sentences. The KB is false in models that contradict what the agent knows (…).
Norvig I 242
Completeness: an inference algorithm is complete if it can derive any sentence that is entailed. Fortunately, there are complete inference procedures for logics that are sufficiently expressive to handle many knowledge bases. Real world: if [the knowledge base] KB is true in the real world, then any sentence α derived from KB by a sound inference procedure is also true in the real world. So, while an inference process operates on “syntax”—internal physical configurations such as bits in registers or patterns of electrical blips in brains - the process corresponds
Norvig I 243
to the real-world relationship whereby some aspect of the real world is the case by virtue of other aspects of the real world being the case. Grounding: grounding [is] the connection between logical reasoning processes and the real environment in which the agent exists. In particular, how do we know that KB is true in the real world? A simple answer is that the agent’s sensors create the connection. Cf. >Semantics, >Syntax; for the philosophical discussion see also >Facts/Wittgenstein, >States of Affairs/Wittgenstein, >Foundation/Philosophical theories.
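The entailment-by-models idea can be made concrete with a brute-force model checker. The following is a minimal sketch (illustrative, not the book's code; representing sentences as Python predicates over a model dictionary is an assumption for this example): KB entails α exactly when α holds in every model in which KB holds.

```python
# Minimal sketch of entailment by model checking (illustrative, not the
# book's code): sentences are Python predicates over a model dict.
from itertools import product

def tt_entails(kb, alpha, symbols):
    """KB entails alpha iff alpha holds in every model where KB holds."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False  # found a model of KB in which alpha is false
    return True

# Example: KB = (P => Q) and P; query alpha = Q.
kb = lambda m: (not m["P"] or m["Q"]) and m["P"]
alpha = lambda m: m["Q"]
print(tt_entails(kb, alpha, ["P", "Q"]))  # True
```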
Norvig I 257
Forward chaining: The forward-chaining algorithm (…) determines if a single proposition symbol q - the query - is entailed by a knowledge base of definite clauses. It begins from known facts (positive literals) in the knowledge base. If all the premises of an implication are known, then its conclusion is added to the set of known facts.
Norvig I 258
It is easy to see that forward chaining is sound: every inference is essentially an application of Modus Ponens. Forward chaining is also complete: every entailed atomic sentence will be derived. The easiest way to see this is to consider the final state of the inferred table (after the algorithm reaches a fixed point where no new inferences are possible). Cf. >Fixed points. Forward chaining is an example of the general concept of data-driven reasoning – that is, reasoning in which the focus of attention starts with the known data. It can be used within an agent to derive conclusions from incoming percepts, often without a specific query in mind.
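As an illustration, here is a hedged sketch of forward chaining over propositional definite clauses. The (premises, conclusion) data format and the function names are assumptions for this example, not the book's implementation; the counting of unsatisfied premises per clause is the standard trick that makes the procedure run in time linear in the size of the knowledge base.

```python
# Sketch of forward chaining over definite clauses (premises -> conclusion);
# the data format is an assumption, not the book's code.
from collections import deque

def forward_chain(clauses, facts, query):
    """Return True if query is entailed by the definite-clause KB."""
    count = {i: len(premises) for i, (premises, _) in enumerate(clauses)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(clauses):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:       # all premises known: fire the rule
                    agenda.append(conclusion)
    return False

# Example: A and B are facts; A ^ B => L; L ^ B => Q.
clauses = [({"A", "B"}, "L"), ({"L", "B"}, "Q")]
print(forward_chain(clauses, ["A", "B"], "Q"))  # True
```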
Backward chaining: works backward from the query. If the query q is known to be true, then no work is needed. Otherwise, the algorithm finds those implications in the knowledge base whose conclusion is q. If all the premises of one of those implications can be proved true (by backward chaining), then q is true. Backward chaining is a form of goal-directed reasoning. It is useful for answering specific questions such as “What shall I do now?” and “Where are my keys?” Often, the cost of backward chaining is much less than linear in the size of the knowledge base, because the process touches only relevant facts.
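A companion sketch of backward chaining over the same assumed clause format (again an illustration, not the book's algorithm listing); the ancestor set blocks cyclic goal regress:

```python
# Sketch of backward chaining over the same clause format; _stack holds
# the goals currently being attempted, to avoid looping on cyclic rules.
def backward_chain(clauses, facts, query, _stack=frozenset()):
    if query in facts:
        return True                      # known fact: no work needed
    if query in _stack:
        return False                     # already trying this goal: cycle
    for premises, conclusion in clauses:
        if conclusion == query and all(
                backward_chain(clauses, facts, p, _stack | {query})
                for p in premises):
            return True
    return False

clauses = [({"A", "B"}, "L"), ({"L", "B"}, "Q")]
print(backward_chain(clauses, {"A", "B"}, "Q"))  # True
```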
Norvig I 275
History: John McCarthy’s paper “Programs with Common Sense” (McCarthy, 1958(1), 1968(2)) promulgated the notion of agents that use logical reasoning to mediate between percepts and actions. Allen Newell’s (1982)(3) article “The Knowledge Level” makes the case that rational agents can be described and analyzed at an abstract level defined by the knowledge they possess rather than the programs they run. The declarative and procedural approaches to AI are analyzed in depth by Boden (1977)(4). The debate was revived by, among others, Brooks (1991)(5) and Nilsson (1991)(6), and continues to this day (Shaparau et al., 2008)(7). Meanwhile, the declarative approach has spread into other areas of computer science such as networking (Loo et al., 2006)(8).
Norvig I 278
Current state: The current state of theoretical understanding is summarized by Achlioptas (2009)(9). The satisfiability threshold conjecture states that, for each k, there is a sharp satisfiability threshold r_k, such that as the number of variables n → ∞, instances below the threshold are satisfiable with probability 1, while those above the threshold are unsatisfiable with probability 1. The conjecture was not quite proved by Friedgut (1999)(10): a sharp threshold exists but its location might depend on n even as n → ∞. Despite significant progress in asymptotic analysis of the threshold location for large k (Achlioptas and Peres, 2004(11); Achlioptas et al., 2007(12)), all that can be proved for k = 3 is that it lies in the range [3.52, 4.51]. Current theory suggests that a peak in the run time of a SAT solver is not necessarily related to the satisfiability threshold, but instead to a phase transition in the solution distribution and structure of
SAT instances. Empirical results due to Coarfa et al. (2003)(13) support this view. In fact, algorithms such as survey propagation (Mézard et al., 2002(14); Maneva et al., 2007(15)) take advantage of special properties of random SAT instances near the satisfiability threshold and greatly outperform general SAT solvers on such instances.
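The phase transition can be observed with a small experiment (entirely illustrative, not from the book): generate random 3-SAT instances at several clause-to-variable ratios r = m/n and estimate the satisfiable fraction by brute force. Around r ≈ 4.27 the fraction drops sharply.

```python
# Small illustrative experiment: estimate the satisfiable fraction of
# random 3-SAT at several clause/variable ratios r = m/n; brute-force
# checking is feasible for n = 10 variables.
import random
from itertools import product

def random_3sat(n, m, rng):
    """m clauses over n variables; a literal is (variable, polarity)."""
    return [tuple((v, rng.random() < 0.5)
                  for v in rng.sample(range(n), 3))
            for _ in range(m)]

def satisfiable(clauses, n):
    return any(all(any(bits[v] == pol for v, pol in clause)
                   for clause in clauses)
               for bits in product([False, True], repeat=n))

rng = random.Random(0)
n, trials = 10, 40
for r in (2.0, 3.0, 4.0, 4.3, 5.0, 6.0):
    sat = sum(satisfiable(random_3sat(n, int(r * n), rng), n)
              for _ in range(trials))
    print(f"r = {r:.1f}: {sat / trials:.2f} of instances satisfiable")
```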
Neural networks: The idea of building agents with propositional logic can be traced back to the seminal paper of McCulloch and Pitts (1943)(16), which initiated the field of neural networks. >Frame problem.
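A McCulloch-Pitts unit is simply a threshold function over weighted binary inputs. The toy below (an illustration; the weights and thresholds are chosen by hand, not taken from the source) realizes the basic logic gates, which is what connects the 1943 paper to propositional logic.

```python
# Illustrative McCulloch-Pitts threshold unit: output 1 iff the weighted
# sum of binary inputs reaches the threshold.
def mp_unit(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

AND = lambda a, b: mp_unit([a, b], [1, 1], 2)
OR  = lambda a, b: mp_unit([a, b], [1, 1], 1)
NOT = lambda a:    mp_unit([a], [-1], 0)

print([AND(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
```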

1. McCarthy, J. (1958). Programs with common sense. In Proc. Symposium on Mechanisation of Thought Processes, Vol. 1, pp. 77–84.
2. McCarthy, J. (1968). Programs with common sense. In Minsky, M. L. (Ed.), Semantic Information Processing, pp. 403–418. MIT Press.
3. Newell, A. (1982). The knowledge level. AIJ, 18(1), 82–127.
4. Boden, M. A. (1977). Artificial Intelligence and Natural Man. Basic Books.
5. Brooks, R. A. (1991). Intelligence without representation. AIJ, 47(1–3), 139–159.
6. Nilsson, N. J. (1991). Logic and artificial intelligence. AIJ, 47(1–3), 31–56.
7. Shaparau, D., Pistore, M., and Traverso, P. (2008). Fusing procedural and declarative planning goals for nondeterministic domains. In AAAI-08.
8. Loo, B. T., Condie, T., Garofalakis, M., Gay, D. E., Hellerstein, J. M., Maniatis, P., Ramakrishnan, R., Roscoe, T., and Stoica, I. (2006). Declarative networking: Language, execution and optimization. In SIGMOD-06.
9. Achlioptas, D. (2009). Random satisfiability. In Biere, A., Heule, M., van Maaren, H., and Walsh, T. (Eds.), Handbook of Satisfiability. IOS Press.
10. Friedgut, E. (1999). Necessary and sufficient conditions for sharp thresholds of graph properties, and the k-SAT problem. J. American Mathematical Society, 12, 1017–1054.
11. Achlioptas, D. and Peres, Y. (2004). The threshold for random k-SAT is 2^k log 2 − O(k). J. American Mathematical Society, 17(4), 947–973.
12. Achlioptas, D., Naor, A., and Peres, Y. (2007). On the maximum satisfiability of random formulas. JACM, 54(2).
13. Coarfa, C., Demopoulos, D., Aguirre, A., Subramanian, D., and Vardi, M. Y. (2003). Random 3-SAT: The plot thickens. Constraints, 8(3), 243–261.
14. Mézard, M., Parisi, G., and Zecchina, R. (2002). Analytic and algorithmic solution of random satisfiability problems. Science, 297, 812–815.
15. Maneva, E., Mossel, E., and Wainwright, M. J. (2007). A new look at survey propagation and its generalizations. JACM, 54(4).
16. McCulloch, W. S. and Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115–137.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010

Software Agents Russell Norvig I 64
Software Agents/artificial intelligence/Russell/Norvig: Simple reflex agents base their actions on a direct mapping from states to actions. Such agents cannot operate well in environments for which this mapping would be too large to store and would take too long to learn. Goal-based agents, on the other hand, consider future actions and the desirability of their outcomes.
Problem-solving agents use atomic representations, (…) that is, states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms.
Norvig I 4
Computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals. A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
Norvig I 235
Knowledge based Agents/logical agents: The central component of a knowledge-based agent is its knowledge base, or KB. A knowledge base is a set of sentences. (Here “sentence” is used as a technical term. It is related but not identical to the sentences of English and other natural languages.) Each sentence is expressed in a language called a knowledge representation language and represents some assertion about the world.
Norvig I 240
Semantics: The semantics defines the truth of each sentence with respect to each possible world. Model: instead of “possible world” we need to be more precise and use the term model. Whereas possible worlds might be thought of as (potentially) real environments that the agent might or might not be in, models are mathematical abstractions, each of which simply fixes the truth or falsehood of every relevant sentence.
Norvig I 241
Knowledge Base: The KB can be thought of as a set of sentences or as a single sentence that asserts all the individual sentences. The KB is false in models that contradict what the agent knows (…).
Norvig I 242
Completeness: an inference algorithm is complete if it can derive any sentence that is entailed. Fortunately, there are complete inference procedures for logics that are sufficiently expressive to handle many knowledge bases. Real world: if [the knowledge base] KB is true in the real world, then any sentence α derived from KB by a sound inference procedure is also true in the real world. So, while an inference process operates on “syntax”—internal physical configurations such as bits in registers or patterns of electrical blips in brains - the process corresponds
Norvig I 243
to the real-world relationship whereby some aspect of the real world is the case by virtue of other aspects of the real world being the case. Grounding: grounding [is] the connection between logical reasoning processes and the real environment in which the agent exists. In particular, how do we know that KB is true in the real world? A simple answer is that the agent’s sensors create the connection. Cf. >Semantics, >Syntax; for the philosophical discussion see also >Facts/Wittgenstein, >States of Affairs/Wittgenstein, >Foundation/Philosophical theories.
Norvig I 257
Forward chaining: The forward-chaining algorithm (…) determines if a single proposition symbol q - the query - is entailed by a knowledge base of definite clauses. It begins from known facts (positive literals) in the knowledge base. If all the premises of an implication are known, then its conclusion is added to the set of known facts.
Norvig I 258
It is easy to see that forward chaining is sound: every inference is essentially an application of Modus Ponens. Forward chaining is also complete: every entailed atomic sentence will be derived. The easiest way to see this is to consider the final state of the inferred table (after the algorithm reaches a fixed point where no new inferences are possible). Cf. >Fixed points.
Forward chaining is an example of the general concept of data-driven reasoning – that is, reasoning in which the focus of attention starts with the known data. It can be used within an agent to derive conclusions from incoming percepts, often without a specific query in mind.
Backward chaining: works backward from the query. If the query q is known to be true, then no work is needed. Otherwise, the algorithm finds those implications in the knowledge base whose conclusion is q. If all the premises of one of those implications can be proved true (by backward chaining), then q is true. Backward chaining is a form of goal-directed reasoning. It is useful for answering specific questions such as “What shall I do now?” and “Where are my keys?” Often, the cost of backward chaining is much less than linear in the size of the knowledge base, because the process touches only relevant facts.
Norvig I 275
History: John McCarthy’s paper “Programs with Common Sense” (McCarthy, 1958(1), 1968(2)) promulgated the notion of agents that use logical reasoning to mediate between percepts and actions. Allen Newell’s (1982)(3) article “The Knowledge Level” makes the case that rational agents can be described and analyzed at an abstract level defined by the knowledge they possess rather than the programs they run. The declarative and procedural approaches to AI are analyzed in depth by Boden (1977)(4). The debate was revived by, among others, Brooks (1991)(5) and Nilsson (1991)(6), and continues to this day (Shaparau et al., 2008)(7). Meanwhile, the declarative approach has spread into other areas of computer science such as networking (Loo et al., 2006)(8).
Norvig I 278
Current state: The current state of theoretical understanding is summarized by Achlioptas (2009)(9). The satisfiability threshold conjecture states that, for each k, there is a sharp satisfiability threshold r_k, such that as the number of variables n → ∞, instances below the threshold are satisfiable with probability 1, while those above the threshold are unsatisfiable with probability 1. The conjecture was not quite proved by Friedgut (1999)(10): a sharp threshold exists but its location might depend on n even as n → ∞. Despite significant progress in asymptotic analysis of the threshold location for large k (Achlioptas and Peres, 2004(11); Achlioptas et al., 2007(12)), all that can be proved for k = 3 is that it lies in the range [3.52, 4.51]. Current theory suggests that a peak in the run time of a SAT solver is not necessarily related to the satisfiability threshold, but instead to a phase transition in the solution distribution and structure of
SAT instances. Empirical results due to Coarfa et al. (2003)(13) support this view. In fact, algorithms such as survey propagation (Mézard et al., 2002(14); Maneva et al., 2007(15)) take advantage of special properties of random SAT instances near the satisfiability threshold and greatly outperform general SAT solvers on such instances.
Neural networks: The idea of building agents with propositional logic can be traced back to the seminal paper of McCulloch and Pitts (1943)(16), which initiated the field of neural networks.
>Frame problem.

1. McCarthy, J. (1958). Programs with common sense. In Proc. Symposium on Mechanisation of Thought Processes, Vol. 1, pp. 77–84.
2. McCarthy, J. (1968). Programs with common sense. In Minsky, M. L. (Ed.), Semantic Information Processing, pp. 403–418. MIT Press.
3. Newell, A. (1982). The knowledge level. AIJ, 18(1), 82–127.
4. Boden, M. A. (1977). Artificial Intelligence and Natural Man. Basic Books.
5. Brooks, R. A. (1991). Intelligence without representation. AIJ, 47(1–3), 139–159.
6. Nilsson, N. J. (1991). Logic and artificial intelligence. AIJ, 47(1–3), 31–56.
7. Shaparau, D., Pistore, M., and Traverso, P. (2008). Fusing procedural and declarative planning goals for nondeterministic domains. In AAAI-08.
8. Loo, B. T., Condie, T., Garofalakis, M., Gay, D. E., Hellerstein, J. M., Maniatis, P., Ramakrishnan, R., Roscoe, T., and Stoica, I. (2006). Declarative networking: Language, execution and optimization. In SIGMOD-06.
9. Achlioptas, D. (2009). Random satisfiability. In Biere, A., Heule, M., van Maaren, H., and Walsh, T. (Eds.), Handbook of Satisfiability. IOS Press.
10. Friedgut, E. (1999). Necessary and sufficient conditions for sharp thresholds of graph properties, and the k-SAT problem. J. American Mathematical Society, 12, 1017–1054.
11. Achlioptas, D. and Peres, Y. (2004). The threshold for random k-SAT is 2^k log 2 − O(k). J. American Mathematical Society, 17(4), 947–973.
12. Achlioptas, D., Naor, A., and Peres, Y. (2007). On the maximum satisfiability of random formulas. JACM, 54(2).
13. Coarfa, C., Demopoulos, D., Aguirre, A., Subramanian, D., and Vardi, M. Y. (2003). Random 3-SAT: The plot thickens. Constraints, 8(3), 243–261.
14. Mézard, M., Parisi, G., and Zecchina, R. (2002). Analytic and algorithmic solution of random satisfiability problems. Science, 297, 812–815.
15. Maneva, E., Mossel, E., and Wainwright, M. J. (2007). A new look at survey propagation and its generalizations. JACM, 54(4).
16. McCulloch, W. S. and Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115–137.

Russell I
B. Russell/A.N. Whitehead
Principia Mathematica Frankfurt 1986

Russell II
B. Russell
The ABC of Relativity, London 1958, 1969
German Edition:
Das ABC der Relativitätstheorie Frankfurt 1989

Russell IV
B. Russell
The Problems of Philosophy, Oxford 1912
German Edition:
Probleme der Philosophie Frankfurt 1967

Russell VI
B. Russell
"The Philosophy of Logical Atomism", in: B. Russell, Logic and KNowledge, ed. R. Ch. Marsh, London 1956, pp. 200-202
German Edition:
Die Philosophie des logischen Atomismus
In
Eigennamen, U. Wolf (Hg) Frankfurt 1993

Russell VII
B. Russell
On the Nature of Truth and Falsehood, in: B. Russell, The Problems of Philosophy, Oxford 1912 - Dt. "Wahrheit und Falschheit"
In
Wahrheitstheorien, G. Skirbekk (Hg) Frankfurt 1996


Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010
Terminology Norvig Norvig I 8
Terminology/Russell/Norvig: Although decidability and computability are important to an understanding of computation, the notion of tractability has had an even greater impact. Roughly speaking, a problem is called intractable if the time required to solve instances of the problem grows exponentially with the size of the instances. The distinction between polynomial and exponential growth in complexity was first emphasized in the mid-1960s (Cobham, 1964(1); Edmonds, 1965(2)). It is important because exponential growth means that even moderately large instances cannot be solved in any reasonable time.
Norvig I 106
Pattern databases: The idea behind them is to store these exact solution costs for every possible subproblem instance (…)
Norvig I 108
Def Problem: A problem consists of five parts: the initial state, a set of actions, a transition model describing the results of those actions, a goal test function, and a path cost function. The environment of the problem is represented by a state space. A path through the state space from the initial state to a goal state is a solution.
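This five-part definition translates directly into a data structure. A minimal sketch (the field names follow the text above, not any particular library); a solution is then any action sequence from initial_state to a state passing goal_test, with path cost the sum of step costs:

```python
# Minimal rendering of the five-part problem definition (a sketch;
# field names follow the text, not any library's API).
from dataclasses import dataclass
from typing import Any, Callable, Iterable

@dataclass
class Problem:
    initial_state: Any
    actions: Callable[[Any], Iterable[Any]]      # actions available in a state
    result: Callable[[Any, Any], Any]            # transition model
    goal_test: Callable[[Any], bool]
    step_cost: Callable[[Any, Any, Any], float]  # cost of (state, action, next)
```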
Norvig I 135
And/or nodes/search trees: or: In a deterministic environment, the only branching is introduced by the agent’s own choices in each state. We call these nodes OR nodes. And: In a nondeterministic environment, branching is also introduced by the environment’s choice of outcome for each action. We call these nodes AND nodes. A solution for an AND–OR search problem is a subtree that
(1) has a goal node at every leaf,
(2) specifies one action at each of its OR nodes, and
(3) includes every outcome branch at each of its AND nodes.
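Such a solution subtree can be found by a recursive search that chooses one action at OR nodes and covers every outcome at AND nodes. The sketch below follows the standard AND-OR search scheme; the problem interface (initial_state, actions, results, goal_test) is an assumption for illustration.

```python
# Sketch of AND-OR search for a nondeterministic problem: or_search picks
# one action per state; and_search must handle every outcome state.
def and_or_search(problem):
    return or_search(problem, problem.initial_state, path=[])

def or_search(problem, state, path):
    if problem.goal_test(state):
        return []                          # empty plan at a goal leaf
    if state in path:
        return None                        # cycle: fail on this branch
    for action in problem.actions(state):
        plan = and_search(problem, problem.results(state, action),
                          path + [state])
        if plan is not None:
            return [action, plan]          # one action chosen at an OR node
    return None

def and_search(problem, states, path):
    plans = {}
    for s in states:                       # every outcome must be covered
        plan = or_search(problem, s, path)
        if plan is None:
            return None
        plans[s] = plan
    return plans                           # conditional plan: outcome -> subplan
```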
Norvig I 148
Competitive ratio: Typically, the agent’s objective is to reach a goal state while minimizing cost. (Another possible objective is simply to explore the entire environment.) The cost is the total path cost of the path that the agent actually travels. It is common to compare this cost with the path cost of the path the agent would follow if it knew the search space in advance—that is, the actual shortest path (or shortest complete exploration). In the language of online algorithms, this is called the competitive ratio; we would like it to be as small as possible. >Online search/Norvig.
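A quick worked example with made-up numbers clarifies the definition:

```python
# Illustrative competitive-ratio computation (the costs are invented):
actual_cost, optimal_cost = 30.0, 10.0
print(actual_cost / optimal_cost)  # competitive ratio 3.0; smaller is better
```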
Norvig I 162
Def Pruning: Pruning allows us to ignore portions of the search tree that make no difference to the final choice. Def Heuristic evaluation functions: allow us to approximate the true utility of a state without doing a complete search.
Def utility function: (also called an objective function or payoff function) defines the final numeric value for a game that ends in terminal state s for a player p. In chess, the outcome is a win, loss, or draw, with values +1, 0, or 1/2. Some games have a wider variety of possible outcomes; the payoffs in backgammon range from 0 to +192.
Def Zero-sum game: is (confusingly) defined as one where the total payoff to all players is the same for every instance of the game. Chess is zero-sum. “Constant-sum” would have been a better term, but zero-sum is traditional and makes sense if you imagine each player is charged an entry fee of 1/2.
Norvig I 165
Minimax Algorithm/gaming: The minimax algorithm (…) computes the minimax decision from the current state. It uses a simple recursive computation of the minimax values of each successor state, directly implementing the defining equations. The recursion proceeds all the way down to the leaves of the tree, and then the minimax values are backed up through the tree as the recursion unwinds. The minimax algorithm performs a complete depth-first exploration of the game tree. For real games, of course, the time cost is totally impractical, but this algorithm serves as the basis for the mathematical analysis of games and for more practical algorithms.
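The recursive structure of the algorithm fits in a few lines. A compact sketch (the game interface with actions/result/terminal/utility is an assumption, not a specific library):

```python
# Compact minimax sketch: MAX picks the move whose MIN-value is largest;
# the recursion bottoms out at terminal states.
def minimax_decision(game, state):
    return max(game.actions(state),
               key=lambda a: min_value(game, game.result(state, a)))

def max_value(game, state):
    if game.terminal(state):
        return game.utility(state)
    return max(min_value(game, game.result(state, a))
               for a in game.actions(state))

def min_value(game, state):
    if game.terminal(state):
        return game.utility(state)
    return min(max_value(game, game.result(state, a))
               for a in game.actions(state))
```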
Norvig I 208
Def node consistency: A single variable (corresponding to a node in the CSP network) is node-consistent if all the values in the variable’s domain satisfy the variable’s unary constraints. Def arc consistency: A variable in a CSP is arc-consistent if every value in its domain satisfies the variable’s binary constraints. More formally, X_i is arc-consistent with respect to another variable X_j if for every value in the current domain D_i there is some value in the domain D_j that satisfies the binary constraint on the arc (X_i, X_j). >Constraint Satisfaction Problems/CSP/Norvig.
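The revision step behind arc consistency, and its propagation in the AC-3 style, can be sketched as follows (an illustration, not the book's listing; here a CSP is just a dict of domains plus a predicate per directed arc):

```python
# Sketch of arc-consistency enforcement (AC-3 style).
from collections import deque

def revise(domains, xi, xj, constraint):
    """Drop values of xi that have no supporting value in xj's domain."""
    revised = False
    for v in list(domains[xi]):
        if not any(constraint(v, w) for w in domains[xj]):
            domains[xi].discard(v)
            revised = True
    return revised

def ac3(domains, constraints):
    """constraints maps a directed arc (xi, xj) to a predicate on (vi, vj)."""
    queue = deque(constraints)
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, constraints[(xi, xj)]):
            if not domains[xi]:
                return False              # a domain emptied: inconsistent
            for (xk, xl) in constraints:  # recheck arcs pointing at xi
                if xl == xi and xk != xj:
                    queue.append((xk, xl))
    return True

# Usage: a two-variable "must differ" constraint, one value already fixed.
domains = {"WA": {"red"}, "NT": {"red", "green", "blue"}}
neq = lambda a, b: a != b
constraints = {("WA", "NT"): neq, ("NT", "WA"): neq}
print(ac3(domains, constraints), domains["NT"])  # True {'green', 'blue'}
```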
Norvig I 210
Def Path consistency: Arc consistency tightens down the domains (unary constraints) using the arcs (binary constraints). To make progress on problems like map coloring, we need a stronger notion of consistency. Path consistency tightens the binary constraints by using implicit constraints that are inferred by looking at triples of variables.
Norvig I 211
Def K-consistency: Stronger forms of propagation can be defined with the notion of k-consistency. A CSP is k-consistent if, for any set of k − 1 variables and for any consistent assignment to those variables, a consistent value can always be assigned to any kth variable. For forward chaining, backward chaining: see >Software agents/Norvig.
Norvig I 266
Propositions: The idea of associating propositions with time steps extends to any aspect of the world that changes over time. Fluent: We use the word fluent (from the Latin fluens, flowing) to refer to an aspect of the world that changes. “Fluent” is a synonym for “state variable”.

Norvig I 346
Skolemize: Skolemization is the process of removing existential quantifiers by elimination. In the simple case, it is just like the Existential Instantiation rule (…): translate ∃x P(x) into P(A), where A is a new constant. In the general case, when the existential quantifier lies inside the scope of universal quantifiers, the existential variable is replaced by a Skolem function of the universally quantified variables: for example, ∀x ∃y Loves(x, y) becomes ∀x Loves(x, F(x)).
Norvig I 410
Nondeterministic action: The programming languages community has coined the term demonic nondeterminism for the case where an adversary makes the choices, contrasting this with angelic nonde-
Norvig I 411
terminism, where the agent itself makes the choices. We borrow this term to define angelic semantics for HLA descriptions.
Norvig I 468
Closed-world assumption: as implemented in logic programs, provides a simple way to avoid having to specify lots of negative information. It is best interpreted as a default that can be overridden by additional information.
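A toy illustration of this default behavior (not a real logic-programming engine; the fact format is invented for the example): a query not derivable from the listed facts is treated as false until further information overrides that default.

```python
# Toy closed-world assumption: absence from the fact set means "false".
facts = {("parent", "tom", "bob")}

def holds(query):
    return query in facts                    # not asserted => assumed false

print(holds(("parent", "tom", "bob")))   # True
print(holds(("parent", "bob", "ann")))   # False under the CWA
facts.add(("parent", "bob", "ann"))      # additional information overrides it
print(holds(("parent", "bob", "ann")))   # True
```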

1. Cobham, A. (1964). The intrinsic computational difficulty of functions. In Proc. 1964 International Congress for Logic, Methodology, and Philosophy of Science, pp. 24–30.
2. Edmonds, J. (1965). Paths, trees, and flowers. Canadian Journal of Mathematics, 17, 449–467.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010

Terminology Russell Norvig I 8
Terminology/Russell/Norvig: Although decidability and computability are important to an understanding of computation, the notion of tractability has had an even greater impact. Roughly speaking, a problem is called intractable if the time required to solve instances of the problem grows exponentially with the size of the instances. The distinction between polynomial and exponential growth in complexity was first emphasized in the mid-1960s (Cobham, 1964(1); Edmonds, 1965(2)). It is important because exponential growth means that even moderately large instances cannot be solved in any reasonable time.
Norvig I 106
Pattern databases: The idea behind them is to store these exact solution costs for every possible subproblem instance (…)
Norvig I 108
Def Problem: A problem consists of five parts: the initial state, a set of actions, a transition model describing the results of those actions, a goal test function, and a path cost function. The environment of the problem is represented by a state space. A path through the state space from the initial state to a goal state is a solution.
Norvig I 135
And/or nodes/search trees: or: In a deterministic environment, the only branching is introduced by the agent’s own choices in each state. We call these nodes OR nodes. And: In a nondeterministic environment, branching is also introduced by the environment’s choice of outcome for each action. We call these nodes AND nodes. A solution for an AND–OR search problem is a subtree that
(1) has a goal node at every leaf,
(2) specifies one action at each of its OR nodes, and
(3) includes every outcome branch at each of its AND nodes.
Norvig I 148
Competitive ratio: Typically, the agent’s objective is to reach a goal state while minimizing cost. (Another possible objective is simply to explore the entire environment.) The cost is the total path cost of the path that the agent actually travels. It is common to compare this cost with the path cost of the path the agent would follow if it knew the search space in advance—that is, the actual shortest path (or shortest complete exploration). In the language of online algorithms, this is called the competitive ratio; we would like it to be as small as possible. >Online search/Norvig.
Norvig I 162
Def Pruning: Pruning allows us to ignore portions of the search tree that make no difference to the final choice. Def Heuristic evaluation functions: allow us to approximate the true utility of a state without doing a complete search.
Def utility function: (also called an objective function or payoff function) defines the final numeric value for a game that ends in terminal state s for a player p. In chess, the outcome is a win, loss, or draw, with values +1, 0, or 1/2. Some games have a wider variety of possible outcomes; the payoffs in backgammon range from 0 to +192.
Def Zero-sum game: is (confusingly) defined as one where the total payoff to all players is the same for every instance of the game. Chess is zero-sum. “Constant-sum” would have been a better term, but zero-sum is traditional and makes sense if you imagine each player is charged an entry fee of 1/2.
Norvig I 165
Minimax Algorithm/gaming: The minimax algorithm (…) computes the minimax decision from the current state. It uses a simple recursive computation of the minimax values of each successor state, directly implementing the defining equations. The recursion proceeds all the way down to the leaves of the tree, and then the minimax values are backed up through the tree as the recursion unwinds. The minimax algorithm performs a complete depth-first exploration of the game tree. For real games, of course, the time cost is totally impractical, but this algorithm serves as the basis for the mathematical analysis of games and for more practical algorithms.
Norvig I 208
Def node consistency: A single variable (corresponding to a node in the CSP network) is node-consistent if all the values in the variable’s domain satisfy the variable’s unary constraints. Def arc consistency: A variable in a CSP is arc-consistent if every value in its domain satisfies the variable’s binary constraints. More formally, X_i is arc-consistent with respect to another variable X_j if for every value in the current domain D_i there is some value in the domain D_j that satisfies the binary constraint on the arc (X_i, X_j). >Constraint Satisfaction Problems/CSP/Norvig.
Norvig I 210
Def Path consistency: Arc consistency tightens down the domains (unary constraints) using the arcs (binary constraints). To make progress on problems like map coloring, we need a stronger notion of consistency. Path consistency tightens the binary constraints by using implicit constraints that are inferred by looking at triples of variables.
Norvig I 211
Def K-consistency: Stronger forms of propagation can be defined with the notion of k-consistency. A CSP is k-consistent if, for any set of k − 1 variables and for any consistent assignment to those variables, a consistent value can always be assigned to any kth variable. For forward chaining, backward chaining: see >Software agents/Norvig.
Norvig I 266
Propositions: The idea of associating propositions with time steps extends to any aspect of the world that changes over time. Fluent: We use the word fluent (from the Latin fluens, flowing) to refer to an aspect of the world that changes. “Fluent” is a synonym for “state variable”.

Norvig I 346
Skolemize: Skolemization is the process of removing existential quantifiers by elimination. In the simple case, it is just like the Existential Instantiation rule (…): translate ∃x P(x) into P(A), where A is a new constant. In the general case, when the existential quantifier lies inside the scope of universal quantifiers, the existential variable is replaced by a Skolem function of the universally quantified variables: for example, ∀x ∃y Loves(x, y) becomes ∀x Loves(x, F(x)).
Norvig I 410
Nondeterministic action: The programming languages community has coined the term demonic nondeterminism for the case where an adversary makes the choices, contrasting this with angelic nonde-
Norvig I 411
terminism, where the agent itself makes the choices. We borrow this term to define angelic semantics for HLA descriptions.
Norvig I 468
Closed-world assumption: as implemented in logic programs, provides a simple way to avoid having to specify lots of negative information. It is best interpreted as a default that can be overridden by additional information.

1. Cobham, A. (1964). The intrinsic computational difficulty of functions. In Proc. 1964 International Congress for Logic, Methodology, and Philosophy of Science, pp. 24–30.
2. Edmonds, J. (1965). Paths, trees, and flowers. Canadian Journal of Mathematics, 17, 449–467.

Russell I
B. Russell/A.N. Whitehead
Principia Mathematica Frankfurt 1986

Russell II
B. Russell
The ABC of Relativity, London 1958, 1969
German Edition:
Das ABC der Relativitätstheorie Frankfurt 1989

Russell IV
B. Russell
The Problems of Philosophy, Oxford 1912
German Edition:
Probleme der Philosophie Frankfurt 1967

Russell VI
B. Russell
"The Philosophy of Logical Atomism", in: B. Russell, Logic and KNowledge, ed. R. Ch. Marsh, London 1956, pp. 200-202
German Edition:
Die Philosophie des logischen Atomismus
In
Eigennamen, U. Wolf (Hg) Frankfurt 1993

Russell VII
B. Russell
On the Nature of Truth and Falsehood, in: B. Russell, The Problems of Philosophy, Oxford 1912 - Dt. "Wahrheit und Falschheit"
In
Wahrheitstheorien, G. Skirbekk (Hg) Frankfurt 1996


Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010

