Disputed term/author/ism | Author | Entry | Reference |
---|---|---|---|
Decidability | Tarski | Berka I 543ff Undecidability/Gödel/Tarski: an undecidable statement is decidable in an enriched metascience. Cf. >Metalanguage, >Expressivity, >Semantic closure. Definability/Tarski: for every deductive science which includes arithmetic, we can specify arithmetical terms that are not definable in it. Cf. >Ideology/Quine, >Ontology/Quine. I 545 But with methods analogous to those used here, one can show that these terms can be defined on the basis of the considered science when it is enriched by variables of a higher order.(1) 1. A. Tarski, Der Wahrheitsbegriff in den formalisierten Sprachen, Commentarii Societatis philosophicae Polonorum, Vol. 1, Lemberg 1935 |
Tarski I A. Tarski Logic, Semantics, Metamathematics: Papers from 1923-38 Indianapolis 1983 Berka I Karel Berka Lothar Kreiser Logik Texte Berlin 1983 |
Environment | AI Research | Norvig I 401 Environment/planning/real world/representation/artificial intelligence/Norvig/Russell: algorithms for planning (…) extend both the representation language and the way the planner interacts with the environment. >Planning/Norvig, >Agents/Norvig. New: [we now have] a) actions with duration and b) plans that are organized hierarchically. Hierarchy: Hierarchy also lends itself to efficient plan construction because the planner can solve a problem at an abstract level before delving into details. 1st approach: “plan first, schedule later”: (…) we divide the overall problem into a planning phase in which actions are selected, with some ordering constraints, to meet the goals of the problem, and a later scheduling phase, in which temporal information is added to the plan to ensure that it meets resource and deadline constraints. Norvig I 404 Critical path: Mathematically speaking, critical-path problems are easy to solve because they are defined as a conjunction of linear inequalities on the start and end times. When we introduce resource constraints, the resulting constraints on start and end times become more complicated. Norvig I 405 Scheduling: The “cannot overlap” constraint is a disjunction of two linear inequalities, one for each possible ordering. The introduction of disjunctions turns out to make scheduling with resource constraints NP-hard. >NP-Problems. Non-overlapping: [when we assume non-overlapping] every scheduling problem can be solved by a non-overlapping sequence that avoids all resource conflicts, provided that each action is feasible by itself. If a scheduling problem is proving very difficult, however, it may not be a good idea to solve it this way - it may be better to reconsider the actions and constraints, in case that leads to a much easier scheduling problem. 
Thus, it makes sense to integrate planning and scheduling by taking into account durations and overlaps during the construction of a partial-order plan. Heuristics: partial-order planners can detect resource constraint violations in much the same way they detect conflicts with causal links. Heuristics can be devised to estimate the total completion time of a plan. This is currently an active area of research (see below). Norvig I 406 Real world planning: AI systems will probably have to do what humans appear to do: plan at higher levels of abstraction. A reasonable plan for the Hawaii vacation might be “Go to San Francisco airport (…)” ((s) which might be in a different direction). (…) planning can occur both before and during the execution of the plan (…). Solution: hierarchical decomposition: hierarchical task networks (HTN). Norvig I 407 A high-level plan achieves the goal from a given state if at least one of its implementations achieves the goal from that state. The “at least one” in this definition is crucial - not all implementations need to achieve the goal, because the agent gets Norvig I 408 to decide which implementation it will execute. Thus, the set of possible implementations in HTN planning - each of which may have a different outcome - is not the same as the set of possible outcomes in nondeterministic planning. It can be shown that the right collection of HLAs can result in the time complexity of blind search dropping from exponential in the solution depth to linear in the solution depth, although devising such a collection of HLAs may be a nontrivial task in itself. Norvig I 409 Plan library: The key to HTN planning, then, is the construction of a plan library containing known methods for implementing complex, high-level actions. One method of constructing the library is to learn the methods from problem-solving experience. (>Representation/AI research, >Learning/AI research). 
Learning/AI: In this way, the agent can become more and more competent over time as new methods are built on top of old methods. One important aspect of this learning process is the ability to generalize the methods that are constructed, eliminating detail that is specific to the problem instance (…). Norvig I 410 Nondeterministic action: problem: downward refinement is much too conservative for a real world environment. See >Terminology/Norvig for “demonic nondeterminism” and “angelic nondeterminism”. Norvig I 411 Reachable sets: The key idea is that the agent can choose which element of the reachable set it ends up in when it executes the HLA; thus, an HLA (high-level action) with multiple refinements is more “powerful” than the same HLA with fewer refinements. The notion of reachable sets yields a straightforward algorithm: search among high-level plans, looking for one whose reachable set intersects the goal; once that happens, the algorithm can commit to that abstract plan, knowing that it works, and focus on refining the plan further. Norvig I 415 Unknown environment/planning/nondeterministic domains: [problems here are] sensorless planning (also known as conformant planning) for environments with no observations; contingency planning for partially observable and nondeterministic environments; and online planning and replanning for unknown environments. Norvig I 417 Sensorless planning: In classical planning, where the closed-world assumption is made, we would assume that any fluent not mentioned in a state is false, but in sensorless (and partially observable) planning we have to switch to an open-world assumption in which states contain both positive and negative fluents, and if a fluent does not appear, its value is unknown. Thus, the belief state corresponds exactly to the set of possible worlds that satisfy the formula. Norvig I 423 Online replanning: The online agent has a choice of how carefully to monitor the environment. 
We distinguish three levels: a) Action monitoring: before executing an action, the agent verifies that all the preconditions still hold, b) Plan monitoring: before executing an action, the agent verifies that the remaining plan will still succeed, c) Goal monitoring: before executing an action, the agent checks to see if there is a better set of goals it could be trying to achieve. Norvig I 425 Multi-agent planning: A multibody problem is still a “standard” single-agent problem as long as the relevant sensor information collected by each body can be pooled - either centrally or within each body - to form a common estimate of the world state that then informs the execution of the overall plan; in this case, the multiple bodies act as a single body. When communication constraints make this impossible, we have Norvig I 426 what is sometimes called a decentralized planning problem: (…) the subplan constructed for each body may need to include explicit communicative actions with other bodies. Norvig I 429 Convention: A convention is any constraint on the selection of joint plans. Communication: In the absence of a convention, agents can use communication to achieve common knowledge of a feasible joint plan. Plan recognition: works when a single action (or short sequence of actions) is enough to determine a joint plan unambiguously. Note that communication can work as well with competitive agents as with cooperative ones. Norvig I 430 The most difficult multi-agent problems involve both cooperation with members of one’s own team and competition against members of opposing teams, all without centralized control. Norvig I 431 Time constraints in plans: Planning with time constraints was first dealt with by DEVISER (Vere, 1983(1)). The representation of time in plans was addressed by Allen (1984(2)) and by Dean et al. (1990)(3) in the FORBIN system. 
NONLIN+ (Tate and Whiter, 1984)(4) and SIPE (Wilkins, 1988(5), 1990(6)) could reason about the allocation of limited resources to various plan steps. Forward state-space search: The two planners SAPA (Do and Kambhampati, 2001)(7) and T4 (Haslum and Geffner, 2001)(8) both used forward state-space search with sophisticated heuristics to handle actions with durations and resources. Human heuristics: An alternative is to use very expressive action languages, but guide them by human-written domain-specific heuristics, as is done by ASPEN (Fukunaga et al., 1997)(9), HSTS (Jonsson et al., 2000)(10), and IxTeT (Ghallab and Laruelle, 1994)(11). Norvig I 432 Hybrid planning-and-scheduling systems: ISIS (Fox et al., 1982(12); Fox, 1990(13)) has been used for job shop scheduling at Westinghouse, GARI (Descotte and Latombe, 1985)(14) planned the machining and construction of mechanical parts, FORBIN was used for factory control, and NONLIN+ was used for naval logistics planning. We chose to present planning and scheduling as two separate problems; (Cushing et al., 2007)(15) show that this can lead to incompleteness on certain problems. Scheduling: The literature on scheduling is presented in a classic survey article (Lawler et al., 1993)(16), a recent book (Pinedo, 2008)(17), and an edited handbook (Blazewicz et al., 2007)(18). Abstraction hierarchy: The ABSTRIPS system (Sacerdoti, 1974)(19) introduced the idea of an abstraction hierarchy, whereby planning at higher levels was permitted to ignore lower-level preconditions of actions in order to derive the general structure of a working plan. Austin Tate’s Ph.D. thesis (1975b) and work by Earl Sacerdoti (1977)(20) developed the basic ideas of HTN planning in its modern form. Many practical planners, including O-PLAN and SIPE, are HTN planners. Yang (1990)(21) discusses properties of actions that make HTN planning efficient. 
Erol, Hendler, and Nau (1994(22), 1996(23)) present a complete hierarchical decomposition planner as well as a range of complexity results for pure HTN planners. Our presentation of HLAs and angelic semantics is due to Marthi et al. (2007(24), 2008(25)). Kambhampati et al. (1998)(26) have proposed an approach in which decompositions are just another form of plan refinement, similar to the refinements for non-hierarchical partial-order planning. Explanation-based learning: The technique of explanation-based learning (…) has been applied in several systems as a means of generalizing previously computed plans, including SOAR (Laird et al., 1986)(27) and PRODIGY (Carbonell et al., 1989)(28). Case-based planning: An alternative approach is to store previously computed plans in their original form and then reuse them to solve new, similar problems by analogy to the original problem. This is the approach taken by the field called case-based planning (Carbonell, 1983(29); Alterman, 1988(30); Hammond, 1989(31)). Kambhampati (1994)(32) argues that case-based planning should be analyzed as a form of refinement planning and provides a formal foundation for case-based partial-order planning. Norvig I 433 Conformant planning: Goldman and Boddy (1996)(33) introduced the term conformant planning, noting that sensorless plans are often effective even if the agent has sensors. The first moderately efficient conformant planner was Smith and Weld’s (1998)(34) Conformant Graphplan or CGP. Ferraris and Giunchiglia (2000)(35) and Rintanen (1999)(36) independently developed SATPLAN-based conformant planners. Bonet and Geffner (2000)(37) describe a conformant planner based on heuristic search in the space of >belief states (…). Norvig I 434 Reactive planning: In the mid-1980s, pessimism about the slow run times of planning systems led to the proposal of reflex agents called reactive planning systems (Brooks, 1986(38); Agre and Chapman, 1987)(39). 
PENGI (Agre and Chapman, 1987)(39) could play a (fully observable) video game by using Boolean circuits combined with a “visual” representation of current goals and the agent’s internal state. Policies: “Universal plans” (Schoppers, 1987(40), 1989(41)) were developed as a lookup table method for reactive planning, but turned out to be a rediscovery of the idea of policies that had long been used in Markov decision processes (…). >Open Universe/AI research. 1. Vere, S. A. (1983). Planning in time: Windows and durations for activities and goals. PAMI, 5, 246-267. 2. Allen, J. F. (1984). Towards a general theory of action and time. AIJ, 23, 123-154. 3. Dean, T., Kanazawa, K., and Shewchuk, J. (1990). Prediction, observation and estimation in planning and control. In 5th IEEE International Symposium on Intelligent Control, Vol. 2, pp. 645-650. 4. Tate, A. and Whiter, A. M. (1984). Planning with multiple resource constraints and an application to a naval planning problem. In Proc. First Conference on AI Applications, pp. 410-416. 5. Wilkins, D. E. (1988). Practical Planning: Extending the AI Planning Paradigm. Morgan Kaufmann. 6. Wilkins, D. E. (1990). Can AI planners solve practical problems? Computational Intelligence, 6(4), 232-246. 7. Do, M. B. and Kambhampati, S. (2001). Planning as constraint satisfaction: solving the planning graph by compiling it into CSP. AIJ, 132(2), 151-182. 8. Haslum, P. and Geffner, H. (2001). Heuristic planning with time and resources. In Proc. IJCAI-01 Workshop on Planning with Resources. 9. Fukunaga, A. S., Rabideau, G., Chien, S., and Yan, D. (1997). ASPEN: A framework for automated planning and scheduling of spacecraft control and operations. In Proc. International Symposium on AI, Robotics and Automation in Space, pp. 181-187. 10. Jonsson, A., Morris, P., Muscettola, N., Rajan, K., and Smith, B. (2000). Planning in interplanetary space: Theory and practice. In AIPS-00, pp. 177-186. 11. Ghallab, M. and Laruelle, H. (1994). 
Representation and control in IxTeT, a temporal planner. In AIPS-94, pp. 61-67. 12. Fox, M. S., Allen, B., and Strohm, G. (1982). Job shop scheduling: An investigation in constraint directed reasoning. In AAAI-82, pp. 155-158. 13. Fox, M. S. (1990). Constraint-guided scheduling: A short history of research at CMU. Computers in Industry, 14(1–3), 79-88. 14. Descotte, Y. and Latombe, J.-C. (1985). Making compromises among antagonist constraints in a planner. AIJ, 27, 183–217. 15. Cushing, W., Kambhampati, S., Mausam, and Weld, D. S. (2007). When is temporal planning really temporal? In IJCAI-07. 16. Lawler, E. L., Lenstra, J. K., Kan, A., and Shmoys, D. B. (1993). Sequencing and scheduling: Algorithms and complexity. In Graves, S. C., Zipkin, P. H., and Kan, A. H. G. R. (Eds.), Logistics of Production and Inventory: Handbooks in Operations Research and Management Science, Volume 4, pp. 445-522. North-Holland. 17. Pinedo, M. (2008). Scheduling: Theory, Algorithms, and Systems. Springer Verlag. 18. Blazewicz, J., Ecker, K., Pesch, E., Schmidt, G., and Weglarz, J. (2007). Handbook on Scheduling: Models and Methods for Advanced Planning (International Handbooks on Information Systems). Springer-Verlag New York, Inc. 19. Sacerdoti, E. D. (1974). Planning in a hierarchy of abstraction spaces. AIJ, 5(2), 115–135. 20. Sacerdoti, E. D. (1977). A Structure for Plans and Behavior. Elsevier/North-Holland. 21. Yang, Q. (1990). Formalizing planning knowledge for hierarchical planning. Computational Intelligence, 6, 12–24. 22. Erol, K., Hendler, J., and Nau, D. S. (1994). HTN planning: Complexity and expressivity. In AAAI-94, pp. 1123–1128. 23. Erol, K., Hendler, J., and Nau, D. S. (1996). Complexity results for HTN planning. AIJ, 18(1), 69–93. 24. Marthi, B., Russell, S. J., and Wolfe, J. (2007). Angelic semantics for high-level actions. In ICAPS-07. 25. Marthi, B., Russell, S. J., and Wolfe, J. (2008). Angelic hierarchical planning: Optimal and online algorithms. In ICAPS-08. 26. 
Kambhampati, S., Mali, A. D., and Srivastava, B. (1998). Hybrid planning for partially hierarchical domains. In AAAI-98, pp. 882–888. 27. Laird, J., Rosenbloom, P. S., and Newell, A. (1986). Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1, 11–46. 28. Carbonell, J. G., Knoblock, C. A., and Minton, S. (1989). PRODIGY: An integrated architecture for planning and learning. Technical report CMU-CS-89-189, Computer Science Department, Carnegie-Mellon University. 29. Carbonell, J. G. (1983). Derivational analogy and its role in problem solving. In AAAI-83, pp. 64–69. 30. Alterman, R. (1988). Adaptive planning. Cognitive Science, 12, 393–422. 31. Hammond, K. (1989). Case-Based Planning: Viewing Planning as a Memory Task. Academic Press. 32. Kambhampati, S. (1994). Exploiting causal structure to control retrieval and refitting during plan reuse. Computational Intelligence, 10, 213–244. 33. Goldman, R. and Boddy, M. (1996). Expressive planning and explicit knowledge. In AIPS-96, pp. 110–117. 34. Smith, D. E. and Weld, D. S. (1998). Conformant Graphplan. In AAAI-98, pp. 889–896. 35. Ferraris, P. and Giunchiglia, E. (2000). Planning as satisfiability in nondeterministic domains. In AAAI-00. 36. Rintanen, J. (1999). Improvements to the evaluation of quantified Boolean formulae. In IJCAI-99, pp. 1192–1197. 37. Bonet, B. and Geffner, H. (2000). Planning with incomplete information as heuristic search in belief space. In AIPS-00. 38. Brooks, R. A. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2, 14–23. 39. Agre, P. E. and Chapman, D. (1987). Pengi: an implementation of a theory of activity. In IJCAI-87, pp. 268–272. 40. Schoppers, M. J. (1987). Universal plans for reactive robots in unpredictable environments. In IJCAI-87, pp. 1039–1046. 41. Schoppers, M. J. (1989). In defense of reaction plans as caches. AIMag, 10(4), 51–60. |
Norvig I Peter Norvig Stuart J. Russell Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010 |
Environment | Norvig | Norvig I 401 Environment/planning/real world/representation/artificial intelligence/Norvig/Russell: algorithms for planning (…) extend both the representation language and the way the planner interacts with the environment. >Planning/Norvig, >Agents/Norvig. New: [we now have] a) actions with duration and b) plans that are organized hierarchically. Hierarchy: Hierarchy also lends itself to efficient plan construction because the planner can solve a problem at an abstract level before delving into details. 1st approach: “plan first, schedule later”: (…) we divide the overall problem into a planning phase in which actions are selected, with some ordering constraints, to meet the goals of the problem, and a later scheduling phase, in which temporal information is added to the plan to ensure that it meets resource and deadline constraints. Norvig I 404 Critical path: Mathematically speaking, critical-path problems are easy to solve because they are defined as a conjunction of linear inequalities on the start and end times. When we introduce resource constraints, the resulting constraints on start and end times become more complicated. Norvig I 405 Scheduling: The “cannot overlap” constraint is a disjunction of two linear inequalities, one for each possible ordering. The introduction of disjunctions turns out to make scheduling with resource constraints NP-hard. >NP-Problems. Non-overlapping: [when we assume non-overlapping] every scheduling problem can be solved by a non-overlapping sequence that avoids all resource conflicts, provided that each action is feasible by itself. If a scheduling problem is proving very difficult, however, it may not be a good idea to solve it this way - it may be better to reconsider the actions and constraints, in case that leads to a much easier scheduling problem. Thus, it makes sense to integrate planning and scheduling by taking into account durations and overlaps during the construction of a partial-order plan. 
Heuristics: partial-order planners can detect resource constraint violations in much the same way they detect conflicts with causal links. Heuristics can be devised to estimate the total completion time of a plan. This is currently an active area of research (see below). Norvig I 406 Real world planning: AI systems will probably have to do what humans appear to do: plan at higher levels of abstraction. A reasonable plan for the Hawaii vacation might be “Go to San Francisco airport (…)” ((s) which might be in a different direction). (…) planning can occur both before and during the execution of the plan (…). Solution: hierarchical decomposition: hierarchical task networks (HTN). Norvig I 407 A high-level plan achieves the goal from a given state if at least one of its implementations achieves the goal from that state. The “at least one” in this definition is crucial - not all implementations need to achieve the goal, because the agent gets Norvig I 408 to decide which implementation it will execute. Thus, the set of possible implementations in HTN planning - each of which may have a different outcome - is not the same as the set of possible outcomes in nondeterministic planning. It can be shown that the right collection of HLAs can result in the time complexity of blind search dropping from exponential in the solution depth to linear in the solution depth, although devising such a collection of HLAs may be a nontrivial task in itself. Norvig I 409 Plan library: The key to HTN planning, then, is the construction of a plan library containing known methods for implementing complex, high-level actions. One method of constructing the library is to learn the methods from problem-solving experience. (>Representation/AI research, >Learning/AI research). Learning/AI: In this way, the agent can become more and more competent over time as new methods are built on top of old methods. 
One important aspect of this learning process is the ability to generalize the methods that are constructed, eliminating detail that is specific to the problem instance (…). Norvig I 410 Nondeterministic action: problem: downward refinement is much too conservative for a real world environment. See >Terminology/Norvig for “demonic nondeterminism” and “angelic nondeterminism”. Norvig I 411 Reachable sets: The key idea is that the agent can choose which element of the reachable set it ends up in when it executes the HLA; thus, an HLA (high-level action) with multiple refinements is more “powerful” than the same HLA with fewer refinements. The notion of reachable sets yields a straightforward algorithm: search among high-level plans, looking for one whose reachable set intersects the goal; once that happens, the algorithm can commit to that abstract plan, knowing that it works, and focus on refining the plan further. Norvig I 415 Unknown environment/planning/nondeterministic domains: [problems here are] sensorless planning (also known as conformant planning) for environments with no observations; contingency planning for partially observable and nondeterministic environments; and online planning and replanning for unknown environments. Norvig I 417 Sensorless planning: In classical planning, where the closed-world assumption is made, we would assume that any fluent not mentioned in a state is false, but in sensorless (and partially observable) planning we have to switch to an open-world assumption in which states contain both positive and negative fluents, and if a fluent does not appear, its value is unknown. Thus, the belief state corresponds exactly to the set of possible worlds that satisfy the formula. Norvig I 423 Online replanning: The online agent has a choice of how carefully to monitor the environment. 
We distinguish three levels: a) Action monitoring: before executing an action, the agent verifies that all the preconditions still hold, b) Plan monitoring: before executing an action, the agent verifies that the remaining plan will still succeed, c) Goal monitoring: before executing an action, the agent checks to see if there is a better set of goals it could be trying to achieve. Norvig I 425 Multi-agent planning: A multibody problem is still a “standard” single-agent problem as long as the relevant sensor information collected by each body can be pooled - either centrally or within each body - to form a common estimate of the world state that then informs the execution of the overall plan; in this case, the multiple bodies act as a single body. When communication constraints make this impossible, we have Norvig I 426 what is sometimes called a decentralized planning problem: (…) the subplan constructed for each body may need to include explicit communicative actions with other bodies. Norvig I 429 Convention: A convention is any constraint on the selection of joint plans. Communication: In the absence of a convention, agents can use communication to achieve common knowledge of a feasible joint plan. Plan recognition: works when a single action (or short sequence of actions) is enough to determine a joint plan unambiguously. Note that communication can work as well with competitive agents as with cooperative ones. Norvig I 430 The most difficult multi-agent problems involve both cooperation with members of one’s own team and competition against members of opposing teams, all without centralized control. Norvig I 431 Time constraints in plans: Planning with time constraints was first dealt with by DEVISER (Vere, 1983(1)). The representation of time in plans was addressed by Allen (1984(2)) and by Dean et al. (1990)(3) in the FORBIN system. 
NONLIN+ (Tate and Whiter, 1984)(4) and SIPE (Wilkins, 1988(5), 1990(6)) could reason about the allocation of limited resources to various plan steps. Forward state-space search: The two planners SAPA (Do and Kambhampati, 2001)(7) and T4 (Haslum and Geffner, 2001)(8) both used forward state-space search with sophisticated heuristics to handle actions with durations and resources. Human heuristics: An alternative is to use very expressive action languages, but guide them by human-written domain-specific heuristics, as is done by ASPEN (Fukunaga et al., 1997)(9), HSTS (Jonsson et al., 2000)(10), and IxTeT (Ghallab and Laruelle, 1994)(11). Norvig I 432 Hybrid planning-and-scheduling systems: ISIS (Fox et al., 1982(12); Fox, 1990(13)) has been used for job shop scheduling at Westinghouse, GARI (Descotte and Latombe, 1985)(14) planned the machining and construction of mechanical parts, FORBIN was used for factory control, and NONLIN+ was used for naval logistics planning. We chose to present planning and scheduling as two separate problems; (Cushing et al., 2007)(15) show that this can lead to incompleteness on certain problems. Scheduling: The literature on scheduling is presented in a classic survey article (Lawler et al., 1993)(16), a recent book (Pinedo, 2008)(17), and an edited handbook (Blazewicz et al., 2007)(18). Abstraction hierarchy: The ABSTRIPS system (Sacerdoti, 1974)(19) introduced the idea of an abstraction hierarchy, whereby planning at higher levels was permitted to ignore lower-level preconditions of actions in order to derive the general structure of a working plan. Austin Tate’s Ph.D. thesis (1975b) and work by Earl Sacerdoti (1977)(20) developed the basic ideas of HTN planning in its modern form. Many practical planners, including O-PLAN and SIPE, are HTN planners. Yang (1990)(21) discusses properties of actions that make HTN planning efficient. 
Erol, Hendler, and Nau (1994(22), 1996(23)) present a complete hierarchical decomposition planner as well as a range of complexity results for pure HTN planners. Our presentation of HLAs and angelic semantics is due to Marthi et al. (2007(24), 2008(25)). Kambhampati et al. (1998)(26) have proposed an approach in which decompositions are just another form of plan refinement, similar to the refinements for non-hierarchical partial-order planning. Explanation-based learning: The technique of explanation-based learning (…) has been applied in several systems as a means of generalizing previously computed plans, including SOAR (Laird et al., 1986)(27) and PRODIGY (Carbonell et al., 1989)(28). Case-based planning: An alternative approach is to store previously computed plans in their original form and then reuse them to solve new, similar problems by analogy to the original problem. This is the approach taken by the field called case-based planning (Carbonell, 1983(29); Alterman, 1988(30); Hammond, 1989(31)). Kambhampati (1994)(32) argues that case-based planning should be analyzed as a form of refinement planning and provides a formal foundation for case-based partial-order planning. Norvig I 433 Conformant planning: Goldman and Boddy (1996)(33) introduced the term conformant planning, noting that sensorless plans are often effective even if the agent has sensors. The first moderately efficient conformant planner was Smith and Weld’s (1998)(34) Conformant Graphplan or CGP. Ferraris and Giunchiglia (2000)(35) and Rintanen (1999)(36) independently developed SATPLAN-based conformant planners. Bonet and Geffner (2000)(37) describe a conformant planner based on heuristic search in the space of >belief states (…). Norvig I 434 Reactive planning: In the mid-1980s, pessimism about the slow run times of planning systems led to the proposal of reflex agents called reactive planning systems (Brooks, 1986(38); Agre and Chapman, 1987)(39). 
PENGI (Agre and Chapman, 1987)(39) could play a (fully observable) video game by using Boolean circuits combined with a “visual” representation of current goals and the agent’s internal state. Policies: “Universal plans” (Schoppers, 1987(40), 1989(41)) were developed as a lookup table method for reactive planning, but turned out to be a rediscovery of the idea of policies that had long been used in Markov decision processes (…). >Open Universe/AI research. 1. Vere, S. A. (1983). Planning in time: Windows and durations for activities and goals. PAMI, 5, 246-267. 2. Allen, J. F. (1984). Towards a general theory of action and time. AIJ, 23, 123-154. 3. Dean, T., Kanazawa, K., and Shewchuk, J. (1990). Prediction, observation and estimation in planning and control. In 5th IEEE International Symposium on Intelligent Control, Vol. 2, pp. 645-650. 4. Tate, A. and Whiter, A. M. (1984). Planning with multiple resource constraints and an application to a naval planning problem. In Proc. First Conference on AI Applications, pp. 410-416. 5. Wilkins, D. E. (1988). Practical Planning: Extending the AI Planning Paradigm. Morgan Kaufmann. 6. Wilkins, D. E. (1990). Can AI planners solve practical problems? Computational Intelligence, 6(4), 232-246. 7. Do, M. B. and Kambhampati, S. (2001). Planning as constraint satisfaction: solving the planning graph by compiling it into CSP. AIJ, 132(2), 151-182. 8. Haslum, P. and Geffner, H. (2001). Heuristic planning with time and resources. In Proc. IJCAI-01 Workshop on Planning with Resources. 9. Fukunaga, A. S., Rabideau, G., Chien, S., and Yan, D. (1997). ASPEN: A framework for automated planning and scheduling of spacecraft control and operations. In Proc. International Symposium on AI, Robotics and Automation in Space, pp. 181-187. 10. Jonsson, A., Morris, P., Muscettola, N., Rajan, K., and Smith, B. (2000). Planning in interplanetary space: Theory and practice. In AIPS-00, pp. 177-186. 11. Ghallab, M. and Laruelle, H. (1994). 
Representation and control in IxTeT, a temporal planner. In AIPS-94, pp. 61-67. 12. Fox, M. S., Allen, B., and Strohm, G. (1982). Job shop scheduling: An investigation in constraint directed reasoning. In AAAI-82, pp. 155-158. 13. Fox, M. S. (1990). Constraint-guided scheduling: A short history of research at CMU. Computers in Industry, 14(1–3), 79-88. 14. Descotte, Y. and Latombe, J.-C. (1985). Making compromises among antagonist constraints in a planner. AIJ, 27, 183–217. 15. Cushing, W., Kambhampati, S., Mausam, and Weld, D. S. (2007). When is temporal planning really temporal? In IJCAI-07. 16. Lawler, E. L., Lenstra, J. K., Kan, A., and Shmoys, D. B. (1993). Sequencing and scheduling: Algorithms and complexity. In Graves, S. C., Zipkin, P. H., and Kan, A. H. G. R. (Eds.), Logistics of Production and Inventory: Handbooks in Operations Research and Management Science, Volume 4, pp. 445-522. North-Holland. 17. Pinedo, M. (2008). Scheduling: Theory, Algorithms, and Systems. Springer Verlag. 18. Blazewicz, J., Ecker, K., Pesch, E., Schmidt, G., and Weglarz, J. (2007). Handbook on Scheduling: Models and Methods for Advanced Planning (International Handbooks on Information Systems). Springer-Verlag New York, Inc. 19. Sacerdoti, E. D. (1974). Planning in a hierarchy of abstraction spaces. AIJ, 5(2), 115–135. 20. Sacerdoti, E. D. (1977). A Structure for Plans and Behavior. Elsevier/North-Holland. 21. Yang, Q. (1990). Formalizing planning knowledge for hierarchical planning. Computational Intelligence, 6, 12–24. 22. Erol, K., Hendler, J., and Nau, D. S. (1994). HTN planning: Complexity and expressivity. In AAAI-94, pp. 1123–1128. 23. Erol, K., Hendler, J., and Nau, D. S. (1996). Complexity results for HTN planning. AIJ, 18(1), 69–93. 24. Marthi, B., Russell, S. J., and Wolfe, J. (2007). Angelic semantics for high-level actions. In ICAPS-07. 25. Marthi, B., Russell, S. J., and Wolfe, J. (2008). Angelic hierarchical planning: Optimal and online algorithms. In ICAPS-08. 26. 
Kambhampati, S., Mali, A. D., and Srivastava, B. (1998). Hybrid planning for partially hierarchical domains. In AAAI-98, pp. 882–888. 27. Laird, J., Rosenbloom, P. S., and Newell, A. (1986). Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1, 11–46. 28. Carbonell, J. G., Knoblock, C. A., and Minton, S. (1989). PRODIGY: An integrated architecture for planning and learning. Technical report CMU-CS-89-189, Computer Science Department, Carnegie-Mellon University. 29. Carbonell, J. G. (1983). Derivational analogy and its role in problem solving. In AAAI-83, pp. 64–69. 30. Alterman, R. (1988). Adaptive planning. Cognitive Science, 12, 393–422. 31. Hammond, K. (1989). Case-Based Planning: Viewing Planning as a Memory Task. Academic Press. 32. Kambhampati, S. (1994). Exploiting causal structure to control retrieval and refitting during plan reuse. Computational Intelligence, 10, 213–244. 33. Goldman, R. and Boddy, M. (1996). Expressive planning and explicit knowledge. In AIPS-96, pp. 110–117. 34. Smith, D. E. and Weld, D. S. (1998). Conformant Graphplan. In AAAI-98, pp. 889–896. 35. Ferraris, P. and Giunchiglia, E. (2000). Planning as satisfiability in nondeterministic domains. In AAAI-00. 36. Rintanen, J. (1999). Improvements to the evaluation of quantified Boolean formulae. In IJCAI-99, pp. 1192–1197. 37. Bonet, B. and Geffner, H. (2000). Planning with incomplete information as heuristic search in belief space. In AIPS-00. 38. Brooks, R. A. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2, 14–23. 39. Agre, P. E. and Chapman, D. (1987). Pengi: an implementation of a theory of activity. In IJCAI-87, pp. 268–272. 40. Schoppers, M. J. (1987). Universal plans for reactive robots in unpredictable environments. In IJCAI-87, pp. 1039–1046. 41. Schoppers, M. J. (1989). In defense of reaction plans as caches. AIMag, 10(4), 51–60. |
Norvig I Peter Norvig Stuart J. Russell Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010 |
Environment | Russell | Norvig I 401 Environment/planning/real world/representation/artificial intelligence/Norvig/Russell: algorithms for planning (…) extend both the representation language and the way the planner interacts with the environment. >Planning/Norvig, >Agents/Norvig. New: [we now have] a) actions with duration and b) plans that are organized hierarchically. Hierarchy: Hierarchy also lends itself to efficient plan construction because the planner can solve a problem at an abstract level before delving into details. 1st approach: “plan first, schedule later”: (…) we divide the overall problem into a planning phase in which actions are selected, with some ordering constraints, to meet the goals of the problem, and a later scheduling phase, in which temporal information is added to the plan to ensure that it meets resource and deadline constraints. Norvig I 404 Critical path: Mathematically speaking, critical-path problems are easy to solve because they are defined as a conjunction of linear inequalities on the start and end times. When we introduce resource constraints, the resulting constraints on start and end times become more complicated. Norvig I 405 Scheduling: The “cannot overlap” constraint is a disjunction of two linear inequalities, one for each possible ordering. The introduction of disjunctions turns out to make scheduling with resource constraints NP-hard. >NP-Problems. Non-overlapping: [when we assume non-overlapping] every scheduling problem can be solved by a non-overlapping sequence that avoids all resource conflicts, provided that each action is feasible by itself. If a scheduling problem proves very difficult, however, it may not be a good idea to solve it this way - it may be better to reconsider the actions and constraints, in case that leads to a much easier scheduling problem. Thus, it makes sense to integrate planning and scheduling by taking into account durations and overlaps during the construction of a partial-order plan.
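The critical-path idea above - earliest start times fixed by a conjunction of precedence inequalities - can be sketched as a longest-path computation over the ordering constraints. This is a minimal illustration with invented action names and durations, not code from Norvig and Russell:

```python
# Critical path: with only precedence constraints, the earliest start time
# of an action is the length of the longest chain of durations that must
# precede it. Actions and durations here are invented for illustration.

durations = {"AddEngine": 30, "AddWheels": 30, "Inspect": 10}
before = [("AddEngine", "Inspect"), ("AddWheels", "Inspect")]  # a before b

def earliest_starts(durations, before):
    preds = {a: [] for a in durations}
    for a, b in before:
        preds[b].append(a)
    es = {}
    def start(a):  # memoized longest-path recursion over predecessors
        if a not in es:
            es[a] = max((start(p) + durations[p] for p in preds[a]), default=0)
        return es[a]
    for a in durations:
        start(a)
    return es

es = earliest_starts(durations, before)
makespan = max(es[a] + durations[a] for a in durations)  # 40 here
```

Adding resource constraints turns this conjunction of inequalities into disjunctions (the “cannot overlap” constraints), which is exactly what makes the full scheduling problem NP-hard.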
Heuristics: partial-order planners can detect resource constraint violations in much the same way they detect conflicts with causal links. Heuristics can be devised to estimate the total completion time of a plan. This is currently an active area of research (see below). Norvig I 406 Real world planning: AI systems will probably have to do what humans appear to do: plan at higher levels of abstraction. A reasonable plan for the Hawaii vacation might be “Go to San Francisco airport (…)” ((s) which might be in a different direction). (…) planning can occur both before and during the execution of the plan (…). Solution: hierarchical decomposition: hierarchical task networks (HTN). Norvig I 407 a high-level plan achieves the goal from a given state if at least one of its implementations achieves the goal from that state. The “at least one” in this definition is crucial - not all implementations need to achieve the goal, because the agent gets Norvig I 408 to decide which implementation it will execute. Thus, the set of possible implementations in HTN planning - each of which may have a different outcome - is not the same as the set of possible outcomes in nondeterministic planning. It can be shown that the right collection of HLAs can result in the time complexity of blind search dropping from exponential in the solution depth to linear in the solution depth, although devising such a collection of HLAs may be a nontrivial task in itself. Norvig I 409 Plan library: The key to HTN planning, then, is the construction of a plan library containing known methods for implementing complex, high-level actions. One method of constructing the library is to learn the methods from problem-solving experience. >Representation/AI research, >Learning/AI research. Learning/AI: In this way, the agent can become more and more competent over time as new methods are built on top of old methods. 
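The plan-library idea can be sketched as follows; the library, action names, and refinement routine are hypothetical illustrations, not the authors' formulation:

```python
# HTN-style sketch: a plan library maps each high-level action (HLA) to
# alternative refinements; planning recursively replaces HLAs until only
# primitive actions remain. All names are invented for illustration.

library = {
    "TakeVacation": [["GoToAirport", "Fly"], ["Drive"]],
    "GoToAirport": [["Taxi"], ["Bus"]],
}
primitive = {"Fly", "Drive", "Taxi", "Bus"}

def refinements(plan):
    """Yield every fully primitive plan reachable by refining HLAs."""
    for i, step in enumerate(plan):
        if step not in primitive:
            for method in library[step]:
                yield from refinements(plan[:i] + method + plan[i + 1:])
            return
    yield plan  # no HLAs left: one concrete implementation

impls = list(refinements(["TakeVacation"]))
```

Each fully primitive plan is one implementation of the high-level action; under angelic semantics the HLA achieves the goal if at least one of them does. A learning planner would grow such a library over time, building new methods from previously solved problems.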
One important aspect of this learning process is the ability to generalize the methods that are constructed, eliminating detail that is specific to the problem instance (…). Norvig I 410 Nondeterministic action: problem: downward refinement is much too conservative for a real-world environment. See >Terminology/Norvig for “demonic nondeterminism” and “angelic nondeterminism”. Norvig I 411 Reachable sets: The key idea is that the agent can choose which element of the reachable set it ends up in when it executes the HLA; thus, an HLA with multiple refinements is more “powerful” than the same HLA (high-level action) with fewer refinements. The notion of reachable sets yields a straightforward algorithm: search among high-level plans, looking for one whose reachable set intersects the goal; once that happens, the algorithm can commit to that abstract plan, knowing that it works, and focus on refining the plan further. Norvig I 415 Unknown environment/planning/nondeterministic domains: [problems here are] sensorless planning (also known as conformant planning) for environments with no observations; contingency planning for partially observable and nondeterministic environments; and online planning and replanning for unknown environments. Norvig I 417 Sensorless planning: In classical planning, where the closed-world assumption is made, we would assume that any fluent not mentioned in a state is false, but in sensorless (and partially observable) planning we have to switch to an open-world assumption in which states contain both positive and negative fluents, and if a fluent does not appear, its value is unknown. Thus, the belief state corresponds exactly to the set of possible worlds that satisfy the formula. Norvig I 423 Online replanning: The online agent has a choice of how carefully to monitor the environment.
We distinguish three levels: a) Action monitoring: before executing an action, the agent verifies that all the preconditions still hold, b) Plan monitoring: before executing an action, the agent verifies that the remaining plan will still succeed, c) Goal monitoring: before executing an action, the agent checks to see if there is a better set of goals it could be trying to achieve. Norvig I 425 Multi-agent planning: A multibody problem is still a “standard” single-agent problem as long as the relevant sensor information collected by each body can be pooled - either centrally or within each body - to form a common estimate of the world state that then informs the execution of the overall plan; in this case, the multiple bodies act as a single body. When communication constraints make this impossible, we have Norvig I 426 what is sometimes called a decentralized planning problem: (…) the subplan constructed for each body may need to include explicit communicative actions with other bodies. Norvig I 429 Convention: A convention is any constraint on the selection of joint plans. Communication: In the absence of a convention, agents can use communication to achieve common knowledge of a feasible joint plan. Plan recognition: works when a single action (or short sequence of actions) is enough to determine a joint plan unambiguously. Note that communication can work as well with competitive agents as with cooperative ones. Norvig I 430 The most difficult multi-agent problems involve both cooperation with members of one’s own team and competition against members of opposing teams, all without centralized control. Norvig I 431 Time constraints in plans: Planning with time constraints was first dealt with by DEVISER (Vere, 1983(1)). The representation of time in plans was addressed by Allen (1984(2)) and by Dean et al. (1990)(3) in the FORBIN system. 
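Action monitoring, the first of the three monitoring levels distinguished above, can be sketched as an execute-check-replan loop. This is a minimal sketch with invented actions and a toy replanner, not the authors' code:

```python
# Action monitoring (level a): before executing each action, re-check that
# its preconditions still hold in the observed state; replan if they do
# not. States are sets of fluents; all names are invented illustrations.

def run(plan, state, preconds, effects, replan):
    while plan:
        action = plan[0]
        if not preconds[action] <= state:   # a precondition no longer holds
            plan = replan(state)            # e.g. invoke the planner again
            continue
        state = state | effects[action]     # execute: add the action's effects
        plan = plan[1:]
    return state

preconds = {"GetBrush": set(), "Paint": {"HaveBrush"}}
effects = {"GetBrush": {"HaveBrush"}, "Paint": {"Painted"}}
replan = lambda state: ["GetBrush", "Paint"]  # toy replanner

# the brush is missing at execution time, so "Paint" fails its check
# and the agent replans before proceeding
final = run(["Paint"], set(), preconds, effects, replan)
```

Plan and goal monitoring differ only in what is checked at each step: whether the remainder of the plan still succeeds, or whether the goals themselves are still the right ones.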
NONLIN+ (Tate and Whiter, 1984)(4) and SIPE (Wilkins, 1988(5), 1990(6)) could reason about the allocation of limited resources to various plan steps. Forward state-space search: The two planners SAPA (Do and Kambhampati, 2001)(7) and T4 (Haslum and Geffner, 2001)(8) both used forward state-space search with sophisticated heuristics to handle actions with durations and resources. Human heuristics: An alternative is to use very expressive action languages, but guide them by human-written domain-specific heuristics, as is done by ASPEN (Fukunaga et al., 1997)(9), HSTS (Jonsson et al., 2000)(10), and IxTeT (Ghallab and Laruelle, 1994)(11). Norvig I 432 Hybrid planning-and-scheduling systems: ISIS (Fox et al., 1982(12); Fox, 1990(13)) has been used for job shop scheduling at Westinghouse, GARI (Descotte and Latombe, 1985)(14) planned the machining and construction of mechanical parts, FORBIN was used for factory control, and NONLIN+ was used for naval logistics planning. We chose to present planning and scheduling as two separate problems; Cushing et al. (2007)(15) show that this can lead to incompleteness on certain problems. Scheduling: The literature on scheduling is presented in a classic survey article (Lawler et al., 1993)(16), a recent book (Pinedo, 2008)(17), and an edited handbook (Blazewicz et al., 2007)(18). Abstraction hierarchy: The ABSTRIPS system (Sacerdoti, 1974)(19) introduced the idea of an abstraction hierarchy, whereby planning at higher levels was permitted to ignore lower-level preconditions of actions in order to derive the general structure of a working plan. Austin Tate’s Ph.D. thesis (1975b) and work by Earl Sacerdoti (1977)(20) developed the basic ideas of HTN planning in its modern form. Many practical planners, including O-PLAN and SIPE, are HTN planners. Yang (1990)(21) discusses properties of actions that make HTN planning efficient.
Erol, Hendler, and Nau (1994(22), 1996(23)) present a complete hierarchical decomposition planner as well as a range of complexity results for pure HTN planners. Our presentation of HLAs and angelic semantics is due to Marthi et al. (2007(24), 2008(25)). Kambhampati et al. (1998)(26) have proposed an approach in which decompositions are just another form of plan refinement, similar to the refinements for non-hierarchical partial-order planning. Explanation-based learning: The technique of explanation-based learning (…) has been applied in several systems as a means of generalizing previously computed plans, including SOAR (Laird et al., 1986)(27) and PRODIGY (Carbonell et al., 1989)(28). Case-based planning: An alternative approach is to store previously computed plans in their original form and then reuse them to solve new, similar problems by analogy to the original problem. This is the approach taken by the field called case-based planning (Carbonell, 1983(29); Alterman, 1988(30); Hammond, 1989(31)). Kambhampati (1994)(32) argues that case-based planning should be analyzed as a form of refinement planning and provides a formal foundation for case-based partial-order planning. Norvig I 433 Conformant planning: Goldman and Boddy (1996)(33) introduced the term conformant planning, noting that sensorless plans are often effective even if the agent has sensors. The first moderately efficient conformant planner was Smith and Weld’s (1998)(34) Conformant Graphplan or CGP. Ferraris and Giunchiglia (2000)(35) and Rintanen (1999)(36) independently developed SATPLAN-based conformant planners. Bonet and Geffner (2000)(37) describe a conformant planner based on heuristic search in the space of >belief states (…). Norvig I 434 Reactive planning: In the mid-1980s, pessimism about the slow run times of planning systems led to the proposal of reflex agents called reactive planning systems (Brooks, 1986(38); Agre and Chapman, 1987)(39). 
PENGI (Agre and Chapman, 1987)(39) could play a (fully observable) video game by using Boolean circuits combined with a “visual” representation of current goals and the agent’s internal state. Policies: “Universal plans” (Schoppers, 1987(40), 1989(41)) were developed as a lookup-table method for reactive planning, but turned out to be a rediscovery of the idea of policies that had long been used in Markov decision processes (…). >Open Universe/AI research. 1. Vere, S. A. (1983). Planning in time: Windows and durations for activities and goals. PAMI, 5, 246-267. 2. Allen, J. F. (1984). Towards a general theory of action and time. AIJ, 23, 123-154. 3. Dean, T., Kanazawa, K., and Shewchuk, J. (1990). Prediction, observation and estimation in planning and control. In 5th IEEE International Symposium on Intelligent Control, Vol. 2, pp. 645-650. 4. Tate, A. and Whiter, A. M. (1984). Planning with multiple resource constraints and an application to a naval planning problem. In Proc. First Conference on AI Applications, pp. 410-416. 5. Wilkins, D. E. (1988). Practical Planning: Extending the AI Planning Paradigm. Morgan Kaufmann. 6. Wilkins, D. E. (1990). Can AI planners solve practical problems? Computational Intelligence, 6(4), 232-246. 7. Do, M. B. and Kambhampati, S. (2001). Sapa: A domain-independent heuristic metric temporal planner. In ECP-01. 8. Haslum, P. and Geffner, H. (2001). Heuristic planning with time and resources. In Proc. IJCAI-01 Workshop on Planning with Resources. 9. Fukunaga, A. S., Rabideau, G., Chien, S., and Yan, D. (1997). ASPEN: A framework for automated planning and scheduling of spacecraft control and operations. In Proc. International Symposium on AI, Robotics and Automation in Space, pp. 181-187. 10. Jonsson, A., Morris, P., Muscettola, N., Rajan, K., and Smith, B. (2000). Planning in interplanetary space: Theory and practice. In AIPS-00, pp. 177-186. 11. Ghallab, M. and Laruelle, H. (1994).
Representation and control in IxTeT, a temporal planner. In AIPS-94, pp. 61-67. 12. Fox, M. S., Allen, B., and Strohm, G. (1982). Job shop scheduling: An investigation in constraint directed reasoning. In AAAI-82, pp. 155-158. 13. Fox, M. S. (1990). Constraint-guided scheduling: A short history of research at CMU. Computers in Industry, 14(1–3), 79-88. 14. Descotte, Y. and Latombe, J.-C. (1985). Making compromises among antagonist constraints in a planner. AIJ, 27, 183–217. 15. Cushing, W., Kambhampati, S., Mausam, and Weld, D. S. (2007). When is temporal planning really temporal? In IJCAI-07. 16. Lawler, E. L., Lenstra, J. K., Kan, A., and Shmoys, D. B. (1993). Sequencing and scheduling: Algorithms and complexity. In Graves, S. C., Zipkin, P. H., and Kan, A. H. G. R. (Eds.), Logistics of Production and Inventory: Handbooks in Operations Research and Management Science, Volume 4, pp. 445-522. North-Holland. 17. Pinedo, M. (2008). Scheduling: Theory, Algorithms, and Systems. Springer-Verlag. 18. Blazewicz, J., Ecker, K., Pesch, E., Schmidt, G., and Weglarz, J. (2007). Handbook on Scheduling: Models and Methods for Advanced Planning (International Handbooks on Information Systems). Springer-Verlag New York, Inc. 19. Sacerdoti, E. D. (1974). Planning in a hierarchy of abstraction spaces. AIJ, 5(2), 115–135. 20. Sacerdoti, E. D. (1977). A Structure for Plans and Behavior. Elsevier/North-Holland. 21. Yang, Q. (1990). Formalizing planning knowledge for hierarchical planning. Computational Intelligence, 6, 12–24. 22. Erol, K., Hendler, J., and Nau, D. S. (1994). HTN planning: Complexity and expressivity. In AAAI-94, pp. 1123–1128. 23. Erol, K., Hendler, J., and Nau, D. S. (1996). Complexity results for HTN planning. AIJ, 18(1), 69–93. 24. Marthi, B., Russell, S. J., and Wolfe, J. (2007). Angelic semantics for high-level actions. In ICAPS-07. 25. Marthi, B., Russell, S. J., and Wolfe, J. (2008). Angelic hierarchical planning: Optimal and online algorithms. In ICAPS-08. 26.
Kambhampati, S., Mali, A. D., and Srivastava, B. (1998). Hybrid planning for partially hierarchical domains. In AAAI-98, pp. 882–888. 27. Laird, J., Rosenbloom, P. S., and Newell, A. (1986). Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1, 11–46. 28. Carbonell, J. G., Knoblock, C. A., and Minton, S. (1989). PRODIGY: An integrated architecture for planning and learning. Technical report CMU-CS-89-189, Computer Science Department, Carnegie-Mellon University. 29. Carbonell, J. G. (1983). Derivational analogy and its role in problem solving. In AAAI-83, pp. 64–69. 30. Alterman, R. (1988). Adaptive planning. Cognitive Science, 12, 393–422. 31. Hammond, K. (1989). Case-Based Planning: Viewing Planning as a Memory Task. Academic Press. 32. Kambhampati, S. (1994). Exploiting causal structure to control retrieval and refitting during plan reuse. Computational Intelligence, 10, 213–244. 33. Goldman, R. and Boddy, M. (1996). Expressive planning and explicit knowledge. In AIPS-96, pp. 110–117. 34. Smith, D. E. and Weld, D. S. (1998). Conformant Graphplan. In AAAI-98, pp. 889–896. 35. Ferraris, P. and Giunchiglia, E. (2000). Planning as satisfiability in nondeterministic domains. In AAAI-00. 36. Rintanen, J. (1999). Improvements to the evaluation of quantified Boolean formulae. In IJCAI-99, pp. 1192–1197. 37. Bonet, B. and Geffner, H. (2000). Planning with incomplete information as heuristic search in belief space. In AIPS-00. 38. Brooks, R. A. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2, 14–23. 39. Agre, P. E. and Chapman, D. (1987). Pengi: an implementation of a theory of activity. In IJCAI-87, pp. 268–272. 40. Schoppers, M. J. (1987). Universal plans for reactive robots in unpredictable environments. In IJCAI-87, pp. 1039–1046. 41. Schoppers, M. J. (1989). In defense of reaction plans as caches. AIMag, 10(4), 51–60. |
Russell I B. Russell/A.N. Whitehead Principia Mathematica Frankfurt 1986 Russell II B. Russell The ABC of Relativity, London 1958, 1969 German Edition: Das ABC der Relativitätstheorie Frankfurt 1989 Russell IV B. Russell The Problems of Philosophy, Oxford 1912 German Edition: Probleme der Philosophie Frankfurt 1967 Russell VI B. Russell "The Philosophy of Logical Atomism", in: B. Russell, Logic and Knowledge, ed. R. Ch. Marsh, London 1956, pp. 200-202 German Edition: Die Philosophie des logischen Atomismus In Eigennamen, U. Wolf (Hg) Frankfurt 1993 Russell VII B. Russell On the Nature of Truth and Falsehood, in: B. Russell, The Problems of Philosophy, Oxford 1912 German Edition: "Wahrheit und Falschheit" In Wahrheitstheorien, G. Skirbekk (Hg) Frankfurt 1996 Norvig I Peter Norvig Stuart J. Russell Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010 |
Inference | AI Research | Norvig I 471 Reasoning/inference/artificial intelligence/AI research/Norvig/Russell: The three main formalisms for dealing with nonmonotonic inference - circumscription (McCarthy, 1980)(1), default logic (Reiter, 1980)(2), and modal nonmonotonic logic (McDermott and Doyle, 1980)(3) - were all introduced in one special issue of the AI Journal. Delgrande and Schaub (2003)(4) discuss the merits of the variants, given 25 years of hindsight. Answer set programming can be seen as an extension of negation as failure or as a refinement of circumscription; Norvig I 472 the underlying theory of stable model semantics was introduced by Gelfond and Lifschitz (1988)(5), and the leading answer set programming systems are DLV (Eiter et al., 1998)(6) and SMODELS (Niemelä et al., 2000)(7). The disk drive example comes from the SMODELS user manual (Syrjanen, 2000)(8). Lifschitz (2001)(9) discusses the use of answer set programming for planning. Brewka et al. (1997)(10) give a good overview of the various approaches to nonmonotonic logic. Clark (1978)(11) covers the negation-as-failure approach to logic programming and Clark completion. Van Emden and Kowalski (1976)(12) show that every Prolog program without negation has a unique minimal model. Recent years have seen renewed interest in applications of nonmonotonic logics to large-scale knowledge representation systems. The BENINQ system for handling insurance-benefit inquiries was perhaps the first commercially successful application of a nonmonotonic inheritance system (Morgenstern, 1998)(13). Norvig I 473 Spatial reasoning: The earliest serious attempt to capture commonsense reasoning about space appears in the work of Ernest Davis (1986(14), 1990(15)). The region connection calculus of Cohn et al.
(1997)(16) supports a form of qualitative spatial reasoning and has led to new kinds of geographical information systems; see also (Davis, 2006)(17). As with qualitative physics, an agent can go a long way, so to speak, without resorting to a full metric representation. Psychological reasoning: Psychological reasoning involves the development of a working psychology for artificial agents to use in reasoning about themselves and other agents. This is often based on so-called folk psychology, the theory that humans in general are believed to use in reasoning about themselves and other humans. ((s) Cf. >Folk psychology/Philosophical theories). When AI researchers provide their artificial agents with psychological theories for reasoning about other agents, the theories are frequently based on the researchers’ description of the logical agents’ own design. Psychological reasoning is currently most useful within the context of natural language understanding, where divining the speaker’s intentions is of paramount importance. Minker (2001)(18) collects papers by leading researchers in knowledge representation, summarizing 40 years of work in the field. The proceedings of the international conferences on Principles of Knowledge Representation and Reasoning provide the most up-to-date sources for work in this area. 1. McCarthy, J. (1980). Circumscription: A form of non-monotonic reasoning. AIJ, 13(1–2), 27–39. 2. Reiter, R. (1980). A logic for default reasoning. AIJ, 13(1–2), 81–132. 3. McDermott, D. and Doyle, J. (1980). Nonmonotonic logic: i. AIJ, 13(1–2), 41–72. 4. Delgrande, J. and Schaub, T. (2003). On the relation between Reiter’s default logic and its (major) variants. In Seventh European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, pp. 452–463. 5. Gelfond, M. and Lifschitz, V. (1988). Compiling circumscriptive theories into logic programs. In Non- Monotonic Reasoning: 2nd International Workshop Proceedings, pp. 74–99. 6. 
Eiter, T., Leone, N., Mateis, C., Pfeifer, G., and Scarcello, F. (1998). The KR system dlv: Progress report, comparisons and benchmarks. In KR-98, pp. 406–417. 7. Niemelä, I., Simons, P., and Syrjänen, T. (2000). Smodels: A system for answer set programming. In Proc. 8th International Workshop on Non-Monotonic Reasoning. 8. Syrjänen, T. (2000). Lparse 1.0 user’s manual. saturn.tcs.hut.fi/Software/smodels. 9. Lifschitz, V. (2001). Answer set programming and plan generation. AIJ, 138(1–2), 39–54. 10. Brewka, G., Dix, J., and Konolige, K. (1997). Nonmonotonic Reasoning: An Overview. CSLI Publications. 11. Clark, K. L. (1978). Negation as failure. In Gallaire, H. and Minker, J. (Eds.), Logic and Data Bases, pp. 293–322. Plenum. 12. Van Emden, M. H. and Kowalski, R. (1976). The semantics of predicate logic as a programming language. JACM, 23(4), 733–742. 13. Morgenstern, L. (1998). Inheritance comes of age: Applying nonmonotonic techniques to problems in industry. AIJ, 103, 237–271. 14. Davis, E. (1986). Representing and Acquiring Geographic Knowledge. Pitman and Morgan Kaufmann. 15. Davis, E. (1990). Representations of Commonsense Knowledge. Morgan Kaufmann. 16. Cohn, A. G., Bennett, B., Gooday, J. M., and Gotts, N. (1997). RCC: A calculus for region based qualitative spatial reasoning. GeoInformatica, 1, 275–316. 17. Davis, E. (2006). The expressivity of quantifying over regions. J. Logic and Computation, 16, 891–916. 18. Minker, J. (2001). Logic-Based Artificial Intelligence. Kluwer. Norvig I 570 Inference/temporal models/AI research/Norvig/Russell: (…) the basic inference tasks that must be solved: a) Filtering: This is the task of computing the belief state - the posterior distribution over the most recent state - given all evidence to date. Filtering is also called state estimation. >Belief states/Norvig. b) Prediction: This is the task of computing the posterior distribution over the future state, given all evidence to date.
That is, we wish to compute P(X_{t+k} | e_{1:t}) for some k > 0. Norvig I 571 c) Smoothing: This is the task of computing the posterior distribution over a past state, given all evidence up to the present. That is, we wish to compute P(X_k | e_{1:t}) for some k such that 0 ≤ k < t. d) Most likely explanation: Given a sequence of observations, we might wish to find the sequence of states that is most likely to have generated those observations. That is, we wish to compute argmax_{x_{1:t}} P(x_{1:t} | e_{1:t}). In addition to these inference tasks (…): Learning: The transition and sensor models, if not yet known, can be learned from observations. Just as with static >Bayesian networks, dynamic Bayes net learning can be done as a by-product of inference. Inference provides an estimate of what transitions actually occurred and of what states generated the sensor readings, and these estimates can be used to update the models. >Change/AI research, >Uncertainty/AI research. Norvig I 605 Ad a) The particle filtering algorithm (…) has a particularly interesting history. The first sampling algorithms for particle filtering (also called sequential Monte Carlo methods) were developed in the control theory community by Handschin and Mayne (1969)(1), and the resampling idea that is the core of particle filtering appeared in a Russian control journal (Zaritskii et al., 1975)(2). It was later reinvented in statistics as sequential importance sampling resampling, or SIR (Rubin, 1988(3); Liu and Chen, 1998(4)), in control theory as particle filtering (Gordon et al., 1993(5); Gordon, 1994(6)), in AI as survival of the fittest (Kanazawa et al., 1995)(7), and in computer vision as condensation (Isard and Blake, 1996)(8). The paper by Kanazawa et al. (1995)(7) includes an improvement called evidence reversal whereby the state at time t+1 is sampled conditional on both the state at time t and the evidence at time t+1.
This allows the evidence to influence sample generation directly and was proved by Doucet (1997)(9) and Liu and Chen (1998)(4) to reduce the approximation error. Particle filtering has been applied in many areas, including tracking complex motion patterns in video (Isard and Blake, 1996)(8), predicting the stock market (de Freitas et al., 2000)(10), and diagnosing faults on planetary rovers (Verma et al., 2004)(11). A variant called the Rao-Blackwellized particle filter or RBPF (Doucet et al., 2000(12); Murphy and Russell, 2001)(13) applies particle filtering to a subset of state variables and, for each particle, performs exact inference on the remaining variables conditioned on the value sequence in the particle. In some cases RBPF works well with thousands of state variables. >Utility/AI research, >Utility theory/Norvig, >Rationality/AI research, >Certainty effect/Kahneman/Tversky, >Ambiguity/Kahneman/Tversky. 1. Handschin, J. E. and Mayne, D. Q. (1969). Monte Carlo techniques to estimate the conditional expectation in multi-stage nonlinear filtering. Int. J. Control, 9(5), 547–559. 2. Zaritskii, V. S., Svetnik, V. B., and Shimelevich, L. I. (1975). Monte-Carlo technique in problems of optimal information processing. Automation and Remote Control, 36, 2015–22. 3. Rubin, D. (1988). Using the SIR algorithm to simulate posterior distributions. In Bernardo, J. M., de Groot, M. H., Lindley, D. V., and Smith, A. F. M. (Eds.), Bayesian Statistics 3, pp. 395–402. Oxford University Press. 4. Liu, J. S. and Chen, R. (1998). Sequential Monte Carlo methods for dynamic systems. JASA, 93, 1022–1031. 5. Gordon, N., Salmond, D. J., and Smith, A. F. M. (1993). Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F (Radar and Signal Processing), 140(2), 107–113. 6. Gordon, N. (1994). Bayesian methods for tracking. Ph.D. thesis, Imperial College. 7. Kanazawa, K., Koller, D., and Russell, S. J. (1995).
Stochastic simulation algorithms for dynamic probabilistic networks. In UAI-95, pp. 346–351. 8. Isard, M. and Blake, A. (1996). Contour tracking by stochastic propagation of conditional density. In ECCV, pp. 343–356. 9. Doucet, A. (1997). Monte Carlo methods for Bayesian estimation of hidden Markov models: Application to radiation signals. Ph.D. thesis, Université de Paris-Sud. 10. de Freitas, J. F. G., Niranjan, M., and Gee, A. H. (2000). Sequential Monte Carlo methods to train neural network models. Neural Computation, 12(4), 933–953. 11. Verma, V., Gordon, G., Simmons, R., and Thrun, S. (2004). Particle filters for rover fault diagnosis. IEEE Robotics and Automation Magazine, June. 12. Doucet, A., de Freitas, N., Murphy, K., and Russell, S. J. (2000). Rao-Blackwellised particle filtering for dynamic Bayesian networks. In UAI-00. 13. Murphy, K. and Russell, S. J. (2001). Rao-Blackwellised particle filtering for dynamic Bayesian networks. In Doucet, A., de Freitas, N., and Gordon, N. J. (Eds.), Sequential Monte Carlo Methods in Practice. Springer-Verlag. |
Norvig I Peter Norvig Stuart J. Russell Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010 |
Method | Tarski | Berka I 401 Consistency proof/Gödel: a consistency proof cannot be carried out if the metalanguage does not contain variables of higher type. >Metalanguage, >Expressivity, cf. >Type theory. Undecidability: Undecidability is eliminated when one enriches the examined theory (the object language) with variables of higher type.(1) >Decidability. 1. A.Tarski, „Grundlegung der wissenschaftlichen Semantik“, in: Actes du Congrès International de Philosophie Scientifique, Paris 1935, Vol. III, ASI 390, Paris 1936, pp. 1-8 --- I 462 Metalanguage/Tarski: the metalanguage is our real object of investigation. ((s) because of the conditions for applying the concept of truth.) I 464 Metalanguage/Tarski: 2nd category of expressions: specific terms of structural-descriptive character. >Structural-descriptive name. These are names of specific signs and expressions of the class calculus, names of classes, names of sequences of such expressions, and names of the structural relations between them. To any expression of the language under consideration (the object language) one can assign, on the one hand, an individual name of this expression and, on the other hand, an expression that is its translation into the metalanguage. This is decisive for the construction of the truth definition. >Truth definition/Tarski. I 464 Name/translation/metalanguage/object language/Tarski: the difference: an expression of the object language can, in the metalanguage, either a) be given a name or b) be translated. Berka I 525 Morphology/Tarski: here our metalanguage includes the entire object language - that is, for us, only the logical expressions of the general theory of classes - and thus only structural-descriptive terms. >Homophony. So we have the morphology of the language at our disposal; that is, even the concept of inference is reduced to it. I 526 Thus we have grounded the logic of the science under study as a part of its morphology.(2) >Description levels, >Semantic closure. 2.
A.Tarski, Der Wahrheitsbegriff in den formalisierten Sprachen, Commentarii Societatis philosophicae Polonorum. Vol 1, Lemberg 1935 |
Tarski I A. Tarski Logic, Semantics, Metamathematics: Papers from 1923-38 Indianapolis 1983 Berka I Karel Berka Lothar Kreiser Logik Texte Berlin 1983 |
Reasoning | AI Research | Norvig I 471 Reasoning/inference/artificial intelligence/AI research/Norvig/Russell: The three main formalisms for dealing with nonmonotonic inference - circumscription (McCarthy, 1980)(1), default logic (Reiter, 1980)(2), and modal nonmonotonic logic (McDermott and Doyle, 1980)(3) - were all introduced in one special issue of the AI Journal. Delgrande and Schaub (2003)(4) discuss the merits of the variants, given 25 years of hindsight. Answer set programming can be seen as an extension of negation as failure or as a refinement of circumscription; Norvig I 472 the underlying theory of stable model semantics was introduced by Gelfond and Lifschitz (1988)(5), and the leading answer set programming systems are DLV (Eiter et al., 1998)(6) and SMODELS (Niemelä et al., 2000)(7). The disk drive example comes from the SMODELS user manual (Syrjänen, 2000)(8). Lifschitz (2001)(9) discusses the use of answer set programming for planning. Brewka et al. (1997)(10) give a good overview of the various approaches to nonmonotonic logic. Clark (1978)(11) covers the negation-as-failure approach to logic programming and Clark completion. Van Emden and Kowalski (1976)(12) show that every Prolog program without negation has a unique minimal model. Recent years have seen renewed interest in applications of nonmonotonic logics to large-scale knowledge representation systems. The BENINQ system for handling insurance-benefit inquiries was perhaps the first commercially successful application of a nonmonotonic inheritance system (Morgenstern, 1998)(13). Norvig I 473 Spatial reasoning: The earliest serious attempt to capture commonsense reasoning about space appears in the work of Ernest Davis (1986(14), 1990(15)). The region connection calculus of Cohn et al.
(1997)(16) supports a form of qualitative spatial reasoning and has led to new kinds of geographical information systems; see also (Davis, 2006)(17). As with qualitative physics, an agent can go a long way, so to speak, without resorting to a full metric representation. Psychological reasoning: Psychological reasoning involves the development of a working psychology for artificial agents to use in reasoning about themselves and other agents. This is often based on so-called folk psychology, the theory that humans in general are believed to use in reasoning about themselves and other humans. ((s) Cf. >Folk psychology/Philosophical theories). When AI researchers provide their artificial agents with psychological theories for reasoning about other agents, the theories are frequently based on the researchers' description of the logical agents' own design. Psychological reasoning is currently most useful within the context of natural language understanding, where divining the speaker's intentions is of paramount importance. Minker (2001)(18) collects papers by leading researchers in knowledge representation, summarizing 40 years of work in the field. The proceedings of the international conferences on Principles of Knowledge Representation and Reasoning provide the most up-to-date sources for work in this area. 1. McCarthy, J. (1980). Circumscription: A form of non-monotonic reasoning. AIJ, 13(1–2), 27–39. 2. Reiter, R. (1980). A logic for default reasoning. AIJ, 13(1–2), 81–132. 3. McDermott, D. and Doyle, J. (1980). Non-monotonic logic I. AIJ, 13(1–2), 41–72. 4. Delgrande, J. and Schaub, T. (2003). On the relation between Reiter's default logic and its (major) variants. In Seventh European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, pp. 452–463. 5. Gelfond, M. and Lifschitz, V. (1988). Compiling circumscriptive theories into logic programs. In Non-Monotonic Reasoning: 2nd International Workshop Proceedings, pp. 74–99. 6.
Eiter, T., Leone, N., Mateis, C., Pfeifer, G., and Scarcello, F. (1998). The KR system dlv: Progress report, comparisons and benchmarks. In KR-98, pp. 406–417. 7. Niemelä, I., Simons, P., and Syrjänen, T. (2000). Smodels: A system for answer set programming. In Proc. 8th International Workshop on Non-Monotonic Reasoning. 8. Syrjänen, T. (2000). Lparse 1.0 user's manual. saturn.tcs.hut.fi/Software/smodels. 9. Lifschitz, V. (2001). Answer set programming and plan generation. AIJ, 138(1–2), 39–54. 10. Brewka, G., Dix, J., and Konolige, K. (1997). Nonmonotonic Reasoning: An Overview. CSLI Publications. 11. Clark, K. L. (1978). Negation as failure. In Gallaire, H. and Minker, J. (Eds.), Logic and Data Bases, pp. 293–322. Plenum. 12. Van Emden, M. H. and Kowalski, R. (1976). The semantics of predicate logic as a programming language. JACM, 23(4), 733–742. 13. Morgenstern, L. (1998). Inheritance comes of age: Applying nonmonotonic techniques to problems in industry. AIJ, 103, 237–271. 14. Davis, E. (1986). Representing and Acquiring Geographic Knowledge. Pitman and Morgan Kaufmann. 15. Davis, E. (1990). Representations of Commonsense Knowledge. Morgan Kaufmann. 16. Cohn, A. G., Bennett, B., Gooday, J. M., and Gotts, N. (1997). RCC: A calculus for region based qualitative spatial reasoning. GeoInformatica, 1, 275–316. 17. Davis, E. (2006). The expressivity of quantifying over regions. J. Logic and Computation, 16, 891–916. 18. Minker, J. (2001). Logic-Based Artificial Intelligence. Kluwer. |
Norvig I Peter Norvig Stuart J. Russell Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010 |
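The stable model semantics mentioned in the entry above (Gelfond and Lifschitz, 1988) can be sketched in a few lines. This is an illustrative toy, not the DLV or SMODELS implementation; the encoding of rules as (head, positive body, negative body) triples and the bird/penguin example program are my own assumptions.

```python
from itertools import combinations

# Hypothetical toy logic program, one triple per rule:
#   fly :- bird, not penguin.
#   bird.
rules = [
    ("fly", {"bird"}, {"penguin"}),
    ("bird", set(), set()),
]

def minimal_model(definite_rules):
    """Least model of a negation-free program, computed by forward chaining."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(model, rules):
    """M is stable iff M is the least model of the Gelfond-Lifschitz reduct:
    drop rules whose negative body intersects M, strip negation from the rest."""
    reduct = [(h, pos, set()) for h, pos, neg in rules if not (neg & model)]
    return minimal_model(reduct) == model

# Brute-force search over candidate models (fine for a toy program).
atoms = {a for h, pos, neg in rules for a in {h} | pos | neg}
stable = [set(c) for r in range(len(atoms) + 1)
          for c in combinations(sorted(atoms), r) if is_stable(set(c), rules)]
print(stable == [{"bird", "fly"}])  # True: the bird flies, absent penguin evidence
```

Adding the fact `penguin.` to the program would nonmonotonically retract `fly` - the hallmark of negation as failure.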
Recursion | Tarski | Skirbekk I 156 Recursion/recursive method/Tarski: starting from the simplest propositional functions, we specify the operations by which composite functions are constructed. >Functions/Tarski, >Recursive rules. Skirbekk I 157 Recursion/Tarski: problem: composite statements are constructed from simpler propositional functions, but not always from simpler statements. >Propositional functions. Hence no general recursion over statements is possible. A recursive definition of satisfaction is only possible in a much richer metalanguage (i.e. in the metalanguage we have variables of a higher logical type than those in the object language).(1) >Expressivity, >Richness. 1. A.Tarski, „Die semantische Konzeption der Wahrheit und die Grundlagen der Semantik“ (1944) in: G. Skirbekk (ed.) Wahrheitstheorien, Frankfurt 1996 |
Tarski I A. Tarski Logic, Semantics, Metamathematics: Papers from 1923-38 Indianapolis 1983 Skirbekk I G. Skirbekk (Hg) Wahrheitstheorien In Wahrheitstheorien, Gunnar Skirbekk Frankfurt 1977 |
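The recursive method described in the entry above - defining a semantic notion clause by clause over the structure of composite expressions - can be illustrated for the propositional case, where the recursion does go through. A minimal sketch; the tuple encoding of formulas is my own, purely illustrative device:

```python
# Formulas as nested tuples:
#   ("atom", name) | ("not", f) | ("and", f, g) | ("or", f, g)

def satisfies(assignment, formula):
    """Recursive definition of satisfaction: the value of a composite
    formula is fixed by the values of its immediate subformulas."""
    op = formula[0]
    if op == "atom":
        return assignment[formula[1]]
    if op == "not":
        return not satisfies(assignment, formula[1])
    if op == "and":
        return satisfies(assignment, formula[1]) and satisfies(assignment, formula[2])
    if op == "or":
        return satisfies(assignment, formula[1]) or satisfies(assignment, formula[2])
    raise ValueError("unknown connective: %r" % op)

# p or not-q, evaluated under one assignment
f = ("or", ("atom", "p"), ("not", ("atom", "q")))
print(satisfies({"p": False, "q": False}, f))  # True
```

For quantified statements this simple scheme breaks down, which is exactly the entry's point: there the recursion must run over propositional functions and sequences of objects, in a metalanguage with variables of higher type, not over statements.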
Semantic Closure | Tarski | Skirbekk I 150 Semantically closed/Tarski: a language is semantically closed if it contains, in addition to its expressions, the names of these expressions (and semantic terms such as "true"), and if the ordinary laws of logic hold in it. >Expressivity, >Richness, >Names of expressions. Everyday language satisfies these conditions. Semantically closed languages are inconsistent, that is, one can derive paradoxes in them.(1) 1. A.Tarski, „Die semantische Konzeption der Wahrheit und die Grundlagen der Semantik“ (1944) in: G. Skirbekk (ed.) Wahrheitstheorien, Frankfurt 1996 |
Tarski I A. Tarski Logic, Semantics, Metamathematics: Papers from 1923-38 Indianapolis 1983 Skirbekk I G. Skirbekk (Hg) Wahrheitstheorien In Wahrheitstheorien, Gunnar Skirbekk Frankfurt 1977 |
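Why semantic closure breeds paradox can be made vivid with a toy computation. The liar sentence L says of itself "L is not true"; the following sketch models L, purely as my own illustration, as a function from an assumed truth value to the value the sentence then takes:

```python
def liar(assumed_value):
    """L asserts: not True(L). If we assume L has truth value v,
    the sentence itself then comes out as not v."""
    return not assumed_value

# Neither truth value is a fixed point, so no consistent assignment exists:
print(liar(True))   # False: assuming L true makes it false
print(liar(False))  # True: assuming L false makes it true
```

Tarski's way out is to deny semantic closure: the predicate "true" belongs to the metalanguage, so a sentence like L cannot be formed in the object language at all.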
Validity | Stalnaker | I 148 Validity/expressiveness/modal/quantification/Stalnaker: the validity of the generalization schema, unlike that of the identity schema, depends on limitations of the expressive power of the extensional theory. >Generality, >Generalization. If the language is richer, some of the new instances will not be theorems. >Extensions, >Extensionality, >Expressivity, >Expressibility, >Richness. |
Stalnaker I R. Stalnaker Ways a World may be Oxford New York 2003 |
Disputed term/author/ism | Author Vs Author |
Entry |
Reference |
---|---|---|---|
Wittgenstein | Searle Vs Wittgenstein | Bennett I 192 SearleVsWittgenstein: At least sometimes, what we can mean is a function of what we say. Meaning exceeds intention; it is at least sometimes a matter of convention. Searle I 24 Traditional view of materialism/Searle: … 5. Intelligent behavior, and the causal relations in which it stands, are in some way the essence of the mental. The claim of a significant relation between mind and behavior comes in different versions: from extreme behaviorism to Wittgenstein's puzzling assertion "An inner process stands in need of outward criteria". SearleVsWittgenstein: an inner process such as pain requires nothing! Why should it? I 156 SearleVsWittgenstein: Wittgenstein asks whether, when I come into my room, I experience a "process of recognition". He reminds us that no such process exists in reality. Searle: He is right. This applies, more or less, to my whole experience of the world. I 169 Wittgenstein in the Philosophical Investigations (PU, 1953): a bold attempt to tackle the question of whether my first-person statements about the mental are reports or descriptions at all. He suggested understanding such remarks in an expressive sense, so that they are neither reports nor descriptions, and the question of any authority does not arise. When I cry out in pain, no question of my authority is raised. I 170 SearleVsWittgenstein: that fails. There are such cases, but there are still many cases in which one tries to describe one's own state of mind as carefully as possible, not simply to express it. Question: why do we not take ourselves to have the same special authority with respect to other objects and facts in the world? Reason: we distinguish between how things appear to us to be and how they really are. Two questions: first, how is it possible that we can be wrong about our own states of mind?
What "form" does the error take, if it is none of the errors we make regarding appearance and reality with respect to the world in general? I 171 Typical cases: self-deception, misinterpretation, and inattention. Self-deception is such a widespread phenomenon that something must be wrong with the proof of its impossibility. The proof goes like this: for x to deceive y, x must have some belief (p) and successfully attempt to evoke in y the belief that not-p. But in the case where x is identical with y, x would therefore have to cause in himself a self-contradictory belief. And that seems impossible. Yet we know that self-deception is possible. In such cases, the agent tries not to think of certain of his own mental states. I 172 Just as one can misinterpret a text by wrongly relating its parts to one another, so one can misinterpret one's own intentional states by failing to recognize their relations to one another. II 76 Duck-rabbit: Here we would like to say that the intentional object is the same. We have two visual experiences with two different presented contents but only a single picture. Wittgenstein: extricates himself by saying that these are different applications of the word "use". SearleVsWittgenstein: presumably we see not only objects (always under some aspect, of course) but also aspects of objects. Bill loves Sally as a person, but nothing prevents him from also loving aspects of Sally. II 192/193 Background/Searle: is not at the periphery of intentionality but pervades the whole network of intentional states. Semantics/knowledge: the knowledge of how words are to be used is not semantic! (Otherwise regress.) (Vs use theory of meaning, SearleVsWittgenstein). E.g. walking: "First move the left foot forward, then the right, and so on" - here the knowledge does not lie in the semantic content. II 193/194 For every semantic content has precisely the property of being interpretable in various ways.
Knowing the correct interpretation cannot itself be represented as a further semantic content. Otherwise we would need another rule for the correct interpretation of the rule for interpreting the rule for walking. (Regress.) Solution: we do not need a rule for walking; we simply walk. Rule/Searle: in order to perform speech acts according to a rule, we do not need further rules for the interpretation of the rule. III 112 Game/Wittgenstein: there are no common features of all games. (> Family resemblance). III 113 SearleVsWittgenstein: there are some after all: Def game/elsewhere: the attempt to overcome obstacles that have been created for the purpose that we try to overcome them. (Searle: the definition is not mine!). III 150 Reason/action/Wittgenstein: there is simply a way of acting that needs no reasons. SearleVsWittgenstein: this is not satisfactory, because it does not tell us what role the rule structure plays. V 35 Principle of expressibility/Searle: even in cases where it is actually impossible to say exactly what I mean, it is always possible for me to come to be able to say exactly what I mean. V 36 Understanding/Searle: not everything that can be said can also be understood. That would rule out the possibility of a private language. (SearleVsWittgenstein). The principle of expressibility has far-reaching consequences. We will therefore explain important features of Frege's theory of sense and reference. V 145 Facts/situations/Searle: misleading: facts about an object. There can be no facts about an object identified independently of situations! Otherwise one would fall back on the traditional notion of substance. SearleVsWittgenstein: in the Tractatus this is the case. Wittgenstein: objects could be named independently of situations. SearleVsWittgenstein: such a language could not exist! Objects cannot be named independently of the facts. V 190/191 Tautology/SearleVsWittgenstein: tautologies are anything but empty! E.g. "Either he is a fascist or he is not."
- is something very different from "Either he is a communist or he is not." V 245 SearleVsTractatus/SearleVsWittgenstein: such a false distinction between proper names and definite descriptions can be found in the Tractatus: "The name means the object. The object is its meaning." (3.203). But from this paradoxes arise: the meaning of words, it seems, cannot depend on any contingent facts in the world, because we can describe the world even when the facts change. Tradition: but the existence of ordinary objects - people, cities, etc. - is contingent, and hence so is the existence of the meanings of their names! Their names are therefore not the real names! Plato: there must be a class of objects whose existence is not contingent. Their names are the real names (Plato, Theaetetus). IV 50 SearleVsWittgenstein: there is not an infinite or indefinite number of language games. IV 89 Lie/SearleVsWittgenstein: lying is not a language game that has to be learned separately, like any other. Every rule already includes the notion of its violation, so it is not necessary first to learn to follow the rule and then separately to learn how to break it. In this respect fiction is much more sophisticated than the lie. Fiction/Searle: pretending to perform an illocutionary act is the same as, e.g., pretending to hit someone (making the movement). IV 90 E.g. a child in the driver's seat of the car pretends to drive (makes the movements). |
Searle I John R. Searle The Rediscovery of the Mind, Massachusetts Institute of Technology 1992 German Edition: Die Wiederentdeckung des Geistes Frankfurt 1996 Searle II John R. Searle Intentionality. An essay in the philosophy of mind, Cambridge/MA 1983 German Edition: Intentionalität Frankfurt 1991 Searle III John R. Searle The Construction of Social Reality, New York 1995 German Edition: Die Konstruktion der gesellschaftlichen Wirklichkeit Hamburg 1997 Searle IV John R. Searle Expression and Meaning. Studies in the Theory of Speech Acts, Cambridge/MA 1979 German Edition: Ausdruck und Bedeutung Frankfurt 1982 Searle V John R. Searle Speech Acts, Cambridge/MA 1969 German Edition: Sprechakte Frankfurt 1983 Searle VII John R. Searle Behauptungen und Abweichungen In Linguistik und Philosophie, G. Grewendorf/G. Meggle Frankfurt/M. 1974/1995 Searle VIII John R. Searle Chomskys Revolution in der Linguistik In Linguistik und Philosophie, G. Grewendorf/G. Meggle Frankfurt/M. 1974/1995 Searle IX John R. Searle "Animal Minds", in: Midwest Studies in Philosophy 19 (1994) pp. 206-219 In Der Geist der Tiere, D Perler/M. Wild Frankfurt/M. 2005 Bennett I Jonathan Bennett "The Meaning-Nominalist Strategy" in: Foundations of Language, 10, 1973, pp. 141-168 In Handlung, Kommunikation, Bedeutung, Georg Meggle Frankfurt/M. 1979 |