Dictionary of Arguments


Philosophical and Scientific Issues in Dispute
 


 

Find counter arguments by entering NameVs… or …VsName.



The author or concept searched is found in the following 43 entries.

Disputed term/author/ism | Author | Entry | Reference
Arithmetics Thiel Thiel I 225
Arithmetics/Lorenzen/Thiel: Arithmetic is the theory in which the infinite occurs in its simplest form; it is essentially nothing more than the theory of the infinite itself. Arithmetic as the theory of the set of signs (e.g. the tally list) is universal in the sense that the properties and relations of any other infinite set of signs can always be "mapped" into it in some way.
The complexity of the matter has led a large part of the secondary literature on Gödel to spread a lot of nonsense about metaphors such as "reflection", "self-reference", etc.
>Self-reference, cf. >Regis Debray.
I 224
The full formalism of logical arithmetic is denoted by F. It contains, among other things, inductive definitions of the counting signs, the variables for them, the rules of quantifier logic, and the Dedekind-Peano axioms written as rules. >Formalization, >Formalism.
I 226
The derivability or non-derivability of a formula means nothing other than the existence or non-existence of a proof figure, or family tree, with A as its final formula. Therefore the metamathematical statements "derivable" and "non-derivable" each correspond reversibly unambiguously (one-to-one) to a basic number characterizing them.
>Theorem of Incompleteness/Gödel.
Terminology/Writing: S derivable, $ not derivable.
"$ Ax(x)" is now undoubtedly a correctly defined statement form, since the characterizing number for An(n) is uniquely determined: either $An(n) holds or it does not.
>Derivation, >Derivability.
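Thiel's claim that "derivable" and "non-derivable" correspond reversibly unambiguously to basic numbers rests on a coding of sign sequences by numbers. A minimal sketch of one such coding by prime powers, with invented sign codes (Gödel's own assignment differs):

```python
# Illustrative prime-power coding of formulas by basic numbers.
def primes():
    """Yield 2, 3, 5, ... by trial division (fine for short formulas)."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

code = {"0": 1, "S": 2, "=": 3, "(": 4, ")": 5}  # hypothetical sign codes

def goedel_number(formula):
    """Encode the sign sequence s1 s2 ... as 2^c(s1) * 3^c(s2) * ...
    Unique prime factorization makes the coding reversible."""
    g = 1
    for p, sign in zip(primes(), formula):
        g *= p ** code[sign]
    return g

print(goedel_number("S0=S0"))  # 2^2 * 3^1 * 5^3 * 7^2 * 11^1 = 808500
```

Because the factorization of the resulting number is unique, the original sign sequence can always be recovered from it, which is exactly the one-to-one correspondence the text describes.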
I 304
The centuries-long dominance of geometry has aftereffects in the use of language, e.g. "square" and "cubic" equations. Arithmetics/Thiel: arithmetic has today become number theory; its practical part has been degraded to "calculating", and a probability calculus has been added.
>Probability, >Probability law.
I 305
In the vector and tensor calculus, geometry and algebra appear reunited. A new discipline called "invariant theory" emerges, flourishes and disappears completely, only to rise again later.
I 306
Functional analysis: is certainly not a fundamental discipline because of the very high level of conceptual abstraction.
>Invariants.
I 307
Bourbaki contrasts the classical "disciplines" with the "modern structures". The theory of prime numbers is closely related to the theory of algebraic curves. Euclidean geometry borders on the theory of integral equations. The ordering principle will be one of the hierarchies of structures, from simple to complicated and from general to particular. >Structures.

T I
Chr. Thiel
Philosophie und Mathematik Darmstadt 1995

Artificial Intelligence Wolfram Brockman I 268
Artificial intelligence/Wolfram: When we consider the future of AI, we need to think about the goals. That’s what humans contribute; that’s what our civilization contributes. The execution of those goals is what we can increasingly automate. >Purposes/Wolfram, >Neural networks/Wolfram.
Brockman I 271
Expert systems/Wolfram: (…) there was a trend toward devices called expert systems, which arose in the late seventies and early eighties. The idea was to have a machine learn the rules that an expert uses and thereby figure out what to do. That petered out. After that, AI became little more than a crazy pursuit. My original belief had been that in order to make a serious computational knowledge system, you first had to build a brainlike device and then feed it knowledge—just as humans learn in standard education. Now I realized that there wasn’t a bright line between what is intelligent and what is simply computational.
Wolfram Alpha: I had assumed that there was some magic mechanism that made us vastly more capable than anything that was just computational. But that assumption was wrong. What I discovered is that you can take a large collection of the world’s knowledge and automatically answer questions on the basis of it, using what are essentially merely computational techniques.
Data mining/Wolfram: (…) what you normally do when you build a program is build it step-by-step. But you can also explore the computational universe and mine technology from that universe.
Brockman I 272
There are all kinds of programs out there, even tiny programs that do complicated things. Computer language/Wolfram: You need a computer language that can represent sophisticated concepts in a way that can be progressively built up and isn’t possible in natural language.
Traditional approach to creating a computer language: make a language that represents the operations computers intrinsically know how to do: allocating memory, setting values of variables, iterating things, changing program counters, etc.
Solution/WolframVsTradition: make a language that panders not to the computers but to the humans, to take whatever a human thinks of and convert it into some form that the computer can understand.
Brockman I 275
Artificial intelligence/Wolfram: Basic components: physiological recognition, language translation, voice-to-text. These are essentially some of the steps toward making machines that are humanlike in what they do. >Computer language/Wolfram, >Formalization/Wolfram, >Turing Test/Wolfram, >Human machine communication/Wolfram.
Brockman I 277
The AI will know what you intend, and it will be good at figuring out how to get there. More to the point is that there will be an AI that knows your history, and knows that when you’re ordering dinner online you’ll probably want such-and-such, or when you email this person, you should talk to them about such-and-such. More and more, the AIs will suggest to us what we should do, and I suspect most of the time people will just go along with that. >Software/Wolfram.
Brockman I 283
The problem of abstract AI is similar to the problem of recognizing extraterrestrial intelligence: How do you determine whether or not it has a purpose? We’ll say things like, “Well, AI will be intelligent when it can do blah-blah-blah.” But there are many other ways to get to those results. Again, there is no bright line between intelligence and mere computation.

Wolfram, Stephen (2015). “Artificial Intelligence and the Future of Civilization” (edited live interview), in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press.


Brockman I
John Brockman
Possible Minds: Twenty-Five Ways of Looking at AI New York 2019
Axioms Hilbert Berka I 294
Definition/Axiom/Hilbert: the established axioms are at the same time the definitions of the elementary concepts whose relations they regulate. ((s) Hilbert speaks of relationships, not of the use of concepts). >Definitions, >Definability, >Basic concepts.
Independence/Axiom/Hilbert: the question is whether certain statements of individual axioms are mutually dependent, and whether the axioms contain common components that must be removed so that the axioms are independent of each other(1).
>Independence.

1. D. Hilbert: Mathematische Probleme, in: Ders. Gesammelte Abhandlungen (1935), Vol. III, pp. 290-329 (abridged reprint of pp. 299-301).
---
Thiel I 262
We consider the first three axioms of Hilbert: 1. For each two distinct points P, Q there is exactly one straight line which indicates(2) with P and Q.
2. For every straight line g and every point P which does not indicate with it, there is exactly one straight line which indicates with P, but with no point of g.
3. There are three points which do not indicate with one and the same straight line.
In Hilbert's original text, one speaks of "objects of the first kind" instead of points, of "objects of the second kind" instead of straight lines, and of a "basic relation" instead of incidence. Thus, the first axiom now reads:
For each of two different objects of the first kind, there is precisely one object of the second kind, which is in a basic relation with the first two.
Thiel I 263
If the axioms are transformed quantifier-logically, then only the schematic sign "π" (for the basic relation) is free for substitutions, the others are bound by quantifiers, and can no longer be replaced by individual names of points or lines. >Quantification, >Quantifiers.
They are thus "forms of statements" with "π" as an empty space.
>Propositional functions.
They are not, like pre-Hilbertian axioms, statements whose truth or falsehood is fixed by the meanings of their constituents.
>Truth values.
In the Hilbert conception of axioms (the one usually used today), axioms are statement forms or propositional schemata, whose components are given a meaning only by an interpretation, through specification of the domains of variability and the basic relation. That this can happen in various ways shows that the axioms cannot themselves, through their interplay in an axiom system, determine the meaning of their components (not their "characteristics", as Hilbert sometimes says).
Thiel I 264
Multiple interpretations are possible: e.g. points lying on a straight line, e.g. the occurrence of characters in character strings, e.g. numbers.
Thiel I 265
Under all three interpretations the axioms become true statements. The triples formed by these formation rules are models of our axiom system. The first is an infinite model; the other two are finite. >Models, >Infinity.
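That a finite model of this kind satisfies the axioms can be checked mechanically. A sketch, assuming the smallest affine-plane interpretation (four objects of the first kind, their six two-element subsets as objects of the second kind, and set membership as the basic relation):

```python
from itertools import combinations

# Hypothetical finite interpretation: 4 "points", the 6 two-element
# subsets as "straight lines", membership as the basic relation.
points = {0, 1, 2, 3}
lines = [frozenset(c) for c in combinations(points, 2)]

def incident(p, l):
    return p in l

# Axiom 1: exactly one line indicates with each two distinct points.
ax1 = all(sum(incident(p, l) and incident(q, l) for l in lines) == 1
          for p, q in combinations(points, 2))

# Axiom 2: for each line g and point p not on g, exactly one line
# through p shares no point with g.
ax2 = all(sum(incident(p, l) and not (l & g) for l in lines) == 1
          for g in lines for p in points if not incident(p, g))

# Axiom 3: there are three points not all indicating with one line.
ax3 = any(all(not (incident(p, l) and incident(q, l) and incident(r, l))
              for l in lines)
          for p, q, r in combinations(points, 3))

print(ax1, ax2, ax3)  # True True True
```

Any other structure for which the three checks succeed is equally a model, which is the point of the text: the axioms fix the basic relation's formal behavior, not the nature of the objects.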
Thiel I 266
The axioms can be combined by conjunction to form an axiom system. >Conjunction.
Through the relations, the objects lying in the domains are interwoven with one another in the manner determined by the combined axioms. The domains V .. are thereby "structured" (concrete and abstract structures).
>Domains, >Structures (Mathematics).
One and the same structure can be described by different axiom systems. Not only are logically equivalent axiom systems used, but also those whose basic concepts and relations differ, but which can be defined on the basis of two systems of explicit definitions.
Thiel I 267
Already the two original axiom systems are equivalent without the assumption of reciprocal definitions, i.e. they are logically equivalent. This equivalence relation allows an abstraction step to the fine structures. What counted as the same structure in the previous sense is now differentiated: the axiom systems describing them are not immediately logically equivalent, but their concepts prove to be mutually definable.
For example, "vector space", "group" and "field" are designations not for fine structures, but for general abstract structures. However, we can no longer say that an axiom system determines a structure unambiguously. A domain can carry several structures; there is no longer "the" structure.
Thiel I 268
E.g. field: the structure Q has a field structure, described by axioms in terms of addition and multiplication. E.g. group: the previous statement also implies that Q is a group with respect to addition, because the group axioms for addition form part of the field axioms.
Modern mathematics is more interested in statements about structures than in their carriers. From this point of view, domains having the same structure are completely equivalent.
>Indistinguishability.
Thiel: talk of structures is probably most common in algebra. There one often has a single carrier set with several operations, each of which can be regarded as a relation.
Thiel I 269
E.g. relation: the operation of sum formation x + y = z can be regarded as the relation s(x, y, z). Besides operational structures, the domains often also carry order structures or topological structures.
Thiel I 270
Bourbaki speaks of a reordering of the total area of mathematics according to "mother structures". In modern mathematics, abstractions, especially structures, are understood as equivalence classes and thus as sets. >N. Bourbaki, >Equivalence classes.

2. Indicate = belong together, i.e. intersect, pass through the point, lie on it.


Berka I
Karel Berka
Lothar Kreiser
Logik Texte Berlin 1983

T I
Chr. Thiel
Philosophie und Mathematik Darmstadt 1995
Bounded Rationality Jolls Parisi I 60
Bounded rationality/Jolls: Many important questions in behavioral law and economics today turn on competing conceptions of bounded rationality. Cf. >Bounded rationality/Simon. Economic analysis: Normative analysis of legal policy tends to be more complex when nonoptimizing decision rules are added to Simon’s original model of nonomniscience. For instance, a legal rule such as New York City’s now-defunct “soda law,” which restricted the sale of sugary drinks in servings above sixteen ounces, might have been an attempt to address the reflexive ordering of supersized sugary drinks simply because they (say) offered reasonable “bang for the buck” on a per-ounce basis—but from a normative standpoint it is difficult to be certain that such reflexive purchasing is truly a “failing” in need of legal “correction.”
Nonomniscience: A simple error in judgment about the caloric content of supersized sugary drinks, by contrast, is amenable both to empirical confirmation - do people entering an eating establishment know approximately how many calories a supersized sugary drink has? - and to legal responses designed simply to reduce the degree of nonomniscience (though of course the costs of any such response must also be considered). For purposes both of analytic clarity and of normative debate, distinguishing between the nonomniscience and nonoptimization aspects of Simon’s bounded rationality is tremendously valuable (...).*
Parisi I 62
Nonoptimization: “Nonoptimization” (...) will refer to decision-making that is not in accordance with the optimizing behavior postulated by expected utility theory. “Satisficing”/Herbert Simon/example: As an (...) illustration of the Simonian notion of an individual “satisficing” rather than choosing the option that is “optimal,” imagine an individual assessing whether a price offered for property (...) is at or above a level considered to be “acceptable.” The individual, Simon writes, “may regard $15,000 as an ‘acceptable’ price, anything over this amount as ‘satisfactory,’ anything less as ‘unsatisfactory’ ” and, accordingly, may accept the first offer received at or above $15,000 regardless of whether such acceptance is “optimal” (Simon, 1955(4), p. 104). >Optimism/Bibas, >Loss aversion/Bibas, >Plea bargain/Bibas, >Non-omniscience/Jolls, >Availability heuristic/Economic theories, >Risk perception/Economic theories.
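Simon's decision rule can be sketched directly; the $15,000 aspiration level is from the text, while the offer sequence is invented for illustration:

```python
def satisfice(offers, acceptable=15_000):
    """Simon's rule: accept the first offer at or above the aspiration level."""
    for offer in offers:
        if offer >= acceptable:
            return offer
    return None  # no satisfactory offer received

def optimize(offers):
    """The expected-utility benchmark: wait and take the best offer."""
    return max(offers)

offers = [14_200, 15_300, 17_900, 16_100]
print(satisfice(offers))  # 15300 -- "satisfactory", accepted immediately
print(optimize(offers))   # 17900 -- "optimal", but requires seeing all offers
```

The gap between the two return values is what makes the normative debate in the entry nontrivial: the satisficer forgoes value, but also forgoes the cost of continued search.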

*Behavioral Economics: Behavioral economics focuses on bounded willpower and bounded self-interest alongside bounded rationality (Thaler, 1996)(1). Bounded rationality has been particularly prominent within behavioral law and economics, however (...). For description of behavioral law and economics work on bounded willpower and bounded self-interest, see Jolls (2007(2), 2011(3)). >Bounded rationality/Simon, >Bounded rationality/economic theories.

1. Thaler, Richard H. (1996). “Doing Economics Without Homo Economicus,” in Steven G. Medema and Warren J. Samuels, eds., Foundations of Research in Economics: How Do Economists Do Economics?, 227–237. Cheltenham: Edward Elgar Publishing.
2. Jolls, Christine (2007). “Behavioral Law and Economics,” available at (previously published in Peter Diamond and Hannu Vartiainen, eds., Behavioral Economics and Its Applications. Princeton, NJ: Princeton University Press).
3. Jolls, Christine (2011). Behavioral Economics and the Law. Boston, MA and Delft: now Publishers.
4. Simon, Herbert A. (1955). “A Behavioral Model of Rational Choice.” Quarterly Journal of Economics 69: 99–118.

Jolls, Christine, “Bounded Rationality, Behavioral Economics, and the Law”. In: Parisi, Francesco (ed) (2017). The Oxford Handbook of Law and Economics. Vol 1: Methodology and Concepts. NY: Oxford University Press.


Parisi I
Francesco Parisi (Ed)
The Oxford Handbook of Law and Economics: Volume 1: Methodology and Concepts New York 2017
Deceptions Carnap VI 227
Deception/Carnap: that the tree over there is only an illusion does not change the nature of the experience. >Experience. Intentionality/Carnap: is not a relation of its own kind.
Constitution theory: the intended tree already is a complex order of experiences. Cf. >Constitution/Husserl.
The intentional relation holds between an experience and the ordering experiences - but always within a particular domain. It does not call reality into question. >Reality, >World, >Perception.

Ca I
R. Carnap
Die alte und die neue Logik
In
Wahrheitstheorien, G. Skirbekk (Hg) Frankfurt 1996

Ca II
R. Carnap
Philosophie als logische Syntax
In
Philosophie im 20.Jahrhundert, Bd II, A. Hügli/P.Lübcke (Hg) Reinbek 1993

Ca IV
R. Carnap
Mein Weg in die Philosophie Stuttgart 1992

Ca IX
Rudolf Carnap
Wahrheit und Bewährung. Actes du Congrès International de Philosophie Scientifique fasc. 4, Induction et Probabilité, Paris, 1936
In
Wahrheitstheorien, Gunnar Skirbekk Frankfurt/M. 1977

Ca VI
R. Carnap
Der Logische Aufbau der Welt Hamburg 1998

CA VII = PiS
R. Carnap
Sinn und Synonymität in natürlichen Sprachen
In
Zur Philosophie der idealen Sprache, J. Sinnreich (Hg) München 1982

Ca VIII (= PiS)
R. Carnap
Über einige Begriffe der Pragmatik
In
Zur Philosophie der idealen Sprache, J. Sinnreich (Hg) München 1982

Dimensions Carnap VI 126
Dimension/visual sense/Carnap: the separation of the dimensions into a unique constellation was made possible by the fact that the two relations of place identity (visual field: two-dimensional) and color identity (three-dimensional in the color solid) behave formally differently: different color qualities may occur in the same elementary experience, but not different qualities at identical places. Deeper reason: two things cannot be in the same place at the same time.
Principle of individuation: ultimately determines the ordering of space. >Space, >Order.

Ca I
R. Carnap
Die alte und die neue Logik
In
Wahrheitstheorien, G. Skirbekk (Hg) Frankfurt 1996

Ca II
R. Carnap
Philosophie als logische Syntax
In
Philosophie im 20.Jahrhundert, Bd II, A. Hügli/P.Lübcke (Hg) Reinbek 1993

Ca IV
R. Carnap
Mein Weg in die Philosophie Stuttgart 1992

Ca IX
Rudolf Carnap
Wahrheit und Bewährung. Actes du Congrès International de Philosophie Scientifique fasc. 4, Induction et Probabilité, Paris, 1936
In
Wahrheitstheorien, Gunnar Skirbekk Frankfurt/M. 1977

Ca VI
R. Carnap
Der Logische Aufbau der Welt Hamburg 1998

CA VII = PiS
R. Carnap
Sinn und Synonymität in natürlichen Sprachen
In
Zur Philosophie der idealen Sprache, J. Sinnreich (Hg) München 1982

Ca VIII (= PiS)
R. Carnap
Über einige Begriffe der Pragmatik
In
Zur Philosophie der idealen Sprache, J. Sinnreich (Hg) München 1982

Environment AI Research Norvig I 401
Environment/planning/real world/representation/artificial intelligence/Norvig/Russell: algorithms for planning (…) extend both the representation language and the way the planner interacts with the environment. >Planning/Norvig, >Agents/Norvig. New: [we now have] a) actions with duration and b) plans that are organized hierarchically.
Hierarchy: Hierarchy also lends itself to efficient plan construction because the planner can solve a problem at an abstract level before delving into details.
1st approach: “plan first, schedule later”: (…) we divide the overall problem into a planning phase in which actions are selected, with some ordering constraints, to meet the goals of the problem, and a later scheduling phase, in which temporal information is added to the plan to ensure that it meets resource and deadline constraints.
Norvig I 404
Critical path: Mathematically speaking, critical-path problems are easy to solve because they are defined as a conjunction of linear inequalities on the start and end times. When we introduce resource constraints, the resulting constraints on start and end times become more complicated.
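Without resource constraints, the linear inequalities ES(b) >= ES(a) + d(a) over precedence edges a -> b are solved by a single pass in topological order. A minimal sketch with invented jobs and durations:

```python
# Critical-path sketch: earliest start times under precedence constraints.
# Job names, durations, and the precedence graph are invented.
dur = {"A": 3, "B": 2, "C": 4, "D": 1}
before = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}  # predecessors

es = {}
for job in ["A", "B", "C", "D"]:  # already a topological order
    # ES[job] = max over predecessors p of ES[p] + dur[p] (0 if none).
    es[job] = max((es[p] + dur[p] for p in before[job]), default=0)

makespan = max(es[j] + dur[j] for j in dur)
print(es, makespan)  # {'A': 0, 'B': 3, 'C': 3, 'D': 7} 8
```

The critical path here runs A -> C -> D: delaying any of those jobs delays the whole schedule, while B has slack. It is the "cannot overlap" disjunctions from resource constraints, not this longest-path computation, that make the problem NP-hard.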
Norvig I 405
Scheduling: The “cannot overlap” constraint is a disjunction of two linear inequalities, one for each possible ordering. The introduction of disjunctions turns out to make scheduling with resource constraints NP-hard. >NP-Problems. Non-overlapping: [when we assume non-overlapping] every scheduling problem can be solved by a non-overlapping sequence that avoids all resource conflicts, provided that each action is feasible by itself. If a scheduling problem is proving very difficult, however, it may not be a good idea to solve it this way - it may be better to reconsider the actions and constraints, in case that leads to a much easier scheduling problem. Thus, it makes sense to integrate planning and scheduling by taking into account durations and overlaps during the construction of a partial-order plan.
Heuristics: partial-order planners can detect resource constraint violations in much the same way they detect conflicts with causal links. Heuristics can be devised to estimate the total completion time of a plan. This is currently an active area of research (see below).
Norvig I 406
Real world planning: AI systems will probably have to do what humans appear to do: plan at higher levels of abstraction. A reasonable plan for the Hawaii vacation might be “Go to San Francisco airport (…)” ((s) which might be in a different direction). (…) planning can occur both before and during the execution of the plan (…).
Solution: hierarchical decomposition: hierarchical task networks (HTN).
Norvig I 407
a high-level plan achieves the goal from a given state if at least one of its implementations achieves the goal from that state. The “at least one” in this definition is crucial - not all implementations need to achieve the goal, because the agent gets
Norvig I 408
to decide which implementation it will execute. Thus, the set of possible implementations in HTN planning - each of which may have a different outcome - is not the same as the set of possible outcomes in nondeterministic planning. It can be shown that the right collection of HLAs can result in the time complexity of blind search dropping from exponential in the solution depth to linear in the solution depth, although devising such a collection of HLAs may be a nontrivial task in itself.
Norvig I 409
Plan library: The key to HTN planning, then, is the construction of a plan library containing known methods for implementing complex, high-level actions. One method of constructing the library is to learn the methods from problem-solving experience. (>Representation/AI research, >Learning/AI research). Learning/AI: In this way, the agent can become more and more competent over time as new methods are built on top of old methods. One important aspect of this learning process is the ability to generalize the methods that are constructed, eliminating detail that is specific to the problem instance (…).
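The refinement of high-level actions against a plan library can be sketched as recursive substitution; the library entries below are invented for illustration:

```python
# Toy HTN sketch: the plan library maps each high-level action (HLA) to
# candidate refinements; planning replaces HLAs until all steps are primitive.
library = {
    "TakeVacation": [["GoToAirport", "Fly", "Relax"]],
    "GoToAirport": [["Drive"], ["TakeTaxi"]],
}
primitive = {"Drive", "TakeTaxi", "Fly", "Relax"}

def refine(plan):
    """Depth-first refinement: return one fully primitive implementation,
    or None if no combination of library methods works."""
    for i, step in enumerate(plan):
        if step not in primitive:
            for method in library[step]:
                result = refine(plan[:i] + method + plan[i + 1:])
                if result is not None:
                    return result
            return None  # every method for this HLA failed
    return plan  # all steps primitive

print(refine(["TakeVacation"]))  # ['Drive', 'Fly', 'Relax']
```

This mirrors the "at least one implementation" point above: "GoToAirport" has two methods, and the agent only needs one of them to succeed. A real HTN planner would additionally check preconditions and ordering constraints at each step.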
Norvig I 410
Nondeterministic action: problem: downward refinement is much too conservative for a real world environment. See >Terminology/Norvig for “demonic nondeterminism” and “angelic nondeterminism”.
Norvig I 411
Reachable sets: The key idea is that the agent can choose which element of the reachable set it ends up in when it executes the HLA; thus, an HLA (high-level action) with multiple refinements is more “powerful” than the same HLA with fewer refinements. The notion of reachable sets yields a straightforward algorithm: search among high-level plans, looking for one whose reachable set intersects the goal; once that happens, the algorithm can commit to that abstract plan, knowing that it works, and focus on refining the plan further.
Norvig I 415
Unknown environment/planning/nondeterministic domains: [problems here are] sensorless planning (also known as conformant planning) for environments with no observations; contingency planning for partially observable and nondeterministic environments; and online planning and replanning for unknown environments.
Norvig I 417
Sensorless planning: In classical planning, where the closed-world assumption is made, we would assume that any fluent not mentioned in a state is false, but in sensorless (and partially observable) planning we have to switch to an open-world assumption in which states contain both positive and negative fluents, and if a fluent does not appear, its value is unknown. Thus, the belief state corresponds exactly to the set of possible worlds that satisfy the formula.
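A belief state as a set of possible worlds, updated by a sensorless action, can be sketched as follows; the fluents and the action are invented for illustration:

```python
from itertools import product

# Open-world belief state: every assignment of the fluents is possible
# until actions or knowledge rule worlds out.
fluents = ["DoorOpen", "HaveKey"]
belief = {frozenset(f for f, v in zip(fluents, bits) if v)
          for bits in product([False, True], repeat=len(fluents))}

def apply_open(world):
    """Sensorless 'Open' action: succeeds only with the key;
    otherwise the world is unchanged."""
    if "HaveKey" in world:
        return world | {"DoorOpen"}
    return world

# The successor belief state is the image of every possible world.
belief = {apply_open(w) for w in belief}

# Without any observation, the agent still knows a conditional fact:
# in every remaining world, HaveKey implies DoorOpen.
print(all("DoorOpen" in w for w in belief if "HaveKey" in w))  # True
```

The update shrinks the belief state from four worlds to three: acting can yield knowledge even with no sensing at all, which is why sensorless plans "are often effective even if the agent has sensors."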
Norvig I 423
Online replanning: The online agent has a choice of how carefully to monitor the environment. We distinguish three levels: a) Action monitoring: before executing an action, the agent verifies that all the preconditions still hold, b) Plan monitoring: before executing an action, the agent verifies that the remaining plan will still succeed, c) Goal monitoring: before executing an action, the agent checks to see if there is a better set of goals it could be trying to achieve.
Norvig I 425
Multi-agent planning: A multibody problem is still a “standard” single-agent problem as long as the relevant sensor information collected by each body can be pooled - either centrally or within each body - to form a common estimate of the world state that then informs the execution of the overall plan; in this case, the multiple bodies act as a single body. When communication constraints make this impossible, we have
Norvig I 426
what is sometimes called a decentralized planning problem: (…) the subplan constructed for each body may need to include explicit communicative actions with other bodies.
Norvig I 429
Convention: A convention is any constraint on the selection of joint plans. Communication: In the absence of a convention, agents can use communication to achieve common knowledge of a feasible joint plan.
Plan recognition: works when a single action (or short sequence of actions) is enough to determine a joint plan unambiguously. Note that communication can work as well with competitive agents as with cooperative ones.
Norvig I 430
The most difficult multi-agent problems involve both cooperation with members of one’s own team and competition against members of opposing teams, all without centralized control.
Norvig I 431
Time constraints in plans: Planning with time constraints was first dealt with by DEVISER (Vere, 1983(1)). The representation of time in plans was addressed by Allen (1984(2)) and by Dean et al. (1990)(3) in the FORBIN system. NONLIN+ (Tate and Whiter, 1984)(4) and SIPE (Wilkins, 1988(5), 1990(6)) could reason about the allocation of limited resources to various plan steps. Forward state-space search: The two planners SAPA (Do and Kambhampati, 2001)(7) and T4 (Haslum and Geffner, 2001)(8) both used forward state-space search with sophisticated heuristics to handle actions with durations and resources.
Human heuristics: An alternative is to use very expressive action languages, but guide them by human-written domain-specific heuristics, as is done by ASPEN (Fukunaga et al., 1997)(9), HSTS (Jonsson et al., 2000)(10), and IxTeT (Ghallab and Laruelle, 1994)(11).
Norvig I 432
Hybrid planning-and-scheduling systems: ISIS (Fox et al., 1982(12); Fox, 1990(13)) has been used for job shop scheduling at Westinghouse, GARI (Descotte and Latombe, 1985)(14) planned the machining and construction of mechanical parts, FORBIN was used for factory control, and NONLIN+ was used for naval logistics planning. We chose to present planning and scheduling as two separate problems; (Cushing et al., 2007)(15) show that this can lead to incompleteness on certain problems. Scheduling: The literature on scheduling is presented in a classic survey article (Lawler et al., 1993)(16), a recent book (Pinedo, 2008)(17), and an edited handbook (Blazewicz et al., 2007)(18).
Abstraction hierarchy: The ABSTRIPS system (Sacerdoti, 1974)(19) introduced the idea of an abstraction hierarchy, whereby planning at higher levels was permitted to ignore lower-level preconditions of actions in order to derive the general structure of a working plan. Austin Tate’s Ph.D. thesis (1975b) and work by Earl Sacerdoti (1977)(20) developed the basic ideas of HTN planning in its modern form. Many practical planners, including O-PLAN and SIPE, are HTN planners. Yang (1990)(21) discusses properties of actions that make HTN planning efficient. Erol, Hendler, and Nau (1994(22), 1996(23)) present a complete hierarchical decomposition planner as well as a range of complexity results for pure HTN planners. Our presentation of HLAs and angelic semantics is due to Marthi et al. (2007(24), 2008(25)). Kambhampati et al. (1998)(26) have proposed an approach in which decompositions are just another form of plan refinement, similar to the refinements for non-hierarchical partial-order planning.
Explanation-based learning: The technique of explanation-based learning (…) has been applied in several systems as a means of generalizing previously computed plans, including SOAR (Laird et al., 1986)(27) and PRODIGY (Carbonell et al., 1989)(28).
Case-based planning: An alternative approach is to store previously computed plans in their original form and then reuse them to solve new, similar problems by analogy to the original problem. This is the approach taken by the field called case-based planning (Carbonell, 1983(29); Alterman, 1988(30); Hammond, 1989(31)). Kambhampati (1994)(32) argues that case-based planning should be analyzed as a form of refinement planning and provides a formal foundation for case-based partial-order planning.
Norvig I 433
Conformant planning: Goldman and Boddy (1996)(33) introduced the term conformant planning, noting that sensorless plans are often effective even if the agent has sensors. The first moderately efficient conformant planner was Smith and Weld’s (1998)(34) Conformant Graphplan or CGP. Ferraris and Giunchiglia (2000)(35) and Rintanen (1999)(36) independently developed SATPLAN-based conformant planners. Bonet and Geffner (2000)(37) describe a conformant planner based on heuristic search in the space of >belief states (…).
Norvig I 434
Reactive planning: In the mid-1980s, pessimism about the slow run times of planning systems led to the proposal of reflex agents called reactive planning systems (Brooks, 1986(38); Agre and Chapman, 1987)(39). PENGI (Agre and Chapman, 1987)(39) could play a (fully observable) video game by using Boolean circuits combined with a “visual” representation of current goals and the agent’s internal state. Policies: “Universal plans” (Schoppers, 1987(40), 1989(41)) were developed as a lookup table method for reactive planning, but turned out to be a rediscovery of the idea of policies that had long been used in Markov decision processes (…). >Open Universe/AI research).



1. Vere, S. A. (1983). Planning in time: Windows and durations for activities and goals. PAMI, 5, 246-267.
2. Allen, J. F. (1984). Towards a general theory of action and time. AIJ, 23, 123-154.
3. Dean, T., Kanazawa, K., and Shewchuk, J. (1990). Prediction, observation and estimation in planning and control. In 5th IEEE International Symposium on Intelligent Control, Vol. 2, pp. 645-650.
4. Tate, A. and Whiter, A. M. (1984). Planning with multiple resource constraints and an application to a naval planning problem. In Proc. First Conference on AI Applications, pp. 410-416.
5. Wilkins, D. E. (1988). Practical Planning: Extending the AI Planning Paradigm. Morgan Kaufmann.
6. Wilkins, D. E. (1990). Can AI planners solve practical problems? Computational Intelligence, 6(4), 232-246.
7. Do, M. B. and Kambhampati, S. (2003). Planning as constraint satisfaction: solving the planning graph by compiling it into CSP. AIJ, 132(2), 151-182.
8. Haslum, P. and Geffner, H. (2001). Heuristic planning with time and resources. In Proc. IJCAI-01 Workshop on Planning with Resources.
9. Fukunaga, A. S., Rabideau, G., Chien, S., and Yan, D. (1997). ASPEN: A framework for automated planning and scheduling of spacecraft control and operations. In Proc. International Symposium on AI, Robotics and Automation in Space, pp. 181-187.
10. Jonsson, A., Morris, P., Muscettola, N., Rajan, K., and Smith, B. (2000). Planning in interplanetary space: Theory and practice. In AIPS-00, pp. 177-186.
11. Ghallab, M. and Laruelle, H. (1994). Representation and control in IxTeT, a temporal planner. In AIPS-94, pp. 61-67.
12. Fox, M. S., Allen, B., and Strohm, G. (1982). Job shop scheduling: An investigation in constraint directed reasoning. In AAAI-82, pp. 155-158.
13. Fox, M. S. (1990). Constraint-guided scheduling: A short history of research at CMU. Computers in Industry, 14(1–3), 79-88.
14. Descotte, Y. and Latombe, J.-C. (1985). Making compromises among antagonist constraints in a planner. AIJ, 27, 183–217.
15. Cushing, W., Kambhampati, S., Mausam, and Weld, D. S. (2007). When is temporal planning really temporal? In IJCAI-07.
16. Lawler, E. L., Lenstra, J. K., Kan, A., and Shmoys, D. B. (1993). Sequencing and scheduling: Algorithms and complexity. In Graves, S. C., Zipkin, P. H., and Kan, A. H. G. R. (Eds.), Logistics of Production and Inventory: Handbooks in Operations Research and Management Science, Volume 4, pp. 445-522. North-Holland.
17. Pinedo, M. (2008). Scheduling: Theory, Algorithms, and Systems. Springer Verlag.
18. Blazewicz, J., Ecker, K., Pesch, E., Schmidt, G., and Weglarz, J. (2007). Handbook on Scheduling: Models and Methods for Advanced Planning (International Handbooks on Information Systems). Springer-Verlag New York, Inc.
19. Sacerdoti, E. D. (1974). Planning in a hierarchy of abstraction spaces. AIJ, 5(2), 115–135.
20. Sacerdoti, E. D. (1977). A Structure for Plans and Behavior. Elsevier/North-Holland.
21. Yang, Q. (1990). Formalizing planning knowledge for hierarchical planning. Computational Intelligence, 6, 12–24.
22. Erol, K., Hendler, J., and Nau, D. S. (1994). HTN planning: Complexity and expressivity. In AAAI-94, pp. 1123–1128.
23. Erol, K., Hendler, J., and Nau, D. S. (1996). Complexity results for HTN planning. Annals of Mathematics and Artificial Intelligence, 18(1), 69–93.
24. Marthi, B., Russell, S. J., and Wolfe, J. (2007). Angelic semantics for high-level actions. In ICAPS-07.
25. Marthi, B., Russell, S. J., and Wolfe, J. (2008). Angelic hierarchical planning: Optimal and online algorithms. In ICAPS-08.
26. Kambhampati, S., Mali, A. D., and Srivastava, B. (1998). Hybrid planning for partially hierarchical domains. In AAAI-98, pp. 882–888.
27. Laird, J., Rosenbloom, P. S., and Newell, A. (1986). Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1, 11–46.
28. Carbonell, J. G., Knoblock, C. A., and Minton, S. (1989). PRODIGY: An integrated architecture for planning and learning. Technical report CMU-CS-89-189, Computer Science Department, Carnegie-Mellon University.
29. Carbonell, J. G. (1983). Derivational analogy and its role in problem solving. In AAAI-83, pp. 64–69.
30. Alterman, R. (1988). Adaptive planning. Cognitive Science, 12, 393–422.
31. Hammond, K. (1989). Case-Based Planning: Viewing Planning as a Memory Task. Academic Press.
32. Kambhampati, S. (1994). Exploiting causal structure to control retrieval and refitting during plan reuse. Computational Intelligence, 10, 213–244.
33. Goldman, R. and Boddy, M. (1996). Expressive planning and explicit knowledge. In AIPS-96, pp. 110–117.
34. Smith, D. E. and Weld, D. S. (1998). Conformant Graphplan. In AAAI-98, pp. 889–896.
35. Ferraris, P. and Giunchiglia, E. (2000). Planning as satisfiability in nondeterministic domains. In AAAI-00.
36. Rintanen, J. (1999). Improvements to the evaluation of quantified Boolean formulae. In IJCAI-99, pp. 1192–1197.
37. Bonet, B. and Geffner, H. (2000). Planning with incomplete information as heuristic search in belief space. In AIPS-00.
38. Brooks, R. A. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2, 14–23.
39. Agre, P. E. and Chapman, D. (1987). Pengi: an implementation of a theory of activity. In IJCAI-87, pp. 268–272.
40. Schoppers, M. J. (1987). Universal plans for reactive robots in unpredictable environments. In IJCAI-87, pp. 1039–1046.
41. Schoppers, M. J. (1989). In defense of reaction plans as caches. AIMag, 10(4), 51–60.


Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010
Environment Norvig Norvig I 401
Environment/planning/real world/representation/artificial intelligence/Norvig/Russell: algorithms for planning (…) extend both the representation language and the way the planner interacts with the environment. >Planning/Norvig, >Agents/Norvig. New: [we now have] a) actions with duration and b) plans that are organized hierarchically.
Hierarchy: Hierarchy also lends itself to efficient plan construction because the planner can solve a problem at an abstract level before delving into details.
1st approach: “plan first, schedule later”: (…) we divide the overall problem into a planning phase in which actions are selected, with some ordering constraints, to meet the goals of the problem, and a later scheduling phase, in which temporal information is added to the plan to ensure that it meets resource and deadline constraints.
Norvig I 404
Critical path: Mathematically speaking, critical-path problems are easy to solve because they are defined as a conjunction of linear inequalities on the start and end times. When we introduce resource constraints, the resulting constraints on start and end times become more complicated.
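Taken concretely, the conjunction of linear inequalities is solved by a forward pass (earliest start times) and a backward pass (latest start times); actions with zero slack lie on the critical path. A minimal sketch, in which the action names and durations are invented for illustration:

```python
# Critical-path computation for actions with durations and precedence
# constraints only (no resource constraints). Forward pass: earliest
# start ES; backward pass: latest start LS; ES == LS means zero slack,
# i.e. the action is on the critical path.

def critical_path(durations, preds):
    """durations: {action: duration}; preds: {action: [predecessors]}."""
    es = {}
    def earliest(a):
        if a not in es:
            es[a] = max((earliest(p) + durations[p] for p in preds[a]),
                        default=0)
        return es[a]
    for a in durations:
        earliest(a)

    makespan = max(es[a] + durations[a] for a in durations)

    succs = {a: [b for b in durations if a in preds[b]] for a in durations}
    ls = {}
    def latest(a):
        if a not in ls:
            ls[a] = min((latest(s) for s in succs[a]),
                        default=makespan) - durations[a]
        return ls[a]
    for a in durations:
        latest(a)

    critical = [a for a in durations if es[a] == ls[a]]
    return es, ls, makespan, critical

# Hypothetical assembly example: inspection must wait for both parts.
durations = {'AddEngine': 30, 'AddWheels': 10, 'Inspect': 10}
preds = {'AddEngine': [], 'AddWheels': [],
         'Inspect': ['AddEngine', 'AddWheels']}
es, ls, makespan, critical = critical_path(durations, preds)
```

Here AddWheels has 20 units of slack (LS 20 vs. ES 0), while AddEngine and Inspect are critical; adding resource constraints would destroy exactly this easy decomposability.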
Norvig I 405
Scheduling: The “cannot overlap” constraint is a disjunction of two linear inequalities, one for each possible ordering. The introduction of disjunctions turns out to make scheduling with resource constraints NP-hard. >NP-Problems. Non-overlapping: [when we assume non-overlapping] every scheduling problem can be solved by a non-overlapping sequence that avoids all resource conflicts, provided that each action is feasible by itself. If a scheduling problem is proving very difficult, however, it may not be a good idea to solve it this way - it may be better to reconsider the actions and constraints, in case that leads to a much easier scheduling problem. Thus, it makes sense to integrate planning and scheduling by taking into account durations and overlaps during the construction of a partial-order plan.
Heuristics: partial-order planners can detect resource constraint violations in much the same way they detect conflicts with causal links. Heuristics can be devised to estimate the total completion time of a plan. This is currently an active area of research (see below).
Norvig I 406
Real world planning: AI systems will probably have to do what humans appear to do: plan at higher levels of abstraction. A reasonable plan for the Hawaii vacation might be “Go to San Francisco airport (…)” ((s) which might be in a different direction). (…) planning can occur both before and during the execution of the plan (…).
Solution: hierarchical decomposition via hierarchical task networks (HTNs).
Norvig I 407
a high-level plan achieves the goal from a given state if at least one of its implementations achieves the goal from that state. The “at least one” in this definition is crucial - not all implementations need to achieve the goal, because the agent gets
Norvig I 408
to decide which implementation it will execute. Thus, the set of possible implementations in HTN planning - each of which may have a different outcome - is not the same as the set of possible outcomes in nondeterministic planning. It can be shown that the right collection of HLAs can result in the time complexity of blind search dropping from exponential in the solution depth to linear in the solution depth, although devising such a collection of HLAs may be a nontrivial task in itself.
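The “at least one implementation” semantics can be sketched directly: refine the leftmost high-level action, trying each refinement in turn, and succeed as soon as one fully primitive plan reaches the goal. The travel domain, action names, and cost bookkeeping below are invented for illustration:

```python
# HTN-style plan search sketch: an HLA achieves the goal if at least ONE
# of its refinements does, because the agent chooses which implementation
# to execute. HLAs are strings in the library; primitive actions are
# modeled as state -> state functions.

def hierarchical_search(plan, state, goal, library):
    """Refine the leftmost HLA in `plan`; return a primitive plan or None."""
    if not plan:
        return [] if goal(state) else None
    first, rest = plan[0], plan[1:]
    if first in library:                      # an HLA: try each refinement
        for refinement in library[first]:
            result = hierarchical_search(list(refinement) + rest,
                                         state, goal, library)
            if result is not None:
                return result
        return None
    # primitive action: apply it and continue with the remaining plan
    result = hierarchical_search(rest, first(state), goal, library)
    return None if result is None else [first] + result

# Hypothetical travel domain:
drive = lambda s: {**s, 'at': 'SFO', 'cost': s['cost'] + 50}
taxi  = lambda s: {**s, 'at': 'SFO', 'cost': s['cost'] + 80}
fly   = lambda s: ({**s, 'at': 'HNL', 'cost': s['cost'] + 400}
                   if s['at'] == 'SFO' else s)

library = {
    'GoToHawaii': [('GoToSFO', fly)],
    'GoToSFO':    [(drive,), (taxi,)],        # two implementations
}
goal = lambda s: s['at'] == 'HNL'
plan = hierarchical_search(['GoToHawaii'], {'at': 'Home', 'cost': 0},
                           goal, library)
```

Note that only one refinement of GoToSFO needs to work; this is exactly why the set of implementations is not the set of outcomes of a nondeterministic action.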
Norvig I 409
Plan library: The key to HTN planning, then, is the construction of a plan library containing known methods for implementing complex, high-level actions. One method of constructing the library is to learn the methods from problem-solving experience. (>Representation/AI research, >Learning/AI research). Learning/AI: In this way, the agent can become more and more competent over time as new methods are built on top of old methods. One important aspect of this learning process is the ability to generalize the methods that are constructed, eliminating detail that is specific to the problem instance (…).
Norvig I 410
Nondeterministic action: problem: downward refinement is much too conservative for a real world environment. See >Terminology/Norvig for “demonic nondeterminism” and “angelic nondeterminism”.
Norvig I 411
Reachable sets: The key idea is that the agent can choose which element of the reachable set it ends up in when it executes the HLA; thus, an HLA with multiple refinements is more “powerful” than the same HLA (high-level action) with fewer refinements. The notion of reachable sets yields a straightforward algorithm: search among high-level plans, looking for one whose reachable set intersects the goal; once that happens, the algorithm can commit to that abstract plan, knowing that it works, and focus on refining the plan further.
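A toy version of that algorithm, with reachable sets represented extensionally as plain sets of states; the integer state space and the HLA names are invented for illustration:

```python
# Angelic-search sketch: each HLA maps a state to the SET of states
# reachable under its refinements; a high-level plan is known to work
# as soon as its reachable set intersects the goal set.
from collections import deque

def reach(states, hla):
    """Reachable set of an HLA (a set of refinements) from a set of states."""
    return {f(s) for s in states for f in hla}

def angelic_search(initial, goal_states, hlas, max_depth=4):
    """Breadth-first search among high-level plans (sequences of HLA names)."""
    frontier = deque([((), {initial})])
    while frontier:
        plan, reachable = frontier.popleft()
        if reachable & goal_states:
            return plan                       # commit to this abstract plan
        if len(plan) < max_depth:
            for name, hla in hlas.items():
                frontier.append((plan + (name,), reach(reachable, hla)))
    return None

# Each refinement is a state -> state function on an integer state:
hlas = {
    'Double':   {lambda s: 2 * s},
    'AddOrSub': {lambda s: s + 1, lambda s: s - 1},   # two refinements
}
plan = angelic_search(1, goal_states={5}, hlas=hlas)
```

The HLA with two refinements is “more powerful” precisely because its reachable set is larger, which is what lets the search commit to an abstract plan before choosing refinements.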
Norvig I 415
Unknown environment/planning/nondeterministic domains: [problems here are] sensorless planning (also known as conformant planning) for environments with no observations; contingency planning for partially observable and nondeterministic environments; and online planning and replanning for unknown environments.
Norvig I 417
Sensorless planning: In classical planning, where the closed-world assumption is made, we would assume that any fluent not mentioned in a state is false, but in sensorless (and partially observable) planning we have to switch to an open-world assumption in which states contain both positive and negative fluents, and if a fluent does not appear, its value is unknown. Thus, the belief state corresponds exactly to the set of possible worlds that satisfy the formula.
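A belief state as a set of possible worlds can be manipulated directly. The fluents below form a two-square vacuum-style toy domain, invented here for illustration; the point is that a sensorless plan solves the problem only if it achieves the goal in every world the belief state contains:

```python
# Sensorless-planning sketch: a belief state is a set of possible worlds
# (frozensets of true fluents); applying a deterministic action maps each
# world forward, and a plan solves the problem only if ALL resulting
# worlds satisfy the goal.

def update(belief, action):
    """Image of a belief state under a deterministic action."""
    return {action(world) for world in belief}

def solves(belief, plan, goal):
    for action in plan:
        belief = update(belief, action)
    return all(goal(world) for world in belief)

def suck(world):
    loc = 'A' if 'atA' in world else 'B'
    return frozenset(world - {'dirty' + loc})

def right(world):
    return frozenset(world - {'atA'} | {'atB'})

def left(world):
    return frozenset(world - {'atB'} | {'atA'})

# Initial belief state: position and dirt are completely unknown.
initial = {frozenset(w) for w in [
    {'atA', 'dirtyA', 'dirtyB'}, {'atA', 'dirtyA'}, {'atA', 'dirtyB'}, {'atA'},
    {'atB', 'dirtyA', 'dirtyB'}, {'atB', 'dirtyA'}, {'atB', 'dirtyB'}, {'atB'},
]}
goal = lambda w: 'dirtyA' not in w and 'dirtyB' not in w

# A sensorless plan that coerces every possible world into a clean state:
ok = solves(initial, [right, suck, left, suck], goal)
```

After the four actions every possible world has collapsed to the same clean state, which is exactly what “the belief state corresponds to the set of possible worlds that satisfy the formula” buys the agent.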
Norvig I 423
Online replanning: The online agent has a choice of how carefully to monitor the environment. We distinguish three levels: a) Action monitoring: before executing an action, the agent verifies that all the preconditions still hold, b) Plan monitoring: before executing an action, the agent verifies that the remaining plan will still succeed, c) Goal monitoring: before executing an action, the agent checks to see if there is a better set of goals it could be trying to achieve.
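Level a), action monitoring, can be sketched as an execute-check-replan loop. The door-world actions and the stand-in planner `plan_fn` below are invented for illustration; any planner with the same interface would do:

```python
# Action-monitoring sketch: before executing each action the agent checks
# its preconditions against the CURRENT state; on a violation it replans
# from that state instead of blindly continuing.

def run(state, goal, plan, plan_fn, actions):
    """actions: {name: (precond_fn, effect_fn)}. Returns final state or None."""
    while not goal(state):
        if not plan:
            plan = plan_fn(state, goal)
            if not plan:
                return None                  # stuck: no plan achieves the goal
        name, plan = plan[0], plan[1:]
        precond, effect = actions[name]
        if precond(state):
            state = effect(state)
        else:
            plan = plan_fn(state, goal)      # monitoring caught a violation
    return state

actions = {
    'unlock': (lambda s: True,              lambda s: s - {'locked'}),
    'open':   (lambda s: 'locked' not in s, lambda s: s | {'open'}),
    'enter':  (lambda s: 'open' in s,       lambda s: s | {'inside'}),
}

def plan_fn(state, goal):
    """Stand-in planner: unlock if needed, then open, then enter."""
    plan = []
    if 'locked' in state:
        plan.append('unlock')
    if 'open' not in state:
        plan.append('open')
    plan.append('enter')
    return plan

goal = lambda s: 'inside' in s
# The initial plan was made believing the door was unlocked; action
# monitoring detects the violated precondition of 'open' and replans.
final = run({'locked'}, goal, ['open', 'enter'], plan_fn, actions)
```

Plan and goal monitoring extend the same loop: instead of checking only the next action's preconditions, the agent simulates the remaining plan or re-evaluates its goals before each step.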
Norvig I 425
Multi-agent planning: A multibody problem is still a “standard” single-agent problem as long as the relevant sensor information collected by each body can be pooled - either centrally or within each body - to form a common estimate of the world state that then informs the execution of the overall plan; in this case, the multiple bodies act as a single body. When communication constraints make this impossible, we have
Norvig I 426
what is sometimes called a decentralized planning problem: (…) the subplan constructed for each body may need to include explicit communicative actions with other bodies.
Norvig I 429
Convention: A convention is any constraint on the selection of joint plans. Communication: In the absence of a convention, agents can use communication to achieve common knowledge of a feasible joint plan.
Plan recognition: works when a single action (or short sequence of actions) is enough to determine a joint plan unambiguously. Note that communication can work as well with competitive agents as with cooperative ones.
Norvig I 430
The most difficult multi-agent problems involve both cooperation with members of one’s own team and competition against members of opposing teams, all without centralized control.
Norvig I 431
Time constraints in plans: Planning with time constraints was first dealt with by DEVISER (Vere, 1983(1)). The representation of time in plans was addressed by Allen (1984(2)) and by Dean et al. (1990)(3) in the FORBIN system. NONLIN+ (Tate and Whiter, 1984)(4) and SIPE (Wilkins, 1988(5), 1990(6)) could reason about the allocation of limited resources to various plan steps. Forward state-space search: The two planners SAPA (Do and Kambhampati, 2001)(7) and T4 (Haslum and Geffner, 2001)(8) both used forward state-space search with sophisticated heuristics to handle actions with durations and resources.
Human heuristics: An alternative is to use very expressive action languages, but guide them by human-written domain-specific heuristics, as is done by ASPEN (Fukunaga et al., 1997)(9), HSTS (Jonsson et al., 2000)(10), and IxTeT (Ghallab and Laruelle, 1994)(11).
Norvig I 432
Hybrid planning-and-scheduling systems: ISIS (Fox et al., 1982(12); Fox, 1990(13)) has been used for job shop scheduling at Westinghouse, GARI (Descotte and Latombe, 1985)(14) planned the machining and construction of mechanical parts, FORBIN was used for factory control, and NONLIN+ was used for naval logistics planning. We chose to present planning and scheduling as two separate problems; (Cushing et al., 2007)(15) show that this can lead to incompleteness on certain problems. Scheduling: The literature on scheduling is presented in a classic survey article (Lawler et al., 1993)(16), a recent book (Pinedo, 2008)(17), and an edited handbook (Blazewicz et al., 2007)(18).
Abstraction hierarchy: The ABSTRIPS system (Sacerdoti, 1974)(19) introduced the idea of an abstraction hierarchy, whereby planning at higher levels was permitted to ignore lower-level preconditions of actions in order to derive the general structure of a working plan. Austin Tate’s Ph.D. thesis (1975b) and work by Earl Sacerdoti (1977)(20) developed the basic ideas of HTN planning in its modern form. Many practical planners, including O-PLAN and SIPE, are HTN planners. Yang (1990)(21) discusses properties of actions that make HTN planning efficient. Erol, Hendler, and Nau (1994(22), 1996(23)) present a complete hierarchical decomposition planner as well as a range of complexity results for pure HTN planners. Our presentation of HLAs and angelic semantics is due to Marthi et al. (2007(24), 2008(25)). Kambhampati et al. (1998)(26) have proposed an approach in which decompositions are just another form of plan refinement, similar to the refinements for non-hierarchical partial-order planning.
Explanation-based learning: The technique of explanation-based learning (…) has been applied in several systems as a means of generalizing previously computed plans, including SOAR (Laird et al., 1986)(27) and PRODIGY (Carbonell et al., 1989)(28).
Case-based planning: An alternative approach is to store previously computed plans in their original form and then reuse them to solve new, similar problems by analogy to the original problem. This is the approach taken by the field called case-based planning (Carbonell, 1983(29); Alterman, 1988(30); Hammond, 1989(31)). Kambhampati (1994)(32) argues that case-based planning should be analyzed as a form of refinement planning and provides a formal foundation for case-based partial-order planning.
Norvig I 433
Conformant planning: Goldman and Boddy (1996)(33) introduced the term conformant planning, noting that sensorless plans are often effective even if the agent has sensors. The first moderately efficient conformant planner was Smith and Weld’s (1998)(34) Conformant Graphplan or CGP. Ferraris and Giunchiglia (2000)(35) and Rintanen (1999)(36) independently developed SATPLAN-based conformant planners. Bonet and Geffner (2000)(37) describe a conformant planner based on heuristic search in the space of >belief states (…).
Norvig I 434
Reactive planning: In the mid-1980s, pessimism about the slow run times of planning systems led to the proposal of reflex agents called reactive planning systems (Brooks, 1986(38); Agre and Chapman, 1987)(39). PENGI (Agre and Chapman, 1987)(39) could play a (fully observable) video game by using Boolean circuits combined with a “visual” representation of current goals and the agent’s internal state. Policies: “Universal plans” (Schoppers, 1987(40), 1989(41)) were developed as a lookup-table method for reactive planning, but turned out to be a rediscovery of the idea of policies that had long been used in Markov decision processes (…). >Open Universe/AI research.



1. Vere, S. A. (1983). Planning in time: Windows and durations for activities and goals. PAMI, 5, 246-267.
2. Allen, J. F. (1984). Towards a general theory of action and time. AIJ, 23, 123-154.
3. Dean, T., Kanazawa, K., and Shewchuk, J. (1990). Prediction, observation and estimation in planning and control. In 5th IEEE International Symposium on Intelligent Control, Vol. 2, pp. 645-650.
4. Tate, A. and Whiter, A. M. (1984). Planning with multiple resource constraints and an application to a naval planning problem. In Proc. First Conference on AI Applications, pp. 410-416.
5. Wilkins, D. E. (1988). Practical Planning: Extending the AI Planning Paradigm. Morgan Kaufmann.
6. Wilkins, D. E. (1990). Can AI planners solve practical problems? Computational Intelligence, 6(4), 232-246.
7. Do, M. B. and Kambhampati, S. (2003). Planning as constraint satisfaction: solving the planning graph by compiling it into CSP. AIJ, 132(2), 151-182.
8. Haslum, P. and Geffner, H. (2001). Heuristic planning with time and resources. In Proc. IJCAI-01 Workshop on Planning with Resources.
9. Fukunaga, A. S., Rabideau, G., Chien, S., and Yan, D. (1997). ASPEN: A framework for automated planning and scheduling of spacecraft control and operations. In Proc. International Symposium on AI, Robotics and Automation in Space, pp. 181-187.
10. Jonsson, A., Morris, P., Muscettola, N., Rajan, K., and Smith, B. (2000). Planning in interplanetary space: Theory and practice. In AIPS-00, pp. 177-186.
11. Ghallab, M. and Laruelle, H. (1994). Representation and control in IxTeT, a temporal planner. In AIPS-94, pp. 61-67.
12. Fox, M. S., Allen, B., and Strohm, G. (1982). Job shop scheduling: An investigation in constraint directed reasoning. In AAAI-82, pp. 155-158.
13. Fox, M. S. (1990). Constraint-guided scheduling: A short history of research at CMU. Computers in Industry, 14(1–3), 79-88.
14. Descotte, Y. and Latombe, J.-C. (1985). Making compromises among antagonist constraints in a planner. AIJ, 27, 183–217.
15. Cushing, W., Kambhampati, S., Mausam, and Weld, D. S. (2007). When is temporal planning really temporal? In IJCAI-07.
16. Lawler, E. L., Lenstra, J. K., Kan, A., and Shmoys, D. B. (1993). Sequencing and scheduling: Algorithms and complexity. In Graves, S. C., Zipkin, P. H., and Kan, A. H. G. R. (Eds.), Logistics of Production and Inventory: Handbooks in Operations Research and Management Science, Volume 4, pp. 445-522. North-Holland.
17. Pinedo, M. (2008). Scheduling: Theory, Algorithms, and Systems. Springer Verlag.
18. Blazewicz, J., Ecker, K., Pesch, E., Schmidt, G., and Weglarz, J. (2007). Handbook on Scheduling: Models and Methods for Advanced Planning (International Handbooks on Information Systems). Springer-Verlag New York, Inc.
19. Sacerdoti, E. D. (1974). Planning in a hierarchy of abstraction spaces. AIJ, 5(2), 115–135.
20. Sacerdoti, E. D. (1977). A Structure for Plans and Behavior. Elsevier/North-Holland.
21. Yang, Q. (1990). Formalizing planning knowledge for hierarchical planning. Computational Intelligence, 6, 12–24.
22. Erol, K., Hendler, J., and Nau, D. S. (1994). HTN planning: Complexity and expressivity. In AAAI-94, pp. 1123–1128.
23. Erol, K., Hendler, J., and Nau, D. S. (1996). Complexity results for HTN planning. Annals of Mathematics and Artificial Intelligence, 18(1), 69–93.
24. Marthi, B., Russell, S. J., and Wolfe, J. (2007). Angelic semantics for high-level actions. In ICAPS-07.
25. Marthi, B., Russell, S. J., and Wolfe, J. (2008). Angelic hierarchical planning: Optimal and online algorithms. In ICAPS-08.
26. Kambhampati, S., Mali, A. D., and Srivastava, B. (1998). Hybrid planning for partially hierarchical domains. In AAAI-98, pp. 882–888.
27. Laird, J., Rosenbloom, P. S., and Newell, A. (1986). Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1, 11–46.
28. Carbonell, J. G., Knoblock, C. A., and Minton, S. (1989). PRODIGY: An integrated architecture for planning and learning. Technical report CMU-CS-89-189, Computer Science Department, Carnegie-Mellon University.
29. Carbonell, J. G. (1983). Derivational analogy and its role in problem solving. In AAAI-83, pp. 64–69.
30. Alterman, R. (1988). Adaptive planning. Cognitive Science, 12, 393–422.
31. Hammond, K. (1989). Case-Based Planning: Viewing Planning as a Memory Task. Academic Press.
32. Kambhampati, S. (1994). Exploiting causal structure to control retrieval and refitting during plan reuse. Computational Intelligence, 10, 213–244.
33. Goldman, R. and Boddy, M. (1996). Expressive planning and explicit knowledge. In AIPS-96, pp. 110–117.
34. Smith, D. E. and Weld, D. S. (1998). Conformant Graphplan. In AAAI-98, pp. 889–896.
35. Ferraris, P. and Giunchiglia, E. (2000). Planning as satisfiability in nondeterministic domains. In AAAI-00.
36. Rintanen, J. (1999). Improvements to the evaluation of quantified Boolean formulae. In IJCAI-99, pp. 1192–1197.
37. Bonet, B. and Geffner, H. (2000). Planning with incomplete information as heuristic search in belief space. In AIPS-00.
38. Brooks, R. A. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2, 14–23.
39. Agre, P. E. and Chapman, D. (1987). Pengi: an implementation of a theory of activity. In IJCAI-87, pp. 268–272.
40. Schoppers, M. J. (1987). Universal plans for reactive robots in unpredictable environments. In IJCAI-87, pp. 1039–1046.
41. Schoppers, M. J. (1989). In defense of reaction plans as caches. AIMag, 10(4), 51–60.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010

Environment Russell Norvig I 401
Environment/planning/real world/representation/artificial intelligence/Norvig/Russell: algorithms for planning (…) extend both the representation language and the way the planner interacts with the environment. >Planning/Norvig, >Agents/Norvig. New: [we now have] a) actions with duration and b) plans that are organized hierarchically.
Hierarchy: Hierarchy also lends itself to efficient plan construction because the planner can solve a problem at an abstract level before delving into details.
1st approach: “plan first, schedule later”: (…) we divide the overall problem into a planning phase in which actions are selected, with some ordering constraints, to meet the goals of the problem, and a later scheduling phase, in which temporal information is added to the plan to ensure that it meets resource and deadline constraints.
Norvig I 404
Critical path: Mathematically speaking, critical-path problems are easy to solve because they are defined as a conjunction of linear inequalities on the start and end times. When we introduce resource constraints, the resulting constraints on start and end times become more complicated.
Norvig I 405
Scheduling: The “cannot overlap” constraint is a disjunction of two linear inequalities, one for each possible ordering. The introduction of disjunctions turns out to make scheduling with resource constraints NP-hard. >NP-Problems. Non-overlapping: [when we assume non-overlapping] every scheduling problem can be solved by a non-overlapping sequence that avoids all resource conflicts, provided that each action is feasible by itself. If a scheduling problem is proving very difficult, however, it may not be a good idea to solve it this way - it may be better to reconsider the actions and constraints, in case that leads to a much easier scheduling problem. Thus, it makes sense to integrate planning and scheduling by taking into account durations and overlaps during the construction of a partial-order plan.
Heuristics: partial-order planners can detect resource constraint violations in much the same way they detect conflicts with causal links. Heuristics can be devised to estimate the total completion time of a plan. This is currently an active area of research (see below).
Norvig I 406
Real world planning: AI systems will probably have to do what humans appear to do: plan at higher levels of abstraction. A reasonable plan for the Hawaii vacation might be “Go to San Francisco airport (…)” ((s) which might be in a different direction). (…) planning can occur both before and during the execution of the plan (…).
Solution: hierarchical decomposition via hierarchical task networks (HTNs).
Norvig I 407
a high-level plan achieves the goal from a given state if at least one of its implementations achieves the goal from that state. The “at least one” in this definition is crucial - not all implementations need to achieve the goal, because the agent gets
Norvig I 408
to decide which implementation it will execute. Thus, the set of possible implementations in HTN planning - each of which may have a different outcome - is not the same as the set of possible outcomes in nondeterministic planning. It can be shown that the right collection of HLAs can result in the time complexity of blind search dropping from exponential in the solution depth to linear in the solution depth, although devising such a collection of HLAs may be a nontrivial task in itself.
Norvig I 409
Plan library: The key to HTN planning, then, is the construction of a plan library containing known methods for implementing complex, high-level actions. One method of constructing the library is to learn the methods from problem-solving experience. >Representation/AI research, >Learning/AI research.
Learning/AI: In this way, the agent can become more and more competent over time as new methods are built on top of old methods. One important aspect of this learning process is the ability to generalize the methods that are constructed, eliminating detail that is specific to the problem instance (…).
Norvig I 410
Nondeterministic action: problem: downward refinement is much too conservative for a real world environment. See >Terminology/Norvig for “demonic nondeterminism” and “angelic nondeterminism”.
Norvig I 411
Reachable sets: The key idea is that the agent can choose which element of the reachable set it ends up in when it executes the HLA; thus, an HLA with multiple refinements is more “powerful” than the same HLA (high-level action) with fewer refinements. The notion of reachable sets yields a straightforward algorithm: search among high-level plans, looking for one whose reachable set intersects the goal; once that happens, the algorithm can commit to that abstract plan, knowing that it works, and focus on refining the plan further.
Norvig I 415
Unknown environment/planning/nondeterministic domains: [problems here are] sensorless planning (also known as conformant planning) for environments with no observations; contingency planning for partially observable and nondeterministic environments; and online planning and replanning for unknown environments.
Norvig I 417
Sensorless planning: In classical planning, where the closed-world assumption is made, we would assume that any fluent not mentioned in a state is false, but in sensorless (and partially observable) planning we have to switch to an open-world assumption in which states contain both positive and negative fluents, and if a fluent does not appear, its value is unknown. Thus, the belief state corresponds exactly to the set of possible worlds that satisfy the formula.
Norvig I 423
Online replanning: The online agent has a choice of how carefully to monitor the environment. We distinguish three levels: a) Action monitoring: before executing an action, the agent verifies that all the preconditions still hold, b) Plan monitoring: before executing an action, the agent verifies that the remaining plan will still succeed, c) Goal monitoring: before executing an action, the agent checks to see if there is a better set of goals it could be trying to achieve.
Norvig I 425
Multi-agent planning: A multibody problem is still a “standard” single-agent problem as long as the relevant sensor information collected by each body can be pooled - either centrally or within each body - to form a common estimate of the world state that then informs the execution of the overall plan; in this case, the multiple bodies act as a single body. When communication constraints make this impossible, we have
Norvig I 426
what is sometimes called a decentralized planning problem: (…) the subplan constructed for each body may need to include explicit communicative actions with other bodies.
Norvig I 429
Convention: A convention is any constraint on the selection of joint plans. Communication: In the absence of a convention, agents can use communication to achieve common knowledge of a feasible joint plan.
Plan recognition: works when a single action (or short sequence of actions) is enough to determine a joint plan unambiguously. Note that communication can work as well with competitive agents as with cooperative ones.
Norvig I 430
The most difficult multi-agent problems involve both cooperation with members of one’s own team and competition against members of opposing teams, all without centralized control.
Norvig I 431
Time constraints in plans: Planning with time constraints was first dealt with by DEVISER (Vere, 1983(1)). The representation of time in plans was addressed by Allen (1984(2)) and by Dean et al. (1990)(3) in the FORBIN system. NONLIN+ (Tate and Whiter, 1984)(4) and SIPE (Wilkins, 1988(5), 1990(6)) could reason about the allocation of limited resources to various plan steps. Forward state-space search: The two planners SAPA (Do and Kambhampati, 2001)(7) and T4 (Haslum and Geffner, 2001)(8) both used forward state-space search with sophisticated heuristics to handle actions with durations and resources.
Human heuristics: An alternative is to use very expressive action languages, but guide them by human-written domain-specific heuristics, as is done by ASPEN (Fukunaga et al., 1997)(9), HSTS (Jonsson et al., 2000)(10), and IxTeT (Ghallab and Laruelle, 1994)(11).
Norvig I 432
Hybrid planning-and-scheduling systems: ISIS (Fox et al., 1982(12); Fox, 1990(13)) has been used for job shop scheduling at Westinghouse, GARI (Descotte and Latombe, 1985)(14) planned the machining and construction of mechanical parts, FORBIN was used for factory control, and NONLIN+ was used for naval logistics planning. We chose to present planning and scheduling as two separate problems; (Cushing et al., 2007)(15) show that this can lead to incompleteness on certain problems. Scheduling: The literature on scheduling is presented in a classic survey article (Lawler et al., 1993)(16), a recent book (Pinedo, 2008)(17), and an edited handbook (Blazewicz et al., 2007)(18).
Abstraction hierarchy: The ABSTRIPS system (Sacerdoti, 1974)(19) introduced the idea of an abstraction hierarchy, whereby planning at higher levels was permitted to ignore lower-level preconditions of actions in order to derive the general structure of a working plan. Austin Tate’s Ph.D. thesis (1975b) and work by Earl Sacerdoti (1977)(20) developed the basic ideas of HTN planning in its modern form. Many practical planners, including O-PLAN and SIPE, are HTN planners. Yang (1990)(21) discusses properties of actions that make HTN planning efficient. Erol, Hendler, and Nau (1994(22), 1996(23)) present a complete hierarchical decomposition planner as well as a range of complexity results for pure HTN planners. Our presentation of HLAs and angelic semantics is due to Marthi et al. (2007(24), 2008(25)). Kambhampati et al. (1998)(26) have proposed an approach in which decompositions are just another form of plan refinement, similar to the refinements for non-hierarchical partial-order planning.
Explanation-based learning: The technique of explanation-based learning (…) has been applied in several systems as a means of generalizing previously computed plans, including SOAR (Laird et al., 1986)(27) and PRODIGY (Carbonell et al., 1989)(28).
Case-based planning: An alternative approach is to store previously computed plans in their original form and then reuse them to solve new, similar problems by analogy to the original problem. This is the approach taken by the field called case-based planning (Carbonell, 1983(29); Alterman, 1988(30); Hammond, 1989(31)). Kambhampati (1994)(32) argues that case-based planning should be analyzed as a form of refinement planning and provides a formal foundation for case-based partial-order planning.
Norvig I 433
Conformant planning: Goldman and Boddy (1996)(33) introduced the term conformant planning, noting that sensorless plans are often effective even if the agent has sensors. The first moderately efficient conformant planner was Smith and Weld’s (1998)(34) Conformant Graphplan or CGP. Ferraris and Giunchiglia (2000)(35) and Rintanen (1999)(36) independently developed SATPLAN-based conformant planners. Bonet and Geffner (2000)(37) describe a conformant planner based on heuristic search in the space of >belief states (…).
Norvig I 434
Reactive planning: In the mid-1980s, pessimism about the slow run times of planning systems led to the proposal of reflex agents called reactive planning systems (Brooks, 1986(38); Agre and Chapman, 1987)(39). PENGI (Agre and Chapman, 1987)(39) could play a (fully observable) video game by using Boolean circuits combined with a “visual” representation of current goals and the agent’s internal state. Policies: “Universal plans” (Schoppers, 1987(40), 1989(41)) were developed as a lookup table method for reactive planning, but turned out to be a rediscovery of the idea of policies that had long been used in Markov decision processes (…).
>Open Universe/AI research.
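Schoppers' "universal plans" amount to policies: lookup tables mapping every possible state to an action, consulted reactively at each step. A minimal sketch in Python (the one-dimensional grid world and all names here are invented for illustration, not taken from Schoppers' system):

```python
def make_policy(goal, states):
    """Build a universal plan: an action for *every* possible state."""
    policy = {}
    for s in states:
        if s < goal:
            policy[s] = "right"
        elif s > goal:
            policy[s] = "left"
        else:
            policy[s] = "stay"
    return policy

def run(policy, start, goal, max_steps=20):
    """Reactive execution: observe the state, look up the action, repeat."""
    s = start
    for _ in range(max_steps):
        if s == goal:
            break
        s += 1 if policy[s] == "right" else -1
    return s

# The same table handles any starting state without replanning.
policy = make_policy(goal=7, states=range(10))
print(run(policy, start=2, goal=7), run(policy, start=9, goal=7))  # 7 7
```

Because the table covers the whole state space, execution needs no replanning when the world deviates from an expected trajectory; this is precisely the policy idea rediscovered from Markov decision processes.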

1. Vere, S. A. (1983). Planning in time: Windows and durations for activities and goals. PAMI, 5, 246-267.
2. Allen, J. F. (1984). Towards a general theory of action and time. AIJ, 23, 123-154.
3. Dean, T., Kanazawa, K., and Shewchuk, J. (1990). Prediction, observation and estimation in planning and control. In 5th IEEE International Symposium on Intelligent Control, Vol. 2, pp. 645-650.
4. Tate, A. and Whiter, A. M. (1984). Planning with multiple resource constraints and an application to a naval planning problem. In Proc. First Conference on AI Applications, pp. 410-416.
5. Wilkins, D. E. (1988). Practical Planning: Extending the AI Planning Paradigm. Morgan Kaufmann.
6. Wilkins, D. E. (1990). Can AI planners solve practical problems? Computational Intelligence, 6(4), 232-246.
7. Do, M. B. and Kambhampati, S. (2003). Planning as constraint satisfaction: solving the planning graph by compiling it into CSP. AIJ, 132(2), 151-182.
8. Haslum, P. and Geffner, H. (2001). Heuristic planning with time and resources. In Proc. IJCAI-01 Workshop on Planning with Resources.
9. Fukunaga, A. S., Rabideau, G., Chien, S., and Yan, D. (1997). ASPEN: A framework for automated planning and scheduling of spacecraft control and operations. In Proc. International Symposium on AI, Robotics and Automation in Space, pp. 181–187.
10. Jonsson, A., Morris, P., Muscettola, N., Rajan, K., and Smith, B. (2000). Planning in interplanetary space: Theory and practice. In AIPS-00, pp. 177-186.
11. Ghallab, M. and Laruelle, H. (1994). Representation and control in IxTeT, a temporal planner. In AIPS-94, pp. 61-67.
12. Fox, M. S., Allen, B., and Strohm, G. (1982). Job shop scheduling: An investigation in constraint directed reasoning. In AAAI-82, pp. 155-158.
13. Fox, M. S. (1990). Constraint-guided scheduling: A short history of research at CMU. Computers in Industry, 14(1–3), 79–88.
14. Descotte, Y. and Latombe, J.-C. (1985). Making compromises among antagonist constraints in a planner. AIJ, 27, 183–217.
15. Cushing, W., Kambhampati, S., Mausam, and Weld, D. S. (2007). When is temporal planning really temporal? In IJCAI-07.
16. Lawler, E. L., Lenstra, J. K., Kan, A., and Shmoys, D. B. (1993). Sequencing and scheduling: Algorithms and complexity. In Graves, S. C., Zipkin, P. H., and Kan, A. H. G. R. (Eds.), Logistics of Production and Inventory: Handbooks in Operations Research and Management Science, Volume 4, pp. 445 - 522. North-Holland.
17. Pinedo, M. (2008). Scheduling: Theory, Algorithms, and Systems. Springer Verlag.
18. Blazewicz, J., Ecker, K., Pesch, E., Schmidt, G., and Weglarz, J. (2007). Handbook on Scheduling: Models and Methods for Advanced Planning (International Handbooks on Information Systems). Springer-Verlag New York, Inc.
19. Sacerdoti, E. D. (1974). Planning in a hierarchy of abstraction spaces. AIJ, 5(2), 115–135.
20. Sacerdoti, E. D. (1977). A Structure for Plans and Behavior. Elsevier/North-Holland.
21. Yang, Q. (1990). Formalizing planning knowledge for hierarchical planning. Computational Intelligence, 6, 12–24.
22. Erol, K., Hendler, J., and Nau, D. S. (1994). HTN planning: Complexity and expressivity. In AAAI-94, pp. 1123–1128.
23. Erol, K., Hendler, J., and Nau, D. S. (1996). Complexity results for HTN planning. AIJ, 18(1), 69–93.
24. Marthi, B., Russell, S. J., and Wolfe, J. (2007). Angelic semantics for high-level actions. In ICAPS-07.
25. Marthi, B., Russell, S. J., and Wolfe, J. (2008). Angelic hierarchical planning: Optimal and online algorithms. In ICAPS-08.
26. Kambhampati, S., Mali, A. D., and Srivastava, B. (1998). Hybrid planning for partially hierarchical domains. In AAAI-98, pp. 882–888.
27. Laird, J., Rosenbloom, P. S., and Newell, A. (1986). Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1, 11–46.
28. Carbonell, J. G., Knoblock, C. A., and Minton, S. (1989). PRODIGY: An integrated architecture for planning and learning. Technical report CMU-CS-89-189, Computer Science Department, Carnegie-Mellon University.
29. Carbonell, J. G. (1983). Derivational analogy and its role in problem solving. In AAAI-83, pp. 64–69.
30. Alterman, R. (1988). Adaptive planning. Cognitive Science, 12, 393–422.
31. Hammond, K. (1989). Case-Based Planning: Viewing Planning as a Memory Task. Academic Press.
32. Kambhampati, S. (1994). Exploiting causal structure to control retrieval and refitting during plan reuse. Computational Intelligence, 10, 213–244.
33. Goldman, R. and Boddy, M. (1996). Expressive planning and explicit knowledge. In AIPS-96, pp. 110–117.
34. Smith, D. E. and Weld, D. S. (1998). Conformant Graphplan. In AAAI-98, pp. 889–896.
35. Ferraris, P. and Giunchiglia, E. (2000). Planning as satisfiability in nondeterministic domains. In AAAI-00, pp. 748–753.
36. Rintanen, J. (1999). Improvements to the evaluation of quantified Boolean formulae. In IJCAI-99, pp. 1192–1197.
37. Bonet, B. and Geffner, H. (2000). Planning with incomplete information as heuristic search in belief space. In AIPS-00, pp. 52–61.
38. Brooks, R. A. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2, 14–23.
39. Agre, P. E. and Chapman, D. (1987). Pengi: an implementation of a theory of activity. In IJCAI-87, pp. 268–272.
40. Schoppers, M. J. (1987). Universal plans for reactive robots in unpredictable environments. In IJCAI-87, pp. 1039–1046.
41. Schoppers, M. J. (1989). In defense of reaction plans as caches. AIMag, 10(4), 51–60.

Russell I
B. Russell/A.N. Whitehead
Principia Mathematica Frankfurt 1986

Russell II
B. Russell
The ABC of Relativity, London 1958, 1969
German Edition:
Das ABC der Relativitätstheorie Frankfurt 1989

Russell IV
B. Russell
The Problems of Philosophy, Oxford 1912
German Edition:
Probleme der Philosophie Frankfurt 1967

Russell VI
B. Russell
"The Philosophy of Logical Atomism", in: B. Russell, Logic and Knowledge, ed. R. Ch. Marsh, London 1956, pp. 200-202
German Edition:
Die Philosophie des logischen Atomismus
In
Eigennamen, U. Wolf (Hg) Frankfurt 1993

Russell VII
B. Russell
On the Nature of Truth and Falsehood, in: B. Russell, The Problems of Philosophy, Oxford 1912 - Dt. "Wahrheit und Falschheit"
In
Wahrheitstheorien, G. Skirbekk (Hg) Frankfurt 1996


Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010
Explanation Goodman IV 165
Explanation: a basic term is not defined, but explained by means of its different varieties. >Definitions.
---
II 67
Reduction sentences/Carnap: if we want to construct a language of science, we must take some descriptive (i.e. not logical) expressions as basic expressions. Other expressions can then be introduced by means of reduction sentences. >Reduction, >Reducibility, >Reductionism.
II 68
GoodmanVsCarnap/reduction sentences: [the whole thing is] pretty absurd (...) in my opinion. Philosophy has the task of explaining science (and everyday language), not of describing it. The explanation must refer to the pre-systematic use of the terms under consideration, but does not have to adhere to their ordering. It is all about economy and unification.

G IV
N. Goodman
Catherine Z. Elgin
Reconceptions in Philosophy and Other Arts and Sciences, Indianapolis 1988
German Edition:
Revisionen Frankfurt 1989

Goodman I
N. Goodman
Ways of Worldmaking, Indianapolis/Cambridge 1978
German Edition:
Weisen der Welterzeugung Frankfurt 1984

Goodman II
N. Goodman
Fact, Fiction and Forecast, New York 1982
German Edition:
Tatsache Fiktion Voraussage Frankfurt 1988

Goodman III
N. Goodman
Languages of Art. An Approach to a Theory of Symbols, Indianapolis 1976
German Edition:
Sprachen der Kunst Frankfurt 1997

Generalization Gärdenfors I 126
Generalization/Gärdenfors: Three types of generalization: 1. The hierarchy in the generalization of categories corresponds to the logical relations of generality among the nouns.
2. Similarity relations between categories
3. According to Rosch (1975, 1978)(1)(2), superordinate categories share far fewer common attributes (domains) than basic categories.
>Categories, >Categorization, >Classification, >Ordering,
>Similarity, >Attributes, >Hierarchies.

1. Rosch, E. (1975). Cognitive representations of semantic categories. Journal of Experimental Psychology: General, 104, 192–233.
2. Rosch, E. (1978). Prototype classification and logical classification: The two systems. In E. Scholnik (Ed.), New trends in cognitive representation: Challenges to Piaget’s theory (pp. 73–86). Hillsdale, NJ: Erlbaum.

Gä I
P. Gärdenfors
The Geometry of Meaning Cambridge 2014

Induced Value Theory Economic Theories Parisi I 82
Induced value theory/Economic theories/Sullivan/Holt: Experimental control over subjects’ preferences is especially important in this abstract and small-scale context. Whether studying supply and demand, bargaining, or various game-theoretic behaviors, it is generally convenient and often necessary for the researcher to know something about subjects’ preference primitives in order to understand the results of the experiment relative to theoretical predictions. The theory of induced valuation is the tool experimental economists use to gain control over subjects’ preferences (see Smith, 1976)(1).* >Preferences.
Put simplistically, the idea is that a human subject with non-satiable preferences for some valuable resource (usually money) can be induced to exhibit nearly any preference ordering in an experiment by varying the shape of an applicable payoff function.
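The mechanism can be sketched in a few lines of Python (the quadratic payoff schedule and the numbers are invented for illustration): a subject who simply prefers more money to less will rank the experimental alternatives exactly as the experimenter's payoff function ranks them, so the experimenter has induced a single-peaked preference ordering.

```python
def payoff(q):
    """Experimenter-chosen redemption value for holding q units."""
    return 10 * q - q * q  # concave schedule: peak at q = 5

# A money-loving subject ranks alternatives by their payoff.
alternatives = [0, 2, 5, 8, 10]
induced_ranking = sorted(alternatives, key=payoff, reverse=True)
print(induced_ranking[0])  # the induced most-preferred quantity: 5
```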
Problems/collateral preferences: (...) experimental economists are not naïve about the sometimes uncontrollable collateral preferences of subjects. Universal experiences such as boredom and effort avoidance, for example, apply in economics experiments the same as anywhere else. To some extent, these preferences may be controlled by increasing the payoffs associated with experimental choices. But preferences over social stigmatization or perceived degradation are more serious obstacles that may not be controllable in many experiments.
Solution: how to account for such uncontrolled preferences is a complicated question best addressed on an application-by-application basis.

* For additional discussion of preference induction, see Friedman and Sunder (1994(2), S 2.3), Davis and Holt (1993(3), p. 24), and Holt (2007(4), pp. 10-11).
>Vernon L. Smith, >Method, >Experiments.

1. Smith, V. L. (1976). “Experimental economics: Induced value theory.” American Economic Review 66(2): 274–279.
2. Friedman, D. and S. Sunder (1994). Experimental Methods: A Primer for Economists. New York: Cambridge University Press.
3. Davis, D. D. and C. A. Holt (1993). Experimental Economics. Princeton, NJ: Princeton University Press.
4. Holt, C. A. (2007). Markets, Games, & Strategic Behavior. Boston, MA: Pearson Education, Inc.

Sullivan, Sean P. and Charles A. Holt. „Experimental Economics and the Law“ In: Parisi, Francesco (ed) (2017). The Oxford Handbook of Law and Economics. Vol 1: Methodology and Concepts. NY: Oxford University Press.


Parisi I
Francesco Parisi (Ed)
The Oxford Handbook of Law and Economics: Volume 1: Methodology and Concepts New York 2017
Law Morris Gaus I 201
Law/Morris: (...) recourse to sanctions and force, (...) does not mean that laws cannot provide reasons or motivate without such sanctions or that they must presuppose them. The law claims authority, and that claim may often be valid. Unless one assumes that norms per se cannot be reasons, there is no reason to insist that legal rules must necessarily be backed up with sanctions. But given human nature we should expect them to be an important part of virtually all legal and political orders. >Sanctions/Morris, >Coercion/Morris, >Coercion/Political philosophy, >Command/Hart.
What does it mean to say that law is ultimately backed by sanctions or ultimately a matter of force? The term 'ultimate' is one of the most opaque in philosophy and social theory and should be used with care. In some contexts the term has a clear sense. An authority, for instance, may be ultimate if it is the highest authority. This idea presupposes that authorities constitute an ordering (often a strict ordering), and that the highest authority is the last one in a certain chain or continuum of authorities.
MorrisVs: Even if we were able to find in every legal system a hierarchical ordering of authorities, it is very unlikely that powers generally will be so ordered. That is, it is very unlikely that we can order power relations in this way, so that for any pair of powers one is greater than the other and the set of all powers is an ordering (i.e. transitive). If this is right, it means that the concept of an ultimate power will be ill-defined. This means that it is unclear and likely misleading to talk of 'ultimate' powers, for there may never be one power that is so placed that it is 'ultimate' or 'final' (see Morris, 1998(1): ch. 8).

1. Morris, Christopher W. (1998) An Essay on the Modern State. Cambridge: Cambridge University Press.

Morris, Christopher W. 2004. „The Modern State“. In: Gaus, Gerald F. & Kukathas, Chandran 2004. Handbook of Political Theory. SAGE Publications


Gaus I
Gerald F. Gaus
Chandran Kukathas
Handbook of Political Theory London 2004
Learning Papert Minsky I 102
Learning/Papert/Minsky: Some of the most crucial steps in mental growth are based not simply on acquiring new skills, but on acquiring new administrative ways to use what one already knows.
[Context: e.g. the problem of how small children judge quantities, as shown by Piaget: four- and five-year-old children believe that when water is poured from a short wide glass into a tall thin glass, there is more water in the latter.(1)]

Solution/Artificial Intelligence/software agents/Minsky: [we use] middle-level managers (...) [to] form a new, intermediate layer that groups together certain sets of lower-level skills.
Papert/Minsky: Papert's principle suggests that the processes which assemble agents into groups must somehow exploit relationships among the skills of those agents.
Cf. >Software agents, >Knowledge, >Prior knowledge, >Ordering,
>Neural networks, >Artificial neural networks.

1. David Klahr, ”Revisiting Piaget. A Perspective from Studies of Children’s Problem-solving Abilities”, in: Alan M. Slater and Paul C. Quinn (eds.) 2012. Developmental Psychology. Revisiting the Classic Studies. London: Sage Publications


Minsky I
Marvin Minsky
The Society of Mind New York 1985

Minsky II
Marvin Minsky
Semantic Information Processing Cambridge, MA 2003
Legal Entrepreneurship Austrian School Parisi I 283
Legal Entrepreneurship/Austrian school: Whitman (2002)(1) (…) extends the idea of entrepreneurship to the role played by lawyers and litigants. He examines how legal entrepreneurs discover and exploit opportunities to change legal rules, either the creation of new rules or the reinterpretation of existing ones to benefit themselves and their clients. Harper: Harper (2013)(2) believes that the entrepreneurial approach lays the groundwork for explaining the open-ended and evolving nature of the legal process: it shows how the structure of property rights can undergo continuous endogenous change as a result of entrepreneurial actions within the legal system itself. The most important factor differentiating the entrepreneurship of the market process from legal entrepreneurship is the absence of the discipline of monetary profit and loss in the latter case. Although money may change hands in the process of legal entrepreneurship, its outputs may not be valued according to market prices, especially when there is a public-goods quality to the rule at issue. Whether effective feedback mechanisms exist in these contexts is therefore an open question.
Martin: Martin argues that, in such structures, the feedback mechanism in polities is not as tight as feedback in the market mechanism, and therefore ideology plays a greater role in such decision-making (Martin, 2010)(3). Legal entrepreneurship can be coordinating and yet also increase uncertainty and conflicts in society. It all depends on the kind of legal order in operation and the mechanism by which it is generated and maintained.
Rubin/Priest: Rubin (1977)(4) and Priest (1977)(5) originally analyzed how the openly competitive legal process tends to promote economic efficiency. They more recently point out that the common law system has succumbed to interest group pressures and has deviated from producing efficient rules (Tullock, 2005/1980(6); Tullock, 2005/1997(7); Priest, 1991). They argue that litigation efforts by private parties can explain both the common law's historic tendency to produce efficient rules as well as its more recent evolution away from efficiency in favor of wealth redistribution through the intrusion of strong interest groups into political and legal processes. Zywicki: Zywicki (2003)(8) describes the common law system in the Middle Ages as polycentric. He focuses on three institutional features of the formative years of the common law system. First, courts competed in overlapping jurisdictions and judges competed for litigants. Second, there was a weak rule of precedent instead of the present-day stare decisis rule. And third, legal rules were mostly default rules, which parties could contract around, rather than mandatory rules. These features are missing in the present-day common law system, which is non-competitive, has strong rules of precedent,
Parisi I 284
and is dominated by mandatory rules. The efficiency claims pertain to a social system grounded in private ordering where those who are subject to those legal rules select the rules in open competition. Rajagopalan/Wagner: Rajagopalan and Wagner (2013)(9) argue that the inefficiency claims pertaining to the current system of common law rules are a result of the entrepreneurial action within the contemporary system of the "entangled political economy." The entangled political economy is essentially a "hybrid" of a monocentric state structure interacting with polycentric or private ordering, encouraging "parasitical" entrepreneurship within the legal system (Podemska-Mikluch and Wagner, 2010)(10). Rajagopalan (2015)(11) provides India as a case study to discuss a system of rules incongruent to the economy, consequently giving rise to "parasitical" entrepreneurial action and entanglement of economic and legal orders. There is also "political entrepreneurship" within a given constitutional or governance structure that seeks to create coalitions to effect specific legislation or transfers of wealth (rent seeking). Martin and Thomas (2013)(12) describe such political entrepreneurship at different levels of the institutional structure, at the policy level, legislative level, or the constitutional level. These non-market orders determine the precise form that entrepreneurship takes (Boettke and Coyne, 2009(13); Boettke, Coyne, and Leeson, 2008(14)). Political entrepreneurship may also attempt to change higher-level rules, like property rights systems, constitutional constraints, and so forth, as a means to gain rents and transfers within an economy (Rajagopalan, 2016)(15).

1. Whitman, D. G. (2002). “Legal Entrepreneurship and Institutional Change.” Journal des Economistes et des Etudes Humaines 12(2): 1–11.
2. Harper, D. A. (2013). “Property rights, entrepreneurship and coordination.” Journal of Economic Behavior and Organization 88: 62–77.
3. Martin, A. (2010). “Emergent Politics and the Power of Ideas.” Studies in Emergent Order 3: 212–245.
4. Rubin, P. H. (1977). “Why is the Common Law Efficient?” Journal of Legal Studies 6(1): 51–63.
5. Priest, G. L. (1977). “The Common Law Process and the Selection of Efficient Rules.” Journal of Legal Studies 6(1): 65–77.
6. Tullock, G. (2005/1980). “Trials on Trial: The Pure Theory of Legal Procedure,” in C. Rowley, ed., The Selected Works of Gordon Tullock, Vol. IX. Indianapolis, IN: Liberty Fund.
7. Tullock, G. (2005/1997). “The Case Against the Common Law,” in C. Rowley, ed., The Selected Works of Gordon Tullock, Vol. IX. Indianapolis, IN: Liberty Fund.
8. Zywicki, T. J. (2003). “The Rise and Fall of Efficiency in the Common Law: A Supply Side Analysis.” Northwestern University Law Review 97(4): 1551–1633.
9. Rajagopalan, S. and R. Wagner (2013). “Legal Entrepreneurship within Alternative Systems of Political Economy.” American Journal of Entrepreneurship 6(1): 24–36.
10. Podemska-Mikluch, M. and R. W. Wagner (2010). “Entangled Political Economy and the Two Faces of Entrepreneurship.” Journal of Public Finance and Public Choice 28(2–3): 99–114.
11. Rajagopalan, S. (2015). “Incompatible institutions: socialism versus constitutionalism in India.” Constitutional Political Economy 26(3): 328–355.
12. Martin, A. and D. Thomas (2013). “Two-tiered political entrepreneurship and the congressional committee system.” Public Choice 154(1): 21–37.
13. Boettke, P. J. and Coyne, C. J. (2009). Context matters: Institutions and entrepreneurship. Hanover: MA, Now Publishers Inc.
14. Boettke, P. J., C. J. Coyne, and P. T. Leeson (2008). “Institutional Stickiness and the New Development Economics.” American Journal of Economics and Sociology 67(2): 331–358.
15. Rajagopalan, S. (2016). “Constitutional Change: A public choice analysis,” in Sujit Choudhary, Pratap Bhanu Mehta, and Madhav Khosla, eds., The Oxford Handbook of the Indian Constitution. New York: Oxford University Press, pp 127–142.

Rajagopalan, Shruti and Mario J. Rizzo “Austrian Perspectives on Law and Economics.” In: Parisi, Francesco (ed) (2017). The Oxford Handbook of Law and Economics. Vol 1: Methodology and Concepts. NY: Oxford University.


Parisi I
Francesco Parisi (Ed)
The Oxford Handbook of Law and Economics: Volume 1: Methodology and Concepts New York 2017
Loewenheim Putnam V 54 ff
Loewenheim/reference/PutnamVsTradition: Loewenheim tries to fix the intension and extension of single expressions via the determination of the truth values for whole sentences.
V 56f
PutnamVsOperationalism: e.g. (1) "E and a cat is on the mat." If we re-interpret this with cherries and trees, all truth values remain unchanged. Cat* to mat*:
a) some cats on some mats and some cherries on some trees,
b) ditto, but no cherry on a tree,
c) none of these cases.
Definition cat*: x is a cat* iff. a) and x = cherry, or b) and x = cat or c) and x = cherry. Definition mat*: x = mat* iff. a) and x = tree or b) and x = mat or c) and x = quark.
Ad c) Here all respective sentences become false ((s) "cat* to mat*" is the more comprehensive (disjunctive) statement and therefore true in all worlds a) or b)).
Putnam: cat will be enhanced to cat* by reinterpretation. Then there might be infinitely many reinterpretations of predicates that will always attribute the right truth value. Then we might even hold "impression" constant as the only expression. The reference will be undetermined because of the truth conditions for whole sentences (>Gavagai).
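The permutation trick behind cat*/mat* can be made concrete with a toy finite model (the four-element domain, the extensions, and the permutation below are invented for illustration): reinterpreting every predicate through a permutation of the domain leaves the truth value of a constant-free sentence like "some cat is on some mat" unchanged.

```python
U = {"a", "b", "c", "d"}   # toy domain of individuals
cat = {"a"}                # extension of "cat"
mat = {"b"}                # extension of "mat"
on = {("a", "b")}          # extension of "is on"

def some_cat_on_some_mat(cat, mat, on):
    """Truth value of 'some cat is on some mat' under given extensions."""
    return any(x in cat and y in mat and (x, y) in on for x in U for y in U)

# An arbitrary permutation of the domain turns "cat" into "cat*", etc.
p = {"a": "c", "b": "d", "c": "a", "d": "b"}
cat_star = {p[x] for x in cat}
mat_star = {p[x] for x in mat}
on_star = {(p[x], p[y]) for (x, y) in on}

# Truth values of whole sentences cannot distinguish the interpretations.
print(some_cat_on_some_mat(cat, mat, on),
      some_cat_on_some_mat(cat_star, mat_star, on_star))  # True True
```

The two interpretations assign different extensions, yet agree on the truth value of the sentence; scaled up to all sentences and all worlds, this is the underdetermination of reference by truth conditions.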
V 58
We can even reinterpret "sees" (as sees*) so that the sentences "Otto sees a cat" and "Otto sees* a cat*" have the same truth values in every world.
V 61
Which properties are intrinsic or extrinsic is relative to the decision, which predicates we use as basic concepts, cat or cat*. Properties are not in themselves extrinsic/intrinsic.
V 286ff
Loewenheim/Putnam: Theorem: Let S be a language with predicates F1, F2, ... Fk. Let I be an interpretation in the sense that each predicate of S gets an intension. Then there is a second interpretation J that does not agree with I but makes true, in every possible world, the same sentences that I makes true.
Proof: Let W1, W2, ... be all the possible worlds in a well-ordering, let Uj be the set of possible individuals existing in world Wj, and let Rij be the set forming the extension of the predicate Fi in the possible world Wj. The structure [Uj; Rij (i = 1, 2, ... k)] is the "intended model" of S in world Wj relative to I (i.e. Uj is the domain of S in world Wj, and Rij is, for i = 1, 2, ... k, the extension of the predicate Fi in Wj). Let Pj be a permutation of Uj, and let J be the interpretation of S which assigns to the predicate Fi (i = 1, 2, ... k) the following intension: the function fi(W) which has the value Pj(Rij) in every possible world Wj. In other words: the extension of Fi in every world Wj under the interpretation J is defined to be Pj(Rij). Because [Uj; Pj(Rij) (i = 1, 2, ... k)] is a model for the same set of sentences as [Uj; Rij (i = 1, 2, ... k)] (because of the isomorphism), the same sentences are true under J as under I in every possible world. Yet J differs from I in every world in which at least one predicate has a non-trivial extension.
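The construction in the proof can be restated compactly; this only repackages the argument above, with P_j written explicitly as the permutation of U_j that the proof invokes:

```latex
% For each world $W_j$, let $P_j$ be a permutation of the domain $U_j$.
% The interpretations $I$ and $J$ assign to each predicate $F_i$ the extensions
\begin{align*}
  I &: F_i \longmapsto R_{ij} && \text{(intended extension in } W_j\text{)}\\
  J &: F_i \longmapsto P_j(R_{ij}) && \text{(permuted extension in } W_j\text{)}
\end{align*}
% Since $P_j$ is a bijection,
\[
  \langle U_j;\, P_j(R_{1j}),\dots,P_j(R_{kj})\rangle \;\cong\; \langle U_j;\, R_{1j},\dots,R_{kj}\rangle,
\]
% so exactly the same sentences of $S$ are true under $J$ as under $I$ in
% every world, although $J \neq I$ wherever some $R_{ij}$ is non-trivial.
```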
V 66
Loewenheim/intention/meaning/Putnam: this is no solution, because having intentions presupposes the ability to refer to things. Intention/mental state: "intention" is ambiguous: "pure" mental states, e.g. pain; "impure" ones, e.g. knowing that snow is white, which, unlike pain, does not depend on me alone (>twin earth). Non-bracketed belief presupposes that there really is water (twin earth). Intentions are not mental events that evoke reference.
V 70
Reference/Loewenheim/PutnamVsField: a rule like "x refers to y iff. x is in relation R to y" does not help: even if we know that it is true, R could be any kind of relation (while Field assumes that it is physical). ---
I (d) 102ff
E.g. the sentence: (1) ~(ER)(R is 1:1. The domain of R is N. The range of R is S). Problem: if we replace S by the set of real numbers (in our favourite set theory), then (1) will be a theorem. Our set theory will then say that a certain set ("S") is not countable. Then S must be non-countable in all models of our set theory (e.g. Zermelo-Fraenkel, ZF). Loewenheim: his theorem now tells us that there is no theory with only uncountable models. This looks like a contradiction. But it is not the real antinomy. Solution: (1) "tells us" that S is non-countable only if the quantifier (ER) is interpreted in such a way that it ranges over all relations of N x S.
I (d) 103
But if we choose a countable model for the language of our set theory, then "(ER)" will not range over all relations but only over the relations in the model. Then (1) tells us only that S is uncountable in a relative sense of "uncountable". "Finite"/"infinite" are then relative within an axiomatic set theory. Problem: "unintended" models that should be uncountable will "in reality" be countable.
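The relativity described here rests on the downward Löwenheim–Skolem theorem, which for a countable first-order language can be stated as:

```latex
% Downward Loewenheim-Skolem theorem (countable first-order language):
\text{If a set } T \text{ of first-order sentences has a model, then } T
\text{ has a model whose domain is at most countable.}
```

Applied to ZF itself, this yields a countable model in which the sentence "S is uncountable" is still true, which is exactly the relativity of "uncountable" at issue.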
Skolem shows that the whole use of our language (i.e. theoretical and operational conditions) does not determine the "uniquely intended interpretation". Solution: Platonism postulates "magical reference". Realism offers no solution.
I (d) 105
In the end the sentences of set theory have no fixed truth value.
I (d) 116
Solution: thesis: we have to define interpretation in another way than by models.

Putnam I
Hilary Putnam
Von einem Realistischen Standpunkt
In
Von einem realistischen Standpunkt, Vincent C. Müller Frankfurt 1993

Putnam I (a)
Hilary Putnam
Explanation and Reference, In: Glenn Pearce & Patrick Maynard (eds.), Conceptual Change. D. Reidel. pp. 196--214 (1973)
In
Von einem realistischen Standpunkt, Vincent C. Müller Reinbek 1993

Putnam I (b)
Hilary Putnam
Language and Reality, in: Mind, Language and Reality: Philosophical Papers, Volume 2. Cambridge University Press. pp. 272-90 (1995
In
Von einem realistischen Standpunkt, Vincent C. Müller Reinbek 1993

Putnam I (c)
Hilary Putnam
What is Realism? in: Proceedings of the Aristotelian Society 76 (1975):pp. 177 - 194.
In
Von einem realistischen Standpunkt, Vincent C. Müller Reinbek 1993

Putnam I (d)
Hilary Putnam
Models and Reality, Journal of Symbolic Logic 45 (3), 1980:pp. 464-482.
In
Von einem realistischen Standpunkt, Vincent C. Müller Reinbek 1993

Putnam I (e)
Hilary Putnam
Reference and Truth
In
Von einem realistischen Standpunkt, Vincent C. Müller Reinbek 1993

Putnam I (f)
Hilary Putnam
How to Be an Internal Realist and a Transcendental Idealist (at the Same Time) in: R. Haller/W. Grassl (eds): Sprache, Logik und Philosophie, Akten des 4. Internationalen Wittgenstein-Symposiums, 1979
In
Von einem realistischen Standpunkt, Vincent C. Müller Reinbek 1993

Putnam I (g)
Hilary Putnam
Why there isn’t a ready-made world, Synthese 51 (2):205--228 (1982)
In
Von einem realistischen Standpunkt, Vincent C. Müller Reinbek 1993

Putnam I (h)
Hilary Putnam
Pourqui les Philosophes? in: A: Jacob (ed.) L’Encyclopédie PHilosophieque Universelle, Paris 1986
In
Von einem realistischen Standpunkt, Vincent C. Müller Reinbek 1993

Putnam I (i)
Hilary Putnam
Realism with a Human Face, Cambridge/MA 1990
In
Von einem realistischen Standpunkt, Vincent C. Müller Reinbek 1993

Putnam I (k)
Hilary Putnam
"Irrealism and Deconstruction", 6. Giford Lecture, St. Andrews 1990, in: H. Putnam, Renewing Philosophy (The Gifford Lectures), Cambridge/MA 1992, pp. 108-133
In
Von einem realistischen Standpunkt, Vincent C. Müller Reinbek 1993

Putnam II
Hilary Putnam
Representation and Reality, Cambridge/MA 1988
German Edition:
Repräsentation und Realität Frankfurt 1999

Putnam III
Hilary Putnam
Renewing Philosophy (The Gifford Lectures), Cambridge/MA 1992
German Edition:
Für eine Erneuerung der Philosophie Stuttgart 1997

Putnam IV
Hilary Putnam
"Minds and Machines", in: Sidney Hook (ed.) Dimensions of Mind, New York 1960, pp. 138-164
In
Künstliche Intelligenz, Walther Ch. Zimmerli/Stefan Wolf Stuttgart 1994

Putnam V
Hilary Putnam
Reason, Truth and History, Cambridge/MA 1981
German Edition:
Vernunft, Wahrheit und Geschichte Frankfurt 1990

Putnam VI
Hilary Putnam
"Realism and Reason", Proceedings of the American Philosophical Association (1976) pp. 483-98
In
Truth and Meaning, Paul Horwich Aldershot 1994

Putnam VII
Hilary Putnam
"A Defense of Internal Realism" in: James Conant (ed.)Realism with a Human Face, Cambridge/MA 1990 pp. 30-43
In
Theories of Truth, Paul Horwich Aldershot 1994

SocPut I
Robert D. Putnam
Bowling Alone: The Collapse and Revival of American Community New York 2000

Marshall, Alfred Sraffa Kurz I 104
Alfred Marshall/SraffaVsMarshall/SraffaVsKeynes/Sraffa/Kurz: [While Sraffa criticized Marshall], Keynes and, with him, most Cambridge economists clung to Marshallian concepts, making use, in particular, of the Marshallian demand-and-supply apparatus. Seen from Sraffa's point of view, this meant that their analyses were flawed. A careful scrutiny would invariably bring the flaws into the open. As regards Keynes's contributions, Sraffa's criticism concerned especially the following: 1. The idea expressed in the Treatise(1) that the price level of consumption goods and that of investment goods can be considered as determined independently of one another, and the related idea that the price level of the latter is determined exclusively by the propensity of the public to "hoard" money.
2. The “marginal efficiency of capital” schedule in the General Theory, which carried over the concept of a given order of fertility of different qualities of land to the ordering of investment projects.
3. The view that the banking system can control the money supply and that therefore the quantity of money in the system can be considered exogenous.
4. The argument put forward by Keynes to substantiate his view that the liquidity preference of the public prevents the money rate of interest from falling to a level compatible with a volume of investment equal to full employment savings.
>Alfred Marshall, >Demand, >Supply.
Kurz I 105
While Keynes focused on the problem of money and output as a whole, Sraffa focused on the problem of value and distribution. >Value, >Distribution/Sraffa, >Distribution/Leontief.

1. Piero Sraffa: The Man and the Scholar, London: Routledge. Marcuzzo, C. (2002). “The Collaboration between J. M. Keynes and R. F. Kahn from the Treatise to the General Theory,” History of Political Economy, 34:2, 421-447.

Kurz, Heinz D. „Keynes, Sraffa, and the latter’s “secret skepticism“. In: Kurz, Heinz; Salvadori, Neri 2015. Revisiting Classical Economics: Studies in Long-Period Analysis (Routledge Studies in the History of Economics). London, UK: Routledge.

Sraffa I
Piero Sraffa
Production of Commodities by Means of Commodities: Prelude to a Critique of Economic Theory. Cambridge: Cambridge University Press 1960


Kurz I
Heinz D. Kurz
Neri Salvadori
Revisiting Classical Economics: Studies in Long-Period Analysis (Routledge Studies in the History of Economics). Routledge. London 2015
Money Aristotle Mause I 28
Money/Aristotle: Aristotle's view of the value of money is surprisingly modern: it is attributed not to the material or intrinsic value of money, but to the mere agreement of the money users, i.e. the general acceptance of a certain medium as a means of exchange. >Exchange, >Trade.


Höffe I 56
Money/Aristotle/Höffe: While for Plato money was suspect, no more than a necessary evil, Aristotle, in the context of ordering justice, presents the first theory of money ever written in Europe. He describes its nature and function with astonishing clarity: Exchange: By making highly different goods and services comparable, money enables a society based on the division of labour to carry out its various exchange processes.
Marx: "The genius of Aristotle", Marx still acknowledges, "shines precisely in the fact that he discovers a relationship of equality in the value expression of goods."
MarxVsAristotle: "Only the historical barrier of the society in which he lived prevents him from finding out what 'in truth' this relationship of equality consists of."(1)
Use value/Aristotle: Höffe: The fact that Aristotle, unlike Marx, oriented himself not towards human labour but towards use and need value can also be understood as a (perhaps even more modern) alternative.
>Value theory, >K. Marx.

1. K. Marx, Das Kapital, Book I, Chapter 1.3.


Mause I
Karsten Mause
Christian Müller
Klaus Schubert
Politik und Wirtschaft: Ein integratives Kompendium Wiesbaden 2018

Höffe I
Otfried Höffe
Geschichte des politischen Denkens München 2016
Morality Developmental Psychology Upton I 124
Morality/Developmental psychology/Upton: While Piaget distinguishes between heteronomous and autonomous morality (>Morality/Piaget), Kohlberg (1958)(1) speaks of three levels in the development of moral thinking: pre-conventional, conventional and post-conventional morality. >Morality/Kohlberg.
Post-conventional Morality/Kohlberg: Kohlberg (1958)(1) suggested that most adolescents reach level II [conventional morality] and most of us stay at this level of reasoning during adulthood. Only a few individuals reach the post-conventional level of reasoning; indeed, Kohlberg found stage 6 to be so rare that it has since been removed from the theory.
VsKohlberg: Evidence supports the view that children and adolescents progress through the stages Kohlberg suggested, even if they may not reach the level of post-conventional reasoning
(Flavell et al., 1993(2); Walker, 1989(3)). Cross-cultural studies also provide some evidence for the universality of Kohlberg’s first four stages (Snarey et al., 1985)(4). However, this theory is not without its critics and Kohlberg’s model has been accused of both cultural and gender biases.
>Cultural differences.
Cultural psychologyVsKohlberg: It has been suggested that Kohlberg’s theory is culturally biased because it emphasizes ideals such as individual rights and social justice, which are found mainly in Western cultures (Shweder, 1994)(5).
Miller and Bersoff (1992)(6) showed that Americans placed greater value on a justice orientation (stage 4) than Indians. In contrast, Indians placed a greater weight on interpersonal responsibilities, such as upholding one’s obligations to others and being responsive to other people’s needs (stage 3). In the same way, it has been noted that women are more likely to use stage 3 than stage 4 reasoning.
Gender studiesVsKohlberg: According to Gilligan (1982(7), 1996(8)), the ordering of the stages therefore reflects a gender bias. Placing abstract principles of justice (stage 4) above relationships and concern for others (stage 3) is based on a male norm and reflects the fact that most of Kohlberg’s research used male participants. Gilligan therefore argues that these orientations are indeed different, but that one is not necessarily better than the other.
However, there is some debate about the extent of the evidence to support Gilligan’s claims of gender differences in moral reasoning; a meta-analysis of the evidence by Jaffee and Hyde (2000)(9) found that gender differences in reasoning were small and usually better explained by the nature of the dilemma than by gender. The evidence now seems to suggest that care-based reasoning is used by both males and females to evaluate interpersonal dilemmas, while justice reasoning is applied to societal dilemmas.
>Sex differences, >Justice.

1. Kohlberg, L (1958) The development of modes of moral thinking and choice in the years 10 to
16. Unpublished doctoral thesis, University of Chicago.
2. Flavell, JH, Miller, PH and Miller, SA (1993) Cognitive Development(3rd edn). Englewood Cliffs,
NJ: Prentice Hall.
3. Walker, LJ (1989) A longitudinal study of moral reasoning. Child Development, 60: 157-66.
4. Snarey, JR, Reimer, J and Kohlberg, L (1985) The development of social-moral reasoning among kibbutz adolescents: a longitudinal cross-cultural study. Developmental Psychology, 20:3-17.
5. Shweder, RA and Levine, RA (eds)(1994) Culture Theory: Essays on mind, self and emotion.
Cambridge: Cambridge University Press.
6. Miller, JG and Bersoff, DM (1992) Culture and moral judgment: how are conflicts between justice and interpersonal responsibilities resolved? Journal of Personality and Social Psychology, 62(4): 541-54.
7. Gilligan, C (1982) In a Different Voice: Psychological theory and women’s development. Cambridge, MA: Harvard University Press.
8. Gilligan, C (1996) The centrality of relationships in psychological development: a puzzle, some evidence and a theory, in Noam, GG and Fischer, KW (eds) Development and Vulnerability in Close Relationships. Hillsdale, NJ: Lawrence Erlbaum.
9. Jaffee, S and Hyde, JS (2000) Gender differences in moral orientation: a meta-analysis. Psychological Bulletin, 126: 703-26.


Upton I
Penney Upton
Developmental Psychology 2011
Morality Gender Studies Upton I 124
Morality/Gender Studies/Upton: Gender studiesVsKohlberg: According to Gilligan (1982(1), 1996(2)), the ordering of the stages therefore reflects a gender bias. Placing abstract principles of justice (stage 4) above relationships and concern for others (stage 3) is based on a male norm and reflects the fact that most of Kohlberg’s research used male participants. Gilligan therefore argues that these orientations are indeed different, but that one is not necessarily better than the other. However, there is some debate about the extent of the evidence to support Gilligan’s claims of gender differences in moral reasoning; a meta-analysis of the evidence by Jaffee and Hyde (2000)(3) found that gender differences in reasoning were small and usually better explained by the nature of the dilemma than by gender. The evidence now seems to suggest that care-based reasoning is used by both males and females to evaluate interpersonal dilemmas, while justice reasoning is applied to societal dilemmas. >Morality/Kohlberg, >Morality/Cultural psychology.


1. Gilligan, C (1982) In a Different Voice: Psychological theory and women’s development. Cambridge, MA: Harvard University Press.
2. Gilligan, C (1996) The centrality of relationships in psychological development: a puzzle, some evidence and a theory, in Noam, GG and Fischer, KW (eds) Development and Vulnerability in Close Relationships. Hillsdale, NJ: Lawrence Erlbaum.
3. Jaffee, S and Hyde, JS (2000) Gender differences in moral orientation: a meta-analysis. Psychological Bulletin, 126: 703-26.


Upton I
Penney Upton
Developmental Psychology 2011
Order Order, philosophy: order is the division of a subject area by distinctions or the highlighting of certain differences as opposed to other differences. The resulting order can be one-dimensional or multi-dimensional, i.e. linear or spatial. Examples are family trees, lexicons, lists, alphabets. It may be that only an order makes certain characteristics visible, e.g. contour lines. Ordering spaces may be more than three-dimensional, e.g. in the attribution of temperatures to color-determined objects. See also conceptual space, hierarchies, distinctness, indistinguishability, stratification, identification, individuation, specification.

Order Saussure I ~ 31
Def Symbolic order/Saussure: in a symbolic order, meaning is settled only by the subject. >Meaning, >Speaker meaning, >Speaker intention, >Symbols.
The contrast would be a natural order of elements.
F. de Saussure
I Peter Prechtl Saussure zur Einführung Hamburg 1994 (Junius)
Personality Traits Developmental Psychology Corr I 192
Personality traits/developmental psychology/Donnellan/Robins: we emphasize that the potential neurobiological bases of the Big Five in no way precludes the possibility that personality traits are affected by life experiences and change over time. >Five-factor model, >Personality, >Agreeableness, >Openness, >Extraversion, >Neuroticism.
Corr I 193
How stable is personality? There is no simple answer to these types of questions because there are different ways of conceptualizing and measuring stability and change (e.g., Caspi and Shiner 2006(1); Roberts and Pomerantz 2004(2)). The broadest distinction is between homotypic and heterotypic stability (or continuity).
A. Homotypic stability refers to the stability of the exact same thoughts, feelings and behaviours across time.
B. Heterotypic stability refers to the stability of personality traits that are theorized to have different manifestations at different ages. Heterotypic stability can only be understood with reference to a theory that specifies how the same trait ‘looks’ (i.e., manifests itself) at different ages and it broadly refers to the degree of personality coherence across development.
What is the evidence for heterotypic continuity? Longitudinal studies covering long periods of the lifespan provide important evidence of personality coherence. For example, Caspi, Moffitt, Newman and Silva (1996)(3) found that children who were rated as being irritable and impulsive by clinical examiners at age three were more likely to be dependent on alcohol and to have been convicted of a violent crime by age twenty-one.
Corr I 193
The superficial manifestations of self-control are likely to be quite different in pre-schoolers and adolescents; however, the underlying psychological characteristic of being able to forgo immediate impulses to obtain desired long-term outcomes seems to have an appreciable degree of consistency across development. Homotypic stability concerns the evaluation of different kinds of change using the exact same measure of personality across time or across age groups. Four types of stability and change are typically examined: (a) absolute stability (i.e., mean-level stability), (b) differential stability (i.e., rank-order consistency), (c) structural stability, and (d) ipsative stability.
Corr I 194
b) Differential stability reflects the degree to which the relative ordering of individuals on a given trait is consistent over time. For example, a population could increase substantially on a trait but the rank ordering of individuals would be maintained if everyone increased by exactly the same amount. Conversely, the rank ordering of individuals could change substantially over time but without any aggregate increases or decreases (e.g., if the number of people who decreased offset the number of people who increased). c) Structural stability refers to similarity over time in patterns of co-variation among traits, or items on a personality scale. For example, one can use structural equation modelling techniques to test whether the intercorrelations among the Big Five domains are the same at the beginning versus the end of college (Robins, Fraley, Roberts and Trzesniewski 2001)(4). Likewise, investigations of structural stability often include the testing of measurement invariance (e.g., Allemand, Zimprich and Hertzog 2007)(5).
d) Ipsative stability refers to continuity in the patterning of personality characteristics within a person and how well the relative salience (or extremity) of these attributes is preserved over time. For example, a researcher might investigate the degree to which an individual’s Big Five profile is stable over time – if an individual’s cardinal (i.e., most characteristic) trait in adolescence is Openness to Experience,
Corr I 195
is this also likely to be true in adulthood? Examinations of these kinds of questions are fairly rare and often use methods that quantify the similarity of personality profiles such as within-person correlation coefficients (e.g., Ozer and Gjerde 1989)(6). >Five-Factor Model/Developmental psychology.
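Differential (rank-order) stability, as distinguished from absolute stability above, is typically quantified with a rank correlation between trait scores at two measurement occasions. The following sketch is purely illustrative: the trait scores are invented, the function names are my own, and a real analysis would use a statistics package.

```python
# Differential (rank-order) stability: correlate the *ranks* of trait
# scores at two time points. Identical rank orderings give 1.0 even if
# everyone's absolute level changed in the meantime.

def ranks(xs):
    """Assign 1-based ranks; ties receive the average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Fictitious Extraversion scores for five people at two ages; everyone
# changes a little, but the rank ordering is perfectly preserved:
t1 = [3.1, 4.5, 2.2, 3.8, 4.9]
t2 = [3.4, 4.2, 2.5, 4.0, 4.8]
print(round(spearman(t1, t2), 2))  # 1.0
```

Substantial absolute change is thus compatible with perfect differential stability, which is exactly the distinction drawn between (a) and (b) above.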

1. Caspi, A. and Shiner, R. L. 2006. Personality development, in W. Damon and R. Lerner (Series eds.) and N. Eisenberg (Vol. ed.), Handbook of child psychology, vol. III, Social, emotional, and personality development, 6th edn, pp. 300–65. Hoboken, NJ: Wiley
2. Roberts, B. W. and Pomerantz, E. M. 2004. On traits, situations, and their integration: a developmental perspective, Personality and Social Psychology Review 8: 402–16
3. Caspi, A., Moffitt, T. E., Newman, D. L. and Silva, P. A. 1996. Behavioural observations at age 3 years predict adult psychiatric disorders, Archives of General Psychiatry 53: 1033–9
4. Robins, R. W., Fraley, R. C., Roberts, B. W. and Trzesniewski, K. H. 2001. A longitudinal study of personality change in young adulthood, Journal of Personality 69: 617–40
5. Allemand, M., Zimprich, D. and Hertzog, C. 2007. Cross-sectional age differences and longitudinal age changes of personality in middle adulthood and old age, Journal of Personality 75: 323–58
6. Ozer, D. J. and Gjerde, P. F. 1989. Patterns of personality consistency and change from childhood through adolescence, Journal of Personality 57: 483–507

M. Brent Donnellan and Richard W. Robins, “The development of personality across the lifespan”, in: Corr, Ph. J. & Matthews, G. (eds.) 2009. The Cambridge Handbook of Personality Psychology. New York: Cambridge University Press


Corr I
Philip J. Corr
Gerald Matthews
The Cambridge Handbook of Personality Psychology New York 2009

Corr II
Philip J. Corr (Ed.)
Personality and Individual Differences - Revisiting the classical studies Singapore, Washington DC, Melbourne 2018
Phonemes Lyons I 27
Consonant shift/Rasmus Rask/Lyons: between Indo-European languages: e.g. f where Latin or Greek had p; likewise p instead of b, and th instead of t.
I 66
Sound/Language/Realization/Arbitrariness/Lyons: as long as the differences remain, nothing changes if a language were realized phonetically or graphically in a different way. N.B.: any word that is distinguished under the normal conventions of English will also be distinguished under the new conventions. The language itself is not affected by the change of substantial realisation.
>Distinctions, >Ordering, >Classification, >Word classes.
I 67
Phoneme/Sound/Writing/Language/Lyons: the phonic substance has priority. There are limits to the pronunciation and audibility of certain sound groups. >Terminology/Lyons.
I 102
Sound/Linguistics/Lyons: is ambiguous: a) sounds as physically distinct, without regard to which language they belong to (phonetic, phonetics).
>Phonetics.
b) functionally differentiating within a language. (functional meaning). This is about the purpose of communication. (phonology, phonological).
>Phonology, >Function/Lyons.
This also leads to the distinction between speech sound and phoneme.
Def Phonology/Linguistics/Lyons: concerns the functional side of sound differentiations (purpose of communication, sound differences within a language, not physically understood).
Def Phonetics/Linguistics/Lyons: here it concerns purely physically detectable or producible differences of sounds, independently of a language. Independent of possible communication.
Def Speech sound/linguistics/Lyons: is any phonetically (physically) unique sound unit. There are practically infinitely many different speech sounds.
I 103
There are "broad" and "narrow" transcriptions and intermediate stages. E.g. English has a brighter and a darker L-sound: the bright L before vowels, e.g. "leaf";
the dark L at the end of a word and before consonants, e.g. "field".
Def Phoneme/Linguistics/Lyons: is the sound, if it is used functionally (not purely physically) to distinguish between different words.
>Description levels.
I 104
Def Allophone/Linguistics/Lyons: phonetically distinguishable sounds as positional variants of the same phoneme. Sound: Unit of phonetic (physical) description. (phonetics).
Phoneme: Unit of the phonological ((s) meaning-differentiating) description. (phonology).
Phonetics: there are acoustic, auditory and articulatory phonetics.
I 120
Syntagmatic/Phoneme/Lyons: "horizontal" dimension.
I 121
Between phonemes it describes the combinability. This is the set of possible words that goes beyond the "real" words.
I 124
Phonemes/Distinction/Feature/Linguistics/Lyons: a) articulatory features: (labial, velar, dental, voiced, nasal) here it is a question of presence or absence (0, 1). b) Distinctive features: this is about the difference they make by distinguishing different words from each other. Not all distinguishable features lead to a distinction between words. ((s) Some words can be pronounced differently).
Correspondingly, there are "functional" and "non-functional" values.
I 126
Advantage: in this way we can simplify restrictions on the distribution of certain phoneme classes. For example, there are many English words that start with /sp/, /sk/ or /st/, but none that begin with /sb/, /sg/ or /sd/. Certainly this is no mere coincidence in the combinatorial properties of /p/, /k/ and /t/ on the one hand and /b/, /g/ and /d/ on the other. Here we do not have to describe six independent facts, but only one: "In the context of /s-/ the distinction voiced/voiceless is not functional". >Function/Lyons.
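The feature bookkeeping sketched above can be made concrete in a few lines; the phoneme inventory and feature labels below are simplified assumptions for illustration, not Lyons's own notation.

```python
# A toy feature table: each phoneme is a set of articulatory features
# (presence/absence, as the text describes). The inventory is a
# simplified illustration covering only six stops.
FEATURES = {
    "p": {"labial", "stop"},
    "b": {"labial", "stop", "voiced"},
    "t": {"dental", "stop"},
    "d": {"dental", "stop", "voiced"},
    "k": {"velar", "stop"},
    "g": {"velar", "stop", "voiced"},
}

def distinguishing(a, b):
    """Features present in one phoneme but not the other."""
    return FEATURES[a] ^ FEATURES[b]

# /p/ and /b/ differ only in voicing ...
print(distinguishing("p", "b"))  # {'voiced'}

# ... so the single statement "after /s-/ the voiced/voiceless
# distinction is not functional" covers /sp, st, sk/ vs. */sb, sd, sg/:
def allowed_after_s(phoneme):
    feats = FEATURES[phoneme]
    return "stop" in feats and "voiced" not in feats

print([p for p in FEATURES if allowed_after_s(p)])  # ['p', 't', 'k']
```

One feature-level statement replaces six separate word-level facts, which is the economy the passage is after.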

Ly II
John Lyons
Semantics Cambridge, MA 1977

Lyons I
John Lyons
Introduction to Theoretical Lingustics, Cambridge/MA 1968
German Edition:
Einführung in die moderne Linguistik München 1995

Planning Norvig Norvig I 156
Planning/artificial intelligence/Norvig/Russell: The unpredictability and partial observability of real environments were recognized early on in robotics projects that used planning techniques, including Shakey (Fikes et al., 1972)(1) and (Michie, 1974)(2). The problems received more attention after the publication of McDermott’s (1978a) influential article, Planning and Acting(3). >Belief states/Norvig.
Norvig I 366
Problems: [a simple] problem-solving agent (…) can find sequences of actions that result in a goal state. But it deals with atomic representations of states and thus needs good domain-specific heuristics to perform well. [A] hybrid propositional logical agent (…) can find plans without domain-specific heuristics because it uses domain-independent heuristics based on the logical structure of the problem. But it relies on ground (variable-free) propositional inference, which means that it may be swamped when there are many actions and states.
Norvig I 367
planning researchers have settled on a factored representation - one in which a state of the world is represented by a collection of variables. We use a language called PDDL, the Planning Domain Definition Language, that allows us to express all 4Tn^2 actions with one action schema. Each state is represented as a conjunction of fluents that are ground, functionless atoms. Database semantics is used: the closed-world assumption means that any fluents that are not mentioned are false, and the unique names assumption means that [x] 1 and [x] 2 are distinct. Actions are described by a set of action schemas that implicitly define the ACTIONS(s) and RESULT(s, a) functions needed to do a problem-solving search. >Frame Problem. Classical planning concentrates on problems where most actions leave most things unchanged.
Actions: A set of ground (variable-free) actions can be represented by a single action schema.
The schema is a lifted representation—it lifts the level of reasoning from propositional logic to a restricted subset of first-order logic.
Action schema: The schema consists of the action name, a list of all the variables used in the schema, a precondition and an effect.
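A minimal sketch of how such an action schema might be represented and grounded. The class, the fluent names, and the restriction to distinct arguments are illustrative assumptions of mine, not PDDL syntax; states are sets of ground fluents read under the closed-world assumption.

```python
from itertools import product

# A state is a frozenset of ground fluents; under the closed-world
# assumption any fluent not in the set is false.

class ActionSchema:
    def __init__(self, name, variables, precond, effect_add, effect_del):
        self.name = name
        self.variables = variables      # e.g. ["?x", "?a", "?b"]
        self.precond = precond          # fluent templates
        self.effect_add = effect_add
        self.effect_del = effect_del

    def ground(self, objects):
        """Substitute objects for variables, yielding ground actions.
        Requiring distinct arguments is a simplification here."""
        for combo in product(objects, repeat=len(self.variables)):
            if len(set(combo)) != len(combo):
                continue
            sub = dict(zip(self.variables, combo))
            inst = lambda t: tuple(sub.get(x, x) for x in t)
            yield (self.name + str(combo),
                   {inst(p) for p in self.precond},
                   {inst(a) for a in self.effect_add},
                   {inst(d) for d in self.effect_del})

def result(state, action):
    """RESULT(s, a): delete-list removed, add-list added."""
    _, precond, add, delete = action
    if not precond <= state:
        return None                      # action not applicable
    return (state - delete) | add

# One lifted schema, Move(?x, ?a, ?b), stands for all its ground moves:
move = ActionSchema("Move", ["?x", "?a", "?b"],
                    precond=[("At", "?x", "?a")],
                    effect_add=[("At", "?x", "?b")],
                    effect_del=[("At", "?x", "?a")])

state = frozenset({("At", "crate", "depot")})
for a in move.ground(["crate", "depot", "dock"]):
    s2 = result(state, a)
    if s2 is not None:
        print(sorted(s2))  # [('At', 'crate', 'dock')]
```

Grounding the single Move schema over the available objects yields all of its variable-free instances at once, which is the point of the lifted representation.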
Norvig I 367
Forward/backward (progression/regression) state-space search: Cf. >Forward chaining, >backward chaining.
Norvig I 376
Heuristics for planning: a heuristic function h(s) estimates the distance from a state s to the goal and that if we can derive an admissible heuristic for this distance - one that does not overestimate - then we can use A∗ search to find optimal solutions. Representation: Planning uses a factored representation for states and action schemas. That makes it possible to define good domain-independent heuristics and for programs to automatically apply a good domain-independent heuristic for a given problem. Think of a search problem as a graph where the nodes are states and the edges are actions. The problem is to find a path connecting the initial state to a goal state. There are two ways we can relax this problem to make it easier: by adding more edges to the graph, making it strictly easier to find a path, or by grouping multiple nodes together, forming an abstraction of the state space that has fewer states, and thus is easier to search.
Norvig I 377
State abstraction: Many planning problems have 10^100 states or more, and relaxing the actions does nothing to reduce the number of states. Therefore, we now look at relaxations that decrease the number of states by forming a state abstraction - a many-to-one mapping from states in the ground representation of the problem to the abstract representation. The easiest form of state abstraction is to ignore some fluents.
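The simplest abstraction mentioned here, ignoring some fluents, can be sketched as a many-to-one mapping over states; the fluent and predicate names are invented for illustration.

```python
# State abstraction by ignoring fluents: a many-to-one mapping from
# ground states to abstract states that simply drops some fluents.

def abstract(state, ignored_predicates):
    """Drop all fluents whose predicate is in `ignored_predicates`."""
    return frozenset(f for f in state if f[0] not in ignored_predicates)

s1 = frozenset({("At", "truck", "cityA"), ("Fuel", "truck", "high")})
s2 = frozenset({("At", "truck", "cityA"), ("Fuel", "truck", "low")})

# Ignoring the Fuel fluent maps both ground states to one abstract state:
a1 = abstract(s1, {"Fuel"})
a2 = abstract(s2, {"Fuel"})
print(a1 == a2)  # True
```

Every pair of ground states that differ only in ignored fluents collapses to the same abstract state, shrinking the space that has to be searched.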
Norvig I 378
Heuristics: A key idea in defining heuristics is decomposition: dividing a problem into parts, solving each part independently, and then combining the parts. The subgoal independence assumption is that the cost of solving a conjunction of subgoals is approximated by the sum of the costs of solving each subgoal independently.
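The subgoal independence assumption can be sketched as follows; the goal fluents and their costs are invented, standing in for costs a planner would obtain by solving each subgoal in a relaxed problem.

```python
# Decomposition heuristic under the subgoal independence assumption:
# estimate the cost of a conjunctive goal as the sum of the costs of
# solving each subgoal on its own.

def h_sum(subgoal_costs):
    """Sum of independent subgoal costs. Can overestimate (and thus be
    inadmissible) when subplans share actions."""
    return sum(subgoal_costs.values())

def h_max(subgoal_costs):
    """Max of subgoal costs: admissible (given admissible per-subgoal
    costs), but usually less informative than the sum."""
    return max(subgoal_costs.values())

costs = {("At", "crate1", "dock"): 3,
         ("At", "crate2", "dock"): 2}
print(h_sum(costs), h_max(costs))  # 5 3
```

The choice between the two is the usual trade-off: the sum is more informative but may overestimate when subplans interact, while the max never overestimates.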
Norvig I 390
Planning as constraint satisfaction: >Constraint satisfaction problems.
Norvig I 393
History of AI planning: AI planning arose from investigations into state-space search, theorem proving, and control theory and from the practical needs of robotics, scheduling, and other domains. STRIPS (Fikes and Nilsson, 1971)(4), the first major planning system, illustrates the interaction of these influences.
General problem solver/GPS: the General Problem Solver (Newell and Simon, 1961)(5), [was] a state-space search system that used means–ends analysis. The control structure of STRIPS was modeled on that of GPS.
Norvig I 394
Language: The Planning Domain Definition Language, or PDDL (Ghallab et al., 1998)(6), was introduced as a computer-parsable, standardized syntax for representing planning problems and has been used as the standard language for the International Planning Competition since 1998. There have been several extensions; the most recent version, PDDL 3.0, includes plan constraints and preferences (Gerevini and Long, 2005)(7). Subproblems: Problem decomposition was achieved by computing a subplan for each subgoal and then stringing the subplans together in some order. This approach, called linear planning by Sacerdoti (1975)(8), was soon discovered to be incomplete. It cannot solve some very simple problems (…). A complete planner must allow for interleaving of actions from different subplans within a single sequence. The notion of serializable subgoals (Korf, 1987)(9) corresponds exactly to the set of problems for which noninterleaved planners are complete. One solution to the interleaving problem was goal-regression planning, a technique in which steps in a totally ordered plan are reordered so as to avoid conflict between subgoals. This was introduced by Waldinger (1975)(10) and also used by Warren’s (1974)(11) WARPLAN.
Partial ordering: The ideas underlying partial-order planning include the detection of conflicts (Tate, 1975a)(12) and the protection of achieved conditions from interference (Sussman, 1975)(13). The construction of partially ordered plans (then called task networks) was pioneered by the NOAH planner (Sacerdoti, 1975(8), 1977(14)) and by Tate’s (1975b(15), 1977(16)) NONLIN system. Partial-order planning dominated the next 20 years of research (…).
State-space planning: The resurgence of interest in state-space planning was pioneered by Drew McDermott’s UNPOP program (1996)(17), which was the first to suggest the ignore-delete-list heuristic (…). Bonet and Geffner’s Heuristic Search Planner (HSP) and its later derivatives (Bonet and Geffner, 1999(18); Haslum et al., 2005(19); Haslum, 2006(20)) were the first to make
Norvig I 395
state-space search practical for large planning problems. The most successful state-space searcher to date is FF (Hoffmann, 2001(21); Hoffmann and Nebel, 2001(22); Hoffmann, 2005(23)), winner of the AIPS 2000 planning competition. LAMA (Richter and Westphal, 2008)(24), a planner based on FASTDOWNWARD with improved heuristics, won the 2008 competition. >Environment/world/planning/Norvig. See also McDermott (1985)(25).

1. Fikes, R. E., Hart, P. E., and Nilsson, N. J. (1972). Learning and executing generalized robot plans. AIJ,3(4), 251-288
2. Michie, D. (1974). Machine intelligence at Edinburgh. In On Intelligence, pp. 143–155. Edinburgh
University Press.
3. McDermott, D. (1978a). Planning and acting. Cognitive Science, 2(2), 71-109.
4. Fikes, R. E. and Nilsson, N. J. (1993). STRIPS, a retrospective. AIJ, 59(1–2), 227-232.
5. Newell, A. and Simon, H. A. (1961). GPS, a program that simulates human thought. In Billing, H.
(Ed.), Lernende Automaten, pp. 109-124. R. Oldenbourg.
6. Ghallab, M., Howe, A., Knoblock, C. A., and McDermott, D. (1998). PDDL—The planning domain definition language. Tech. rep. DCS TR-1165, Yale Center for Computational Vision and Control.
7. Gerevini, A. and Long, D. (2005). Plan constraints and preferences in PDDL3. Tech. rep., Dept. of Electronics for Automation, University of Brescia, Italy
8. Sacerdoti, E. D. (1975). The nonlinear nature of plans. In IJCAI-75, pp. 206-214.
9. Korf, R. E. (1987). Planning as search: A quantitative approach. AIJ, 33(1), 65-88
10. Waldinger, R. (1975). Achieving several goals simultaneously. In Elcock, E. W. and Michie, D.
(Eds.), Machine Intelligence 8, pp. 94-138. Ellis Horwood
11. Warren, D. H. D. (1974). WARPLAN: A System for Generating Plans. Department of Computational
Logic Memo 76, University of Edinburgh
12. Tate, A. (1975a). Interacting goals and their use. In IJCAI-75, pp. 215-218.
13. Sussman, G. J. (1975). A Computer Model of Skill Acquisition. Elsevier/North-Holland.
14. Sacerdoti, E. D. (1977). A Structure for Plans and Behavior. Elsevier/North-Holland.
15. Tate, A. (1975b). Using Goal Structure to Direct Search in a Problem Solver. Ph.D. thesis, University of Edinburgh.
16. Tate, A. (1977). Generating project networks. In IJCAI-77, pp. 888-893.
17. McDermott, D. (1996). A heuristic estimator for means-ends analysis in planning. In ICAPS-96, pp.
142-149.
18. Bonet, B. and Geffner, H. (1999). Planning as heuristic search: New results. In ECP-99, pp. 360-372.
19. Haslum, P., Bonet, B., and Geffner, H. (2005). New admissible heuristics for domain-independent planning. In AAAI-05.
20. Haslum, P. (2006). Improving heuristics through relaxed search – An analysis of TP4 and HSP*a in the
2004 planning competition. JAIR, 25, 233-267.
21. Hoffmann, J. (2001). FF: The fast-forward planning system. AIMag, 22(3), 57-62.
22. Hoffmann, J. and Nebel, B. (2001). The FF planning system: Fast plan generation through heuristic search. JAIR, 14, 253-302.
23. Hoffmann, J. (2005). Where “ignoring delete lists” works: Local search topology in planning benchmarks. JAIR, 24, 685-758
24. Richter, S. and Westphal, M. (2008). The LAMA planner. In Proc. International Planning Competition at ICAPS.
25. McDermott, D. (1985). Reasoning about plans. In Hobbs, J. and Moore, R. (Eds.), Formal theories of the commonsense world. Intellect Books.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010

Planning Russell Norvig I 156
Planning/artificial intelligence/Norvig/Russell: The unpredictability and partial observability of real environments were recognized early on in robotics projects that used planning techniques, including Shakey (Fikes et al., 1972)(1) and (Michie, 1974)(2). The problems received more attention after the publication of McDermott’s (1978a) influential article, Planning and Acting(3). >Belief states/Norvig.
Norvig I 366
Problems: [a simple] problem-solving agent (…) can find sequences of actions that result in a goal state. But it deals with atomic representations of states and thus needs good domain-specific heuristics to perform well. [A] hybrid propositional logical agent (…) can find plans without domain-specific heuristics because it uses domain-independent heuristics based on the logical structure of the problem. But it relies on ground (variable-free) propositional inference, which means that it may be swamped when there are many actions and states.
Norvig I 367
planning researchers have settled on a factored representation - one in which a state of the world is represented by a collection of variables. We use a language called PDDL, the Planning Domain Definition Language, that allows us to express all 4Tn^2 actions with one action schema. Each state is represented as a conjunction of fluents that are ground, functionless atoms. Database semantics is used: the closed-world assumption means that any fluents that are not mentioned are false, and the unique names assumption means that [x] 1 and [x] 2 are distinct. Actions are described by a set of action schemas that implicitly define the ACTIONS(s) and RESULT(s, a) functions needed to do a problem-solving search. >Frame Problem.
Classical planning concentrates on problems where most actions leave most things unchanged.
Actions: A set of ground (variable-free) actions can be represented by a single action schema.
The schema is a lifted representation—it lifts the level of reasoning from propositional logic to a restricted subset of first-order logic.
Action schema: The schema consists of the action name, a list of all the variables used in the schema, a precondition and an effect.
Norvig I 367
Forward/backward (progression/regression) state-space search Cf. >Forward chaining, >backward chaining.
Norvig I 376
Heuristics for planning: a heuristic function h(s) estimates the distance from a state s to the goal and that if we can derive an admissible heuristic for this distance - one that does not overestimate - then we can use A∗ search to find optimal solutions. Representation: Planning uses a factored representation for states and action schemas. That makes it possible to define good domain-independent heuristics and for programs to automatically apply a good domain-independent heuristic for a given problem. Think of a search problem as a graph where the nodes are states and the edges are actions. The problem is to find a path connecting the initial state to a goal state. There are two ways we can relax this problem to make it easier: by adding more edges to the graph, making it strictly easier to find a path, or by grouping multiple nodes together, forming an abstraction of the state space that has fewer states, and thus is easier to search.
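The "add more edges" relaxation above can be illustrated on a toy graph: a distance computed on the relaxed graph can never exceed the true distance, so it is an admissible heuristic. The graph and the added edge are invented for the sketch:

```python
from collections import deque

def bfs_dist(graph, start, goal):
    """Shortest path length (unit edge costs) by breadth-first search."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == goal:
            return d
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None

# Toy state graph and a relaxed version with an extra edge A -> C.
graph   = {"A": ["B"], "B": ["C"], "C": ["D"]}
relaxed = {"A": ["B", "C"], "B": ["C"], "C": ["D"]}

h = bfs_dist(relaxed, "A", "D")        # heuristic from the relaxed problem
true_cost = bfs_dist(graph, "A", "D")  # true distance in the original
assert h <= true_cost                  # relaxation never overestimates
```

Adding edges only shortens paths, which is why a relaxed-problem heuristic is safe to use with A* search.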
Norvig I 377
State abstraction: Many planning problems have 10^100 states or more, and relaxing the actions does nothing to reduce the number of states. Therefore, we now look at relaxations that decrease the number of states by forming a state abstraction - a many-to-one mapping from states in the ground representation of the problem to the abstract representation. The easiest form of state abstraction is to ignore some fluents.
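The easiest abstraction mentioned above, ignoring some fluents, can be sketched directly; the fluent names are illustrative:

```python
def abstract(state, ignored):
    """Many-to-one mapping: drop fluents whose predicate is ignored."""
    return frozenset(f for f in state if f[0] not in ignored)

s1 = frozenset({("At", "P1", "SFO"), ("Fuel", "P1", "high")})
s2 = frozenset({("At", "P1", "SFO"), ("Fuel", "P1", "low")})

# Ignoring the Fuel fluent maps both concrete states to one abstract state,
# shrinking the state space that must be searched.
assert abstract(s1, {"Fuel"}) == abstract(s2, {"Fuel"})
```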
Norvig I 378
Heuristics: A key idea in defining heuristics is decomposition: dividing a problem into parts, solving each part independently, and then combining the parts. The subgoal independence assumption is that the cost of solving a conjunction of subgoals is approximated by the sum of the costs of solving each subgoal independently.
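The subgoal independence assumption can be written as a heuristic that sums independently estimated subgoal costs; the goal names and cost figures below are invented for the sketch:

```python
# Hypothetical per-subgoal costs, each estimated by solving that subgoal alone.
independent_cost = {"HaveCake": 2, "EatenCake": 3}

def h_decompose(subgoals):
    """Subgoal independence: cost of the conjunction ~ sum of the parts."""
    return sum(independent_cost[g] for g in subgoals)

assert h_decompose(["HaveCake", "EatenCake"]) == 5
```

When subplans share actions this sum can overestimate, which is why the text calls it an approximation rather than an admissible bound.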
Norvig I 390
Planning as constraint satisfaction >Constraint satisfaction problems.
Norvig I 393
History of AI planning: AI planning arose from investigations into state-space search, theorem proving, and control theory and from the practical needs of robotics, scheduling, and other domains. STRIPS (Fikes and Nilsson, 1971)(4), the first major planning system, illustrates the interaction of these influences.
General problem solver/GPS: the General Problem Solver (Newell and Simon, 1961)(5), [was] a state-space search system that used means–ends analysis. The control structure of STRIPS was modeled on that of GPS.
Norvig I 394
Language: The Planning Domain Definition Language, or PDDL (Ghallab et al., 1998)(6), was introduced as a computer-parsable, standardized syntax for representing planning problems and has been used as the standard language for the International Planning Competition since 1998. There have been several extensions; the most recent version, PDDL 3.0, includes plan constraints and preferences (Gerevini and Long, 2005)(7). Subproblems: Problem decomposition was achieved by computing a subplan for each subgoal and then stringing the subplans together in some order. This approach, called linear planning by Sacerdoti (1975)(8), was soon discovered to be incomplete. It cannot solve some very simple problems (…). A complete planner must allow for interleaving of actions from different subplans within a single sequence. The notion of serializable subgoals (Korf, 1987)(9) corresponds exactly to the set of problems for which noninterleaved planners are complete. One solution to the interleaving problem was goal-regression planning, a technique in which steps in a totally ordered plan are reordered so as to avoid conflict between subgoals. This was introduced by Waldinger (1975)(10) and also used by Warren’s (1974)(11) WARPLAN.
Partial ordering: The ideas underlying partial-order planning include the detection of conflicts (Tate, 1975a)(12) and the protection of achieved conditions from interference (Sussman, 1975)(13). The construction of partially ordered plans (then called task networks) was pioneered by the NOAH planner (Sacerdoti, 1975(8), 1977(14)) and by Tate’s (1975b(15), 1977(16)) NONLIN system. Partial-order planning dominated the next 20 years of research (…).
State-space planning: The resurgence of interest in state-space planning was pioneered by Drew McDermott’s UNPOP program (1996)(17), which was the first to suggest the ignore-delete-list heuristic (…). Bonet and Geffner’s Heuristic Search Planner (HSP) and its later derivatives (Bonet and Geffner, 1999(18); Haslum et al., 2005(19); Haslum, 2006(20)) were the first to make
Norvig I 395
state-space search practical for large planning problems. The most successful state-space searcher to date is FF (Hoffmann, 2001(21); Hoffmann and Nebel, 2001(22); Hoffmann, 2005(23)), winner of the AIPS 2000 planning competition. LAMA (Richter and Westphal, 2008)(24), a planner based on FASTDOWNWARD with improved heuristics, won the 2008 competition. >Environment/world/planning/Norvig. See also McDermott (1985)(25).
1. Fikes, R. E., Hart, P. E., and Nilsson, N. J. (1972). Learning and executing generalized robot plans. AIJ,3(4), 251-288
2. Michie, D. (1974). Machine intelligence at Edinburgh. In On Intelligence, pp. 143–155. Edinburgh
University Press.
3. McDermott, D. (1978a). Planning and acting. Cognitive Science, 2(2), 71-109.
4. Fikes, R. E. and Nilsson, N. J. (1993). STRIPS, a retrospective. AIJ, 59(1–2), 227-232.
5. Newell, A. and Simon, H. A. (1961). GPS, a program that simulates human thought. In Billing, H.
(Ed.), Lernende Automaten, pp. 109-124. R. Oldenbourg.
6. Ghallab, M., Howe, A., Knoblock, C. A., and Mc-Dermott, D. (1998). PDDL—The planning domain definition language. Tech. rep. DCS TR-1165, Yale Center for Computational Vision and Control
7. Gerevini, A. and Long, D. (2005). Plan constraints and preferences in PDDL3. Tech. rep., Dept. of Electronics for Automation, University of Brescia, Italy
8. Sacerdoti, E. D. (1975). The nonlinear nature of plans. In IJCAI-75, pp. 206-214.
9. Korf, R. E. (1987). Planning as search: A quantitative approach. AIJ, 33(1), 65-88
10. Waldinger, R. (1975). Achieving several goals simultaneously. In Elcock, E. W. and Michie, D.
(Eds.), Machine Intelligence 8, pp. 94-138. Ellis Horwood
11. Warren, D. H. D. (1974). WARPLAN: A System for Generating Plans. Department of Computational
Logic Memo 76, University of Edinburgh
12. Tate, A. (1975a). Interacting goals and their use. In IJCAI-75, pp. 215-218.
13. Sussman, G. J. (1975). A Computer Model of Skill Acquisition. Elsevier/North-Holland.
14. Sacerdoti, E. D. (1977). A Structure for Plans and Behavior. Elsevier/North-Holland.
15. Tate, A. (1975b). Using Goal Structure to Direct Search in a Problem Solver. Ph.D. thesis, University of Edinburgh.
16. Tate, A. (1977). Generating project networks. In IJCAI-77, pp. 888-893.
17. McDermott, D. (1996). A heuristic estimator for means-ends analysis in planning. In ICAPS-96, pp.
142-149.
18. Bonet, B. and Geffner, H. (1999). Planning as heuristic search: New results. In ECP-99, pp. 360-372.
19. Haslum, P., Bonet, B., and Geffner, H. (2005). New admissible heuristics for domain-independent planning. In AAAI-05.
20. Haslum, P. (2006). Improving heuristics through relaxed search – An analysis of TP4 and HSP*a in the
2004 planning competition. JAIR, 25, 233-267.
21. Hoffmann, J. (2001). FF: The fast-forward planning system. AIMag, 22(3), 57-62.
22. Hoffmann, J. and Nebel, B. (2001). The FF planning system: Fast plan generation through heuristic search. JAIR, 14, 253-302.
23. Hoffmann, J. (2005). Where “ignoring delete lists” works: Local search topology in planning benchmarks. JAIR, 24, 685-758
24. Richter, S. and Westphal, M. (2008). The LAMA planner. In Proc. International Planning Competition at ICAPS.
25. McDermott, D. (1985). Reasoning about plans. In Hobbs, J. and Moore, R. (Eds.), Formal theories of the commonsense world. Intellect Books.

Russell I
B. Russell/A.N. Whitehead
Principia Mathematica Frankfurt 1986

Russell II
B. Russell
The ABC of Relativity, London 1958, 1969
German Edition:
Das ABC der Relativitätstheorie Frankfurt 1989

Russell IV
B. Russell
The Problems of Philosophy, Oxford 1912
German Edition:
Probleme der Philosophie Frankfurt 1967

Russell VI
B. Russell
"The Philosophy of Logical Atomism", in: B. Russell, Logic and Knowledge, ed. R. Ch. Marsh, London 1956, pp. 200-202
German Edition:
Die Philosophie des logischen Atomismus
In
Eigennamen, U. Wolf (Hg) Frankfurt 1993

Russell VII
B. Russell
On the Nature of Truth and Falsehood, in: B. Russell, The Problems of Philosophy, Oxford 1912 - Dt. "Wahrheit und Falschheit"
In
Wahrheitstheorien, G. Skirbekk (Hg) Frankfurt 1996


Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010
Politics Aristotle Bubner I 176
Politics/Aristotle: as long as man lives together with others, he cannot devote himself to idle contemplation, but must choose the "second best way" of the political actor. >Community/Aristotle.
I 179
Practice/Aristotle: must provide an ordering function within contingency. The objective is never given, but must be actively introduced into the practical situation.
The possibilities for action must be structured.
>Practise/Aristotle.
Def Prohairesis/Aristotle: the selection of the most appropriate means.
Politics/Aristotle: Politics only means realizing on a large scale what every concrete process of action already performs on a small scale.
I 188
Politics/Zoon Politikon/Aristotle: this property is attributed to man because of his speech! >Language/Aristotle.
Political institutions are to be understood from an ethics point of view.
Politics is not simply an order of rule (VsPlato) with a good ruler, as in Hobbes or Max Weber.
>Philosopher King/Plato, >Politics/Weber, >Government/Weber, >Politics/Hobbes, >Order/Hobbes, >Social contract/Hobbes.
The ruler is not a large-scale housekeeper.
A common goal is to be investigated.
Politics/Aristotle: Starting point: village, which does not only exist due to everyday life needs.
      In the polis, the character of "self-sufficiency" replaces the elementary natural conditionality.
Objective: Eudaimonia, the "good life", in this highest of all objectives, the practice structure returns, as it were, reflexively to itself.
Problem: a contradictory relation to the natural: on the one hand, the essence of practice as a goal has entered politically into its own telos, and this legitimates talk of man as a political being by nature.
On the other hand, the natural conditions have been overcome thanks to a self-sufficient practice.
Nothing but practice itself, no nature defines the good. This self-determination means freedom.


Gaus I 314
Politics/literature/Aristotle/Keyt/Miller: (After 100 years Newman, 1887-1902(1), is still the most important work on Aristotle's Politics. Two recent commentaries are the unfinished series
Schütrumpf, 1991a(2); 1991 b(3); Schütrumpf and Gehrke, 1996(4); and the four volumes of the Clarendon Aristotle Series: Saunders, 1995(5); Robinson, 1995(6); Kraut, 1997a(7); and Keyt, 1999(8).
Miller, 1995(9), and Kraut, 2002(10), are major studies of Aristotle's political philosophy. Lord, 1982(11), and Curren, 2000(12), are studies of Aristotle's views on education.
Six collections of essays should be noted: Barnes, Schofield and Sorabji, 1977(13); Patzig, 1990(14); Keyt and Miller, 1991(15); Lord, O'Connor and Bodéüs, 1991(16); Aubenque, 1993(17); and Höffe, 2001(18).
Neo-Aristotelianism: Galston, 1980(19), is an example of neo-Aristotelianism.)

1. Newman, W. L. (1887-1902) The Politics of Aristotle, 4 vols. Oxford: Clarendon.
2. Schütrumpf, Eckart (1991a) Aristoteles Politik, Buch I. Berlin: Akademie.
3. Schütrumpf, Eckart (1991b) Aristoteles Politik, Bücher Il und Ill. Berlin: Akademie.
4. Schütrumpf, Eckart and Hans-Joachim Gehrke (1996) Aristoteles Politik, Bücher IV—VI. Berlin: Akademie.
5. Saunders, Trevor J. (1995) Aristotle Politics Books I and II. Oxford: Clarendon.
6. Robinson, Richard (1995) Aristotle Politics Books III and IV with a Supplementary Essay by David Keyt (1st edn 1962). Oxford: Clarendon.
7. Kraut, Richard (1997a) Aristotle Politics Books VII and VIII. Oxford: Clarendon.
8. Keyt, David (1999) Aristotle Politics Books V and VI. Oxford: Clarendon.
9. Miller, Fred D. (1995) Nature, Justice, and Rights in Aristotle's Politics. Oxford: Clarendon.
10. Kraut, Richard (2002) Aristotle: Political Philosophy. Oxford: Oxford University Press.
11. Lord, Carnes (1982) Education and Culture in the Political Thought of Aristotle. Ithaca, NY: Cornell University Press.
12. Curren, Randall R. (2000) Aristotle on the Necessity of Public Education. Lanham, MD: Rowman and Littlefield.
13. Barnes, Jonathan, Malcolm Schofield and Richard Sorabji, eds (1977) Articles on Aristotle. Vol. Il, Ethics and Politics. London: Duckworth.
14. Patzig, Günther, ed. (1990) Aristoteles ' 'Politik ': Akten des XI Symposium Aristotelicum. Göttingen: Vandenhoeck und Ruprecht.
15. Keyt, David and Fred D. Miller, eds (1991) A Companion to Aristotle's Politics. Oxford: Blackwell.
16. Lord, Carnes, David K. O'Connor and Richard Bodéüs, eds (1991) Essays on the Foundations of Aristotelian Political Science. Berkeley, CA: University of California Press.
17. Aubenque, Pierre, ed. (1993) Aristote Politique: Études sur la Politique d 'Aristote. Paris: Presses Universitaires de France.
18. Höffe, Otfried, ed. (2001) Aristoteles Politik. Berlin: Akademie.
19. Galston, William A. (1980) Justice and the Human Good. Chicago: University of Chicago Press.

Keyt, David and Miller, Fred D. jr. 2004. „Ancient Greek Political Thought“. In: Gaus, Gerald F. & Kukathas, Chandran 2004. Handbook of Political Theory. SAGE Publications


Bu I
R. Bubner
Antike Themen und ihre moderne Verwandlung Frankfurt 1992

Gaus I
Gerald F. Gaus
Chandran Kukathas
Handbook of Political Theory London 2004
Power Freeden Gaus I 8
Power/Freeden: On the account offered here, although power and control remain central features of ideologies, they are far less insidious. Rather, they reflect the core of the political: the necessity of ordering, deciding and regulating the combined affairs of groups of people, and through that of enabling individuals to have a say in their own fortunes. >Politics/Freeden.
Freeden, M. 2004. „Ideology, Political Theory and Political Philosophy“. In: Gaus, Gerald F. 2004. Handbook of Political Theory. SAGE Publications.


Gaus I
Gerald F. Gaus
Chandran Kukathas
Handbook of Political Theory London 2004
Power Morris Gaus I 201
Power/law/hierarchy/order/Morris: What does it mean to say that law is ultimately backed by sanctions or ultimately a matter of force? The term 'ultimate' is one of the most opaque in philosophy and social theory and should be used with care. Even if we were able to find in every legal system a hierarchical ordering of authorities, it is very unlikely that powers generally will be so ordered. That is, it is very unlikely that we can order power relations in this way, so that for any pair of powers one is greater than the other and the set of all powers is an ordering (i.e. transitive). If this is right, the concept of an ultimate power is ill-defined, and it is therefore unclear and likely misleading to talk of 'ultimate' powers, for there may never be one power so placed that it is 'ultimate' or 'final' (see Morris, 1998(1): ch. 8). >Social order/Morris.

1. Morris, Christopher W. (1998) An Essay on the Modern State. Cambridge: Cambridge University Press.

Morris, Christopher W. 2004. „The Modern State“. In: Gaus, Gerald F. & Kukathas, Chandran 2004. Handbook of Political Theory. SAGE Publications


Gaus I
Gerald F. Gaus
Chandran Kukathas
Handbook of Political Theory London 2004
Privacy Protection Zittrain I 184
Software/privacy protection/Zittrain: ((s) This is about maintaining privacy as software becomes a service): The use of our PCs is shrinking to that of mere workstations, with private data stored remotely in the hands of third parties.
I 185
The latest version of Google Desktop is a PC application that offers a “search across computers” feature. It is advertised as allowing users with multiple computers to use one computer to find documents that are stored on another. (1) The application accomplishes this by sending an index of the contents of users’ documents to Google itself. (2) ((s) Written in 2008). The movement of data from the PC means that warrants
I 186
served upon personal computers and their hard drives will yield less and less information as the data migrates onto the Web, driving law enforcement to the networked third parties now hosting that information. When our diaries, e-mail, and documents are no longer stored at home but instead are business records held by a dot-com, nearly all formerly transient communication ends up permanently and accessibly stored in the hands of third parties, and subject to comparatively weak statutory and constitutional protections against surveillance. (3) A warrant is generally required for the government to access data on one’s own PC, and warrants require law enforcement to show probable cause that evidence of a crime will be yielded by the search. (4) In other words, the government must surmount a higher hurdle to search one’s PC than to eavesdrop on one’s data communications, and it has the fewest barriers when obtaining data stored elsewhere. (5)

1. See Google, Google Desktop—Features, http://desktop.google.com/features.html# searchremote (last visited May 15, 2007).
2. Matthew Fordahl, How Google’s Desktop Search Works, MSNBC.com, Oct. 14, 2004, http://www.msnbc.msn.com/id/6251128/.
3. See, e.g., Declan McCullagh, Police Blotter: Judge Orders Gmail Disclosure, CNET NEWS.COM, Mar. 17, 2006, http://news.com.com/Police+blotter+Judge+orders+Gmail+disclosure/2100-1047_3-6050295.html (reporting on a hearing that contested a court subpoena ordering the disclosure of all e-mail messages, including deleted ones, from a Gmail account).
4. Orin Kerr, Search and Seizure: Past, Present, and Future, OXFORD ENCYCLOPEDIA OF LEGAL HISTORY (2006).
5. Cf. Orin S. Kerr, Searches and Seizures in a Digital World, 119 HARV. L. REV. 531, 557 (2005) (“Under Arizona v. Hicks (480 U.S. 321 (1987)), merely copying information does not seize anything.” (footnote omitted)).

Zittrain I
Jonathan Zittrain
The Future of the Internet--And How to Stop It New Haven 2009

Rate of Return Economic Theories Harcourt I 159
Rate of return/Economic theories/Harcourt: Pasinetti(1) distinguishes two meanings of Fisher's(2) 'rate of return on sacrifice' or 'rate of return over cost'. a) The first is the rate of interest at which two techniques (options, projects, going concerns, economic systems) are equi-profitable, i.e. that rate of interest which when used as the discount factor equalises the present values of two alternative streams of expected receipts (Fisherian incomes) and expenditures - call it RF1.
>Irving Fisher.
b) The second relates to the ratio of the expected increase in perpetuity in the production of a commodity to the withdrawal from consumption or other uses of the present annual flow of the commodity, the withdrawal or sacrifice being needed to make the investment that will make the
increase in production possible. (RF2)
>Rate of return/Fisher, >Rate of return/Pasinetti.
Harcourt I 162
Pasinetti compares, one with another, stationary states in which commodities are produced by commodities and labour in given technical proportions in any one technique and its activities. The relative prices of commodities and of one or other of the factor prices in this system are indeterminate until either r or w is given exogenously. Can either of Fisher's concepts supply the missing link and close the system? When we come to RF2, which essentially is to tell us whether or not to go over from one system to another, the extra outputs which are to be gained and the capital stocks with which they are to be associated (and in which, in general, there will be more of some commodities and less of others, the latter becoming redundant), have to be valued at a set of prices in order that RF2 may be computed.
Harcourt I 163
So, in general, RF2 is not independent of r and the accompanying set of relative prices. If we arbitrarily choose a value of r we may calculate RF2 and solve the problem of the choice of technique by seeing whether RF2 ≤ r. In general, RF2 ≠ RF1, though there are cases where their values coincide (including Solow's examples in Solow [1967(3), 1970(4)]), namely, in a one-commodity model, or when we consider an individual producer operating under perfectly competitive conditions, or at a switch-point. In the present context of stationary state comparisons, they coincide
1) if RF1 exists,
2) if RF2 is calculated in terms of the relative price system corresponding to the value of RF1, and 3) if there is no redundancy of the commodities in the means of production when the transition is made from one state to the other (see Pasinetti [1969](1), p. 515). It is clear that in these special circumstances RF2 will be equal to RF1 (…).
Harcourt I 164
Pasinetti/Harcourt: If the number of techniques (economies) tends to infinity, switch points become irrelevant for now there always exists another more profitable technique in between two equi-profitable ones. Thus each rate of profits will be associated with a unique technique (and economy). (This is the basis of Pasinetti's contention that the traditional definition of the marginal product of capital is associated with situations in which only one technique is the most profitable at any given rate of profits (…).
We thus arrive at an inverse monotonic relationship between a physical rate of return - RF2 - and an increasing quantity of (physical) capital. Moreover, it is an inverse relationship which permits 'an extension to the rate of profits of the marginal theory of prices' in which prices are 'indexes of scarcity' - as indeed they are here, for the smaller, i.e. the more scarce, is the existing quantity of corn, the higher is the physical rate of return (and of profits) to more savings.
Harcourt I 165
The upshot of the argument is that RF2 is intended to form the basis in a realistic heterogeneous capital-goods model of a function which relates amounts wanted - values - to scarcity prices. The proof (for the discrete case) is very simple. The malleability assumption means ((s) that technical equipment and thus progress can be seen as malleable as a construction kit) that there are no discarded capital goods when one system supersedes another, so that
RF2 = σ = [p(r) · (Qβ − Qα)] / [p(r) · (Kβ − Kα)]

where p is the vector of prices corresponding to the rate of profits (r) and the Qs and Ks are collections of heterogeneous goods treated as outputs and inputs respectively. But the 'unobtrusive postulate' implies that there can be only one switch point between any two techniques and that there is a definite ordering on either side of the switch-point techniques, properties associated with the physical rate of return (…).
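The ratio can be illustrated numerically; the two-commodity price, output, and capital vectors below are invented for the sketch, not taken from Pasinetti or Harcourt:

```python
def dot(u, v):
    """Inner product of two commodity vectors."""
    return sum(x * y for x, y in zip(u, v))

p = [1.0, 2.0]                                # prices at the chosen rate of profits r
Q_alpha, Q_beta = [10.0, 5.0], [11.0, 6.0]    # outputs of systems alpha and beta
K_alpha, K_beta = [20.0, 8.0], [22.0, 10.0]   # capital stocks of the two systems

# RF2 = p·(Qβ − Qα) / p·(Kβ − Kα): value of the extra output per unit of
# extra value of capital, both valued at the prices belonging to r.
rf2 = dot(p, [b - a for a, b in zip(Q_alpha, Q_beta)]) / \
      dot(p, [b - a for a, b in zip(K_alpha, K_beta)])
assert abs(rf2 - 0.5) < 1e-9   # (1*1 + 2*1) / (1*2 + 2*2) = 3/6
```

The example also makes the text's point concrete: change p and, in general, RF2 changes with it, so RF2 is not independent of r.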
>Rate of return/Harcourt.

1. Pasinetti, L. L. [1969] 'Switches of Technique and the "Rate of Return" in Capital Theory', Economic Journal, LXXIX, pp. 508-31.
2. Fisher, Irving [1930] The Theory of Interest (New York: Macmillan).
3. Solow, R. M. [1967] 'The Interest Rate and Transition between Techniques', Socialism, Capitalism and Economic Growth, Essays presented to Maurice Dobb, ed. by C. H. Feinstein (Cambridge: Cambridge University Press), pp. 30-9.
4. Solow, R. M. [1970] 'On the Rate of Return: Reply to Pasinetti Economic Journal, LXXX, pp.423-8.


Harcourt I
Geoffrey C. Harcourt
Some Cambridge controversies in the theory of capital Cambridge 1972
Rate of Return Harcourt Harcourt I 167
Rate of return/technical progress/Harcourt: "Malleability" ((s) that technical equipment and thus progress can be seen as malleable as a construction kit), i.e. no redundancy, gets rid of D. H. Robertson's grumble; see Robertson [1949](1); all existing capital goods may be used and workers may remain, if they wish, teetotal. >Marginal product of labour/Robertson.
The high number of techniques confines the distance which p may move away from r*. And, most striking of all, if we let the techniques become very many, approaching an infinite number, so that the change in the magnitude of r needed to go from one to another becomes infinitesimally small then, due to the 'unobtrusive postulate', the differences in values of capital goods and outputs per man likewise become smaller and smaller.
((s) the 'unobtrusive postulate': the 'unobtrusive postulate' implies that there can be only one switch point between any two techniques and that there is a definite ordering on either side of the switch-point techniques, properties associated with the physical rate of return (…).)
Harcourt: In the limit, both change instantaneously, the switch point becomes irrelevant (as in the artificial case) and 'at any level of the rate of profits, there always is one technique which is the most profitable one . . . at the same time any change in the rate of profits, no matter how small, always causes a change in the most profitable technique', Pasinetti [1969](2), p. 521.
Such, perhaps, is the post- (technical) revolution which lies behind Irving Fisher's pre-revolution investment-opportunity schedules, as brought into the modern era by Hirshleifer [1958](3).
I add 'perhaps' because Fisher's examples are always for individuals. It does, however, seem (and this is confirmed by Stigler [1941](4)) that the early neoclassicals were after bigger game than a partial analysis of an individual firm or industry, and the scope of the questions examined by Dewey [1965](5) in the book he is pleased to call Modern Capital Theory confirms that this view still appeals to some.
What Marshall was after we can never really be sure; for, characteristically, he always shied away from openly committing himself. (Keynes [1933](6), pp. 223-4, though, had no such scruples in his assessment of Marshall's stand - except on the subject of French letters, for which see Holroyd [1968 (7)], pp. 514-15, n1.)
But the results of the reswitching and capital-reversing debate show that there is no justification at all for the 'unobtrusive postulate', for we know that in a heterogeneous capital-goods model (where capital goods are really so and not just jelly in disguise), a lower rate of profits may well be associated with a lower output per head, with a lower value of capital per head and with a lower net output-capital ratio.
Harcourt I 168
Rate of profit: Moreover, the same technique may be the most profitable at two widely separated rates of profits. Technical progress: Nearness of techniques as assessed by the rate of profits at which they are most profitable may tell us nothing at all about how close (or far apart) are their values of capital or outputs per head. And - most damaging of all for RF2 as a surrogate for a well-behaved physical rate of return, i.e. a marginal product which declines as the value of capital increases - the difference (r - p(r)) may become indifferently positive or negative at any level of the rate of profits, so losing the properties of a physical rate of return.
>Surrogate production function, >Rate of profit, >Rate of return/Economic theories.

1. Robertson, D. H. [1949] 'Wage Grumbles', Readings in the Theory of Income Distribution (American Economic Association), S. 221-36.
2. Pasinetti, L. L. [1969] 'Switches of Technique and the "Rate of Return" in Capital Theory', Economic Journal, LXXIX, pp. 508-31.
3. Hirshleifer, J. [1958] 'On the Theory of Optimal Investment Decision', Journal of Political Economy, LXVI, S. 329-52.
4. Stigler, George J. [1941] Production and Distribution Theories: The Formative Period (New York: Macmillan).
5. Dewey, Donald [1965] Modern Capital Theory (New York: Columbia University
Press).
6. Keynes, J. M. [1933] Essays in Biography (London: Macmillan).
7. Holroyd, Michael [1968] Lytton Strachey: a Critical Biography. Vol. 11 The Years of Achievement (1910-1932) (London: Heinemann).

Harcourt I
Geoffrey C. Harcourt
Some Cambridge controversies in the theory of capital Cambridge 1972

Second Order Logic, HOL Cresswell I 134
Imbroglio/Geach/Cresswell: e.g. Each of two Turks fought against each of two Greeks. - Problem: the following does not work: each of two Greeks was F and each of two Turks was F. >Quantification over properties.
I 135
E.g. most fundamentalists are creationists. Problem: this is not easy to express with two predicates F and C; in first-order logic it cannot be brought into the required order.
I 137
Solution: second-order logic: here we can say that there is a 1:1 function from the F-creationists to the fundamentalists, but not vice versa. >Everyday language, >Unambiguity, >Ordering.

Cr I
M. J. Cresswell
Semantical Essays (Possible worlds and their rivals) Dordrecht Boston 1988

Cr II
M. J. Cresswell
Structured Meanings Cambridge Mass. 1984

Selection Dawkins I 38
Selection/Dawkins: Thesis: Selection occurs at the lowest level (not the species, not the individual, but the gene, the unit of heredity). >Genes, >Genes/Dawkins.
I 42
Selection/Dawkins: Earliest form of selection: simply the selection of more stable molecules and the rejection of unstable ones. It would make no sense to shake together the right number of atoms with the right amount of added energy and expect a human to come out; the age of the universe would not suffice for that.
I 73
Order/ordering: The cards themselves survive the shuffling. Selection/Dawkins: If genes always mixed, selection would be absolutely impossible.
I 158
Def Degree of relationship/Dawkins: generation span: steps on the family tree. To an uncle: 3 steps; the common ancestor is e.g. A's father and B's grandfather.
Degree of relationship: (1/2) per generation span, multiplied by itself: for g steps, (1/2)^g.
But that is only part of the degree of relationship. In case of several common relatives they must also be determined.
I 158
Selection/relationship/altruism/Dawkins: Now we can correctly calculate the chances for the multiplication of genes for altruism: e.g. a gene for the suicidal rescue of five cousins would not become more numerous, but one for the suicidal rescue of five brothers or ten cousins probably would.
>Altruism.
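The calculation sketched in the text, (1/2)^g per generation span summed over common ancestors, combined with Hamilton's rule rB > C, can be written out as follows (the cost of the altruist's suicide is normalized to 1):

```python
def relatedness(steps, common_ancestors):
    """Degree of relationship: (1/2)^g per route, once per common ancestor."""
    return common_ancestors * (0.5 ** steps)

r_brother = relatedness(2, 2)   # via both parents: 2 * (1/2)^2 = 1/2
r_cousin  = relatedness(4, 2)   # via both grandparents: 2 * (1/2)^4 = 1/8

def gene_spreads(r, beneficiaries, cost=1.0):
    """Hamilton's rule: the altruism gene spreads when r * B > C."""
    return r * beneficiaries > cost

assert not gene_spreads(r_cousin, 5)   # 5 * 1/8 < 1: five cousins do not pay
assert gene_spreads(r_brother, 5)      # 5 * 1/2 > 1: five brothers do
assert gene_spreads(r_cousin, 10)      # 10 * 1/8 > 1: ten cousins do
```

The three assertions reproduce exactly the cousin/brother examples from the text above.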
I 162
Family altruism/Dawkins: parental care is merely a special case of family altruism. The fact that siblings do not exchange genes is not relevant, because they have obtained identical copies of the same genes from the same parents.
Family selection/kin selection/DawkinsVsWilson, E.O.: Wilson transfers the concept of group selection to the family. The core of Hamilton's argument, however, is that the separation between family and non-family is not sharp, but a matter of mathematical probability.
Hamilton's thesis(1) does not imply that animals are selfless towards all family members and self-serving to all outsiders.
I 164
DawkinsVsWilson: He does not consider offspring as relatives! (I 461: Wilson has now withdrawn that). Def Group selection/Dawkins: different survival rate in groups of individuals.
I 164
Kin selection/Dawkins: Of course animals cannot be expected to count how many relatives they are saving!
I 462
Kin selection/Dawkins: It is a frequent mistake for students to assume that animals must count how many relatives they are saving.
I 165
Kin selection/Dawkins: To determine the degree of relationship actuarial weightings can be used as a basis. How much of my wealth would I invest in the life of another individual.
I 166
An animal can behave as if it had done this calculation. E.g. just as a human catches a ball as if he had solved a series of differential equations.
I 372
Gene/selection/Dawkins: On reasonable consideration, selection does not directly affect the genes. The DNA is spun into proteins, wrapped in membranes, shielded from the world and invisible to natural selection. (Like GouldVsDawkins.) Selection would also hardly have a criterion for DNA molecules: all genes look the same, just as all tapes look the same. Genes show themselves only in their effects.
((s) effect creates identity.)

1. Hamilton, W.D. 1964. The Genetical Evolution of Social Behavior. In: Journal of Theoretical Biology 7. pp. 1-16; 17-52.

Da I
R. Dawkins
The Selfish Gene, Oxford 1976
German Edition:
Das egoistische Gen, Hamburg 1996

Da II
M. St. Dawkins
Through Our Eyes Only? The Search for Animal Consciousness, Oxford/New York/Heidelberg 1993
German Edition:
Die Entdeckung des tierischen Bewusstseins Hamburg 1993

Sense Data Theory Goodman IV 18
Some authors assume that sense data are primarily given. The difficulty with such talk is that neither a sensation nor anything else is unlabelled. Thinking is actively involved in perception; it forces ordering on us and makes that ordering identifiable.
>World/Thinking, >Perception, >Nature, >Reality; cf. >Thing in itself.

G IV
N. Goodman
Catherine Z. Elgin
Reconceptions in Philosophy and Other Arts and Sciences, Indianapolis 1988
German Edition:
Revisionen Frankfurt 1989

Goodman I
N. Goodman
Ways of Worldmaking, Indianapolis/Cambridge 1978
German Edition:
Weisen der Welterzeugung Frankfurt 1984

Goodman II
N. Goodman
Fact, Fiction and Forecast, New York 1982
German Edition:
Tatsache Fiktion Voraussage Frankfurt 1988

Goodman III
N. Goodman
Languages of Art. An Approach to a Theory of Symbols, Indianapolis 1976
German Edition:
Sprachen der Kunst Frankfurt 1997

Sense Data Theory Kuhn I 125
Sense-data/Illusion/Test/Kuhn: If the experimenter draws attention to the picture puzzle, he is himself the source of the sense-data.
I 141
Order/Ordering/Sense-data/Stimuli/Kuhn: Questions about retinal impressions already presuppose a world that is perceptually and theoretically divided in a certain way. See also >Perception, >Material Things, >Reality, >Stimuli/Kuhn.

Kuhn I
Th. Kuhn
The Structure of Scientific Revolutions, Chicago 1962
German Edition:
Die Struktur wissenschaftlicher Revolutionen Frankfurt 1973

Sequences Sequence, logic: ordering within a set of objects (numbers, statements). See also sequent calculus, natural deduction, satisfaction.

Sequential Decision Making Norvig Norvig I 645
Sequential Decision Making/AI research/Norvig/Russell: [this is about] the computational issues involved in making decisions in a stochastic environment. Sequential decision problems incorporate utilities, uncertainty, and sensing, and include search and planning problems as special cases. >Planning/Norvig, >Decision networks/Norvig, >Decision theory/AI Research, >Utility/AI Research, >Utility theory/Norvig, >Environment/AI research, >Multi-attribute utility theory/AI research.
Norvig I 649
Optimal policy: the optimal policy for a finite horizon is non-stationary. With no fixed time limit, on the other hand, there is no reason to behave differently in the same state at different times. Hence, the optimal action depends only on the current state, and the optimal policy is stationary. States: In the terminology of multi-attribute utility theory, each state si can be viewed as an attribute of the state sequence [s0, s1, s2 . . .]. >Values/AI research.
Norvig I 684
Sequential decision problems in uncertain environments, also called Markov decision processes, or MDPs, are defined by a transition model specifying the probabilistic outcomes of actions and a reward function specifying the reward in each state.
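An MDP so defined (transition model plus reward function) can be solved by iterating the Bellman update until the utilities converge; the derived policy then depends only on the current state, i.e. it is stationary. The sketch below is a minimal illustration: the two-state toy model, its transition probabilities, and its rewards are invented, not taken from the book.

```python
# Minimal value-iteration sketch for an MDP given by a transition model
# P(s'|s,a) and a reward function R(s), with discount factor gamma.
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Iterate the Bellman update
    U(s) <- R(s) + gamma * max_a sum_s' P(s'|s,a) U(s')."""
    U = {s: 0.0 for s in states}
    while True:
        U_new = {s: R[s] + gamma * max(
                     sum(p * U[s2] for s2, p in P[s][a]) for a in actions)
                 for s in states}
        if max(abs(U_new[s] - U[s]) for s in states) < eps:
            return U_new
        U = U_new

# Toy example: 'stay' keeps the agent put, 'go' switches state with prob. 0.8.
states, actions = ["a", "b"], ["stay", "go"]
P = {"a": {"stay": [("a", 1.0)], "go": [("b", 0.8), ("a", 0.2)]},
     "b": {"stay": [("b", 1.0)], "go": [("a", 0.8), ("b", 0.2)]}}
R = {"a": 0.0, "b": 1.0}
U = value_iteration(states, actions, P, R)
print(U["b"] > U["a"])  # the rewarding state ends up with higher utility
```

Because there is no fixed time limit, the same utilities (and hence the same greedy policy) apply at every step, matching the stationarity point above.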
Norvig I 685
Richard Bellman developed the ideas underlying the modern approach to sequential decision problems while working at the RAND Corporation beginning in 1949. (…) Bellman’s book, Dynamic Programming (1957)(1), gave the new field a solid foundation and introduced the basic algorithmic approaches. Ron Howard’s Ph.D. thesis (1960)(2) introduced policy iteration and the idea of average reward for solving infinite-horizon problems. Several additional results were introduced by Bellman and Dreyfus (1962)(3). Modified policy iteration is due to van Nunen (1976)(4) and Puterman and Shin (1978)(5). Asynchronous policy iteration was analyzed by Williams and Baird (1993)(6) (…). The analysis of discounting in terms of stationary preferences is due to Koopmans (1972)(7). The texts by Bertsekas (1987)(8), Puterman (1994)(9), and Bertsekas and Tsitsiklis (1996)(10) provide a rigorous introduction to sequential decision problems. Papadimitriou and Tsitsiklis (1987)(11) describe results on the computational complexity of MDPs. Seminal work by Sutton (1988)(12) and Watkins (1989)(13) on reinforcement learning methods for solving MDPs played a significant role in introducing MDPs into the AI community, as did the later survey by Barto et al. (1995)(14). >Markov Decision Processes/Norvig.


1. Bellman, R. E. (1957). Dynamic Programming. Princeton University Press
2. Howard, R. A. (1960). Dynamic Programming and Markov Processes. MIT Press.
3. Bellman, R. E. and Dreyfus, S. E. (1962). Applied Dynamic Programming. Princeton University Press.
4. van Nunen, J. A. E. E. (1976). A set of successive approximation methods for discounted Markovian decision problems. Zeitschrift fur Operations Research, Serie A, 20(5), 203–208.
5. Puterman, M. L. and Shin, M. C. (1978). Modified policy iteration algorithms for discounted Markov decision problems. Management Science, 24(11), 1127-1137.
6. Williams, R. J. and Baird, L. C. I. (1993). Tight performance bounds on greedy policies based on imperfect value functions. Tech. rep. NU-CCS-93-14, College of Computer Science, Northeastern University.
7. Koopmans, T. C. (1972). Representation of preference orderings over time. In McGuire, C. B. and Radner, R. (Eds.), Decision and Organization. Elsevier/North-Holland.
8. Bertsekas, D. (1987). Dynamic Programming: Deterministic and Stochastic Models. Prentice-Hall.
9. Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley
10. Bertsekas, D. and Tsitsiklis, J. N. (1996). Neurodynamic programming. Athena Scientific.
11. Papadimitriou, C. H. and Tsitsiklis, J. N. (1987). The complexity of Markov decision processes. Mathematics of Operations Research, 12(3), 441-450.
12. Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3, 9-44.
13. Watkins, C. J. (1989). Models of Delayed Reinforcement Learning. Ph.D. thesis, Psychology Department, Cambridge University.
14. Barto, A. G., Bradtke, S. J., and Singh, S. P. (1995). Learning to act using real-time dynamic programming. AIJ, 73(1), 81-138.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010

Software Zittrain I 184
Software/privacy protection/Zittrain: ((s) This is about maintaining privacy as software becomes a service.) The use of our PCs is shrinking to that of mere workstations, with private data stored remotely in the hands of third parties.
I 185
The latest version of Google Desktop is a PC application that offers a “search across computers” feature. It is advertised as allowing users with multiple computers to use one computer to find documents that are stored on another. (1) The application accomplishes this by sending an index of the contents of users’ documents to Google itself. (2) ((s) Written in 2008). The movement of data from the PC means that warrants
I 186
served upon personal computers and their hard drives will yield less and less information as the data migrates onto the Web, driving law enforcement to the networked third parties now hosting that information. When our diaries, e-mail, and documents are no longer stored at home but instead are business records held by a dot-com, nearly all formerly transient communication ends up permanently and accessibly stored in the hands of third parties, and subject to comparatively weak statutory and constitutional protections against surveillance. (3) A warrant is generally required for the government to access data on one’s own PC, and warrants require law enforcement to show probable cause that evidence of a crime will be yielded by the search. (4) In other words, the government must surmount a higher hurdle to search one’s PC than to eavesdrop on one’s data communications, and it has the fewest barriers when obtaining data stored elsewhere. (5)

1. See Google, Google Desktop—Features, http://desktop.google.com/features.html#searchremote (last visited May 15, 2007).
2. Matthew Fordahl, How Google’s Desktop Search Works, MSNBC.com, Oct. 14, 2004, http://www.msnbc.msn.com/id/6251128/.
3. See, e.g., Declan McCullagh, Police Blotter: Judge Orders Gmail Disclosure, CNET NEWS.COM, Mar. 17, 2006, http://news.com.com/Police+blotter+Judge+orders+Gmail+disclosure/2100-1047_3-6050295.html (reporting on a hearing that contested a court subpoena ordering the disclosure of all e-mail messages, including deleted ones, from a Gmail account).
4. Orin Kerr, Search and Seizure: Past, Present, and Future, OXFORD ENCYCLOPEDIA OF LEGAL HISTORY (2006).
5. Cf. Orin S. Kerr, Searches and Seizures in a Digital World, 119 HARV. L. REV. 531, 557 (2005) (“Under Arizona v. Hicks (480 U.S. 321 (1987)), merely copying information does not seize anything.” (footnote omitted)).


Zittrain I
Jonathan Zittrain
The Future of the Internet--And How to Stop It New Haven 2009

Solipsism Chisholm II 31
VsSolipsism/Rutte: Solipsism has testing problems internal to the subject: it believes in past and future personal experiences as well as orderings, makes assumptions just as strong as those of realism, and carries the same burden of proof. >Realism, >Verification, >Proofs, >Method, >Past, >Future.

Rutte, Heiner. Mitteilungen über Wahrheit und Basis empirischer Erkenntnis, mit besonderer Berücksichtigung des Wahrnehmungs- und Außenweltproblems. In: M.David/L. Stubenberg (Hg) Philosophische Aufsätze zu Ehren von R.M. Chisholm Graz 1986

Chisholm I
R. Chisholm
The First Person. Theory of Reference and Intentionality, Minneapolis 1981
German Edition:
Die erste Person Frankfurt 1992

Chisholm II
Roderick Chisholm

In
Philosophische Aufsäze zu Ehren von Roderick M. Ch, Marian David/Leopold Stubenberg Amsterdam 1986

Chisholm III
Roderick M. Chisholm
Theory of knowledge, Englewood Cliffs 1989
German Edition:
Erkenntnistheorie Graz 2004

Statistical Learning Norvig Norvig I 825
Statistical learning/Norvig/Russell: Statistical learning methods range from simple calculation of averages to the construction of complex models such as Bayesian networks. They have applications throughout computer science, engineering, computational biology, neuroscience, psychology, and physics. ((s) Cf. >Prior knowledge/Norvig). Bayesian learning methods: formulate learning as a form of probabilistic inference, using the observations to update a prior distribution over hypotheses. This approach provides a good way to implement Ockham’s razor, but quickly becomes intractable for complex hypothesis spaces.
Maximum a posteriori (MAP) learning: selects a single most likely hypothesis given the data. The hypothesis prior is still used and the method is often more tractable than full Bayesian learning.
Maximum-likelihood learning: simply selects the hypothesis that maximizes the likelihood of the data; it is equivalent to MAP learning with a uniform prior. In simple cases such as linear regression and fully observable Bayesian networks, maximum-likelihood solutions can be found easily in closed form. Naive Bayes learning is a particularly effective technique that scales well.
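As a sketch of why naive Bayes learning scales well: the maximum-likelihood parameters are just relative frequencies counted from the data (here with add-one smoothing to avoid zero probabilities). The toy "spam" data set and all names below are invented for illustration.

```python
# Naive Bayes with (smoothed) maximum-likelihood parameters: training is
# counting, classification is a sum of log-probabilities.
from collections import Counter
import math

def train_nb(examples):
    """examples: list of (feature_dict, label) with boolean feature values.
    Returns class priors and per-class P(feature=1 | class)."""
    labels = Counter(lbl for _, lbl in examples)
    priors = {c: n / len(examples) for c, n in labels.items()}
    counts = {c: Counter() for c in labels}
    for x, c in examples:
        for f, v in x.items():
            if v:
                counts[c][f] += 1
    all_feats = {f for x, _ in examples for f in x}
    # Add-one (Laplace) smoothing keeps every probability strictly positive.
    cond = {c: {f: (counts[c][f] + 1) / (labels[c] + 2) for f in all_feats}
            for c in labels}
    return priors, cond

def classify(x, priors, cond):
    def logp(c):
        lp = math.log(priors[c])
        for f, v in x.items():
            p = cond[c][f]
            lp += math.log(p if v else 1 - p)
        return lp
    return max(priors, key=logp)

data = [({"offer": 1, "meeting": 0}, "spam"),
        ({"offer": 1, "meeting": 0}, "spam"),
        ({"offer": 0, "meeting": 1}, "ham"),
        ({"offer": 0, "meeting": 1}, "ham")]
priors, cond = train_nb(data)
print(classify({"offer": 1, "meeting": 0}, priors, cond))  # spam
```

Training is a single pass over the data, which is why the method "scales well" as the entry says.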
Hidden variables/latent variables: When some variables are hidden, local maximum likelihood solutions can be found using the EM algorithm. Applications include clustering using mixtures of Gaussians, learning Bayesian networks, and learning hidden Markov models.
Norvig I 823
EM Algorithm: Each involves computing expected values of hidden variables for each example and then recomputing the parameters, using the expected values as if they were observed values.
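The E-step/M-step scheme just described can be sketched for the simplest case mentioned above, a mixture of Gaussians: the E-step computes expected component memberships (the hidden variables), the M-step re-estimates parameters as if those expectations were observed counts. This is a bare-bones one-dimensional illustration with synthetic data, not the book's code.

```python
# EM for a two-component 1-D Gaussian mixture.
import math, random

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def em_two_gaussians(data, iters=50):
    mu = [min(data), max(data)]            # crude initialization
    sigma, w = [1.0, 1.0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] * normal_pdf(x, mu[k], sigma[k]) for k in (0, 1)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate parameters from the expected memberships.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            sigma[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                     for r, x in zip(resp, data)) / nk) or 1e-6
            w[k] = nk / len(data)
    return mu, sigma, w

random.seed(0)
data = ([random.gauss(0, 1) for _ in range(200)] +
        [random.gauss(5, 1) for _ in range(200)])
mu, sigma, w = em_two_gaussians(data)
print(mu)  # estimated means, near the true centers 0 and 5
```

Each iteration performs exactly the two steps the entry names: expected values of the hidden memberships, then parameter re-estimation treating those expectations as observed.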
Norvig I 825
Learning the structure of Bayesian networks is an example of model selection. This usually involves a discrete search in the space of structures. Some method is required for trading off model complexity against degree of fit. Nonparametric models: represent a distribution using the collection of data points. Thus, the number of parameters grows with the training set. Nearest-neighbors methods look at the examples nearest to the point in question, whereas kernel methods form a distance-weighted combination of all the examples.
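A nearest-neighbors method in this nonparametric sense can be sketched as a k-NN classifier whose only "parameters" are the stored examples themselves; prediction looks at the k training points closest to the query. The points and labels below are invented for illustration.

```python
# k-nearest-neighbor classification: no fitted parameters, just the data.
from collections import Counter

def knn_classify(train, x, k=3):
    """train: list of (point, label); point and x are tuples of numbers.
    Votes among the k training points nearest to x (squared Euclidean)."""
    dist = lambda p: sum((a - b) ** 2 for a, b in zip(p, x))
    nearest = sorted(train, key=lambda pl: dist(pl[0]))[:k]
    return Counter(lbl for _, lbl in nearest).most_common(1)[0][0]

train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]
print(knn_classify(train, (0.5, 0.5)))  # A
print(knn_classify(train, (5.5, 5.5)))  # B
```

A kernel method would replace the hard k-nearest cutoff with a distance-weighted vote over all examples, as the entry notes.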
History: The application of statistical learning techniques in AI was an active area of research in the early years (see Duda and Hart, 1973)(1) but became separated from mainstream AI as the latter field concentrated on symbolic methods. A resurgence of interest occurred shortly after the introduction of Bayesian network models in the late 1980s; at roughly the same time,
Norvig I 826
a statistical view of neural network learning began to emerge. In the late 1990s, there was a noticeable convergence of interests in machine learning, statistics, and neural networks, centered on methods for creating large probabilistic models from data. Naïve Bayes model: is one of the oldest and simplest forms of Bayesian network, dating back to the 1950s. Its surprising success is partially explained by Domingos and Pazzani (1997)(2). A boosted form of naive Bayes learning won the first KDD Cup data mining competition (Elkan, 1997)(3). Heckerman (1998)(4) gives an excellent introduction to the general problem of Bayes net learning. Bayesian parameter learning with Dirichlet priors for Bayesian networks was discussed by Spiegelhalter et al. (1993)(5). The BUGS software package (Gilks et al., 1994)(6) incorporates many of these ideas and provides a very powerful tool for formulating and learning complex probability models. The first algorithms for learning Bayes net structures used conditional independence tests (Pearl, 1988(7); Pearl and Verma, 1991(8)). Spirtes et al. (1993)(9) developed a comprehensive approach embodied in the TETRAD package for Bayes net learning. Algorithmic improvements since then led to a clear victory in the 2001 KDD Cup data mining competition for a Bayes net learning method (Cheng et al., 2002)(10). (The specific task here was a bioinformatics problem with 139,351 features!) A structure-learning approach based on maximizing likelihood was developed by Cooper and Herskovits (1992)(11) and improved by Heckerman et al. (1994)(12).
Several algorithmic advances since that time have led to quite respectable performance in the complete-data case (Moore and Wong, 2003(13); Teyssier and Koller, 2005(14)). One important component is an efficient data structure, the AD-tree, for caching counts over all possible combinations of variables and values (Moore and Lee, 1997)(15). Friedman and Goldszmidt (1996)(16) pointed out the influence of the representation of local conditional distributions on the learned structure.
Hidden variables/missing data: The general problem of learning probability models with hidden variables and missing data was addressed by Hartley (1958)(17), who described the general idea of what was later called EM and gave several examples. Further impetus came from the Baum–Welch algorithm for HMM learning (Baum and Petrie, 1966)(18), which is a special case of EM. The paper by Dempster, Laird, and Rubin (1977)(19), which presented the EM algorithm in general form and analyzed its convergence, is one of the most cited papers in both computer science and statistics. (Dempster himself views EM as a schema rather than an algorithm, since a good deal of mathematical work may be required before it can be applied to a new family of distributions.) McLachlan and Krishnan (1997)(20) devote an entire book to the algorithm and its properties. The specific problem of learning mixture models, including mixtures of Gaussians, is covered by Titterington et al. (1985)(21). Within AI, the first successful system that used EM for mixture modeling was AUTOCLASS (Cheeseman et al., 1988(22); Cheeseman and Stutz, 1996(23)). AUTOCLASS has been applied to a number of real-world scientific classification tasks, including the discovery of new types of stars from spectral data (Goebel et al., 1989)(24) and new classes of proteins and introns in DNA/protein sequence databases (Hunter and States, 1992)(25).
Maximum-likelihood parameter learning: For maximum-likelihood parameter learning in Bayes nets with hidden variables, EM and gradient-based methods were introduced around the same time by Lauritzen (1995)(26), Russell et al. (1995)(27), and Binder et al. (1997a)(28). The structural EM algorithm was developed by Friedman (1998)(29) and applied to maximum-likelihood learning of Bayes net structures with
Norvig I 827
latent variables. Friedman and Koller (2003)(30). describe Bayesian structure learning. Causality/causal network: The ability to learn the structure of Bayesian networks is closely connected to the issue of recovering causal information from data. That is, is it possible to learn Bayes nets in such a way that the recovered network structure indicates real causal influences? For many years, statisticians avoided this question, believing that observational data (as opposed to data generated from experimental trials) could yield only correlational information—after all, any two variables that appear related might in fact be influenced by a third, unknown causal factor rather than influencing each other directly. Pearl (2000)(31) has presented convincing arguments to the contrary, showing that there are in fact many cases where causality can be ascertained and developing the causal network formalism to express causes and the effects of intervention as well as ordinary conditional probabilities.
Literature on statistical learning and pattern recognition: Good texts on Bayesian statistics include those by DeGroot (1970)(32), Berger (1985)(33), and Gelman et al. (1995)(34). Bishop (2007)(35) and Hastie et al. (2009)(36) provide an excellent introduction to statistical machine learning.
For pattern classification, the classic text for many years has been Duda and Hart (1973)(1), now updated (Duda et al., 2001)(37). The annual NIPS (Neural Information Processing Conference) conference, whose proceedings are published as the series Advances in Neural Information Processing Systems, is now dominated by Bayesian papers. Papers on learning Bayesian networks also appear in the Uncertainty in AI and Machine Learning conferences and in several statistics conferences. Journals specific to neural networks include Neural Computation, Neural Networks, and the IEEE Transactions on Neural Networks.


1. Duda, R. O. and Hart, P. E. (1973). Pattern classification and scene analysis. Wiley.
2. Domingos, P. and Pazzani, M. (1997). On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, 29, 103–30.
3. Elkan, C. (1997). Boosting and naive Bayesian learning. Tech. rep., Department of Computer Science and Engineering, University of California, San Diego.
4. Heckerman, D. (1998). A tutorial on learning with Bayesian networks. In Jordan, M. I. (Ed.), Learning in graphical models. Kluwer.
5. Spiegelhalter, D. J., Dawid, A. P., Lauritzen, S., and Cowell, R. (1993). Bayesian analysis in expert systems. Statistical Science, 8, 219–282.
6. Gilks, W. R., Thomas, A., and Spiegelhalter, D. J. (1994). A language and program for complex Bayesian modelling. The Statistician, 43, 169–178.
7. Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.
8. Pearl, J. and Verma, T. (1991). A theory of inferred causation. In KR-91, pp. 441–452.
9. Spirtes, P., Glymour, C., and Scheines, R. (1993). Causation, prediction, and search. Springer-Verlag.
10. Cheng, J., Greiner, R., Kelly, J., Bell, D. A., and Liu, W. (2002). Learning Bayesian networks from data: An information-theory based approach. AIJ, 137, 43–90.
11. Cooper, G. and Herskovits, E. (1992). A Bayesian method for the induction of probabilistic networks from data. Machine Learning, 9, 309–347.
12. Heckerman, D., Geiger, D., and Chickering, D. M. (1994). Learning Bayesian networks: The combination of knowledge and statistical data. Technical report MSR-TR-94-09, Microsoft Research.
13. Moore, A. and Wong, W.-K. (2003). Optimal reinsertion: A new search operator for accelerated and more accurate Bayesian network structure learning. In ICML-03.
14. Teyssier, M. and Koller, D. (2005). Ordering-based search: A simple and effective algorithm for learning Bayesian networks. In UAI-05, pp. 584–590.
15. Moore, A. W. and Lee, M. S. (1997). Cached sufficient statistics for efficient machine learning with large datasets. JAIR, 8, 67–91.
16. Friedman, N. and Goldszmidt, M. (1996). Learning Bayesian networks with local structure. In UAI-96, pp. 252–262.
17. Hartley, H. (1958). Maximum likelihood estimation from incomplete data. Biometrics, 14, 174–194.
18. Baum, L. E. and Petrie, T. (1966). Statistical inference for probabilistic functions of finite state Markov chains. Annals of Mathematical Statistics, 41.
19. Dempster, A. P., Laird, N., and Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. J. Royal Statistical Society, 39 (Series B), 1–38.
20. McLachlan, G. J. and Krishnan, T. (1997). The EM Algorithm and Extensions. Wiley.
21. Titterington, D. M., Smith, A. F. M., and Makov, U. E. (1985). Statistical analysis of finite mixture distributions. Wiley.
22. Cheeseman, P., Self, M., Kelly, J., and Stutz, J. (1988). Bayesian classification. In AAAI-88, Vol. 2, pp. 607–611.
23. Cheeseman, P. and Stutz, J. (1996). Bayesian classification (AutoClass): Theory and results. In Fayyad, U., Piatesky-Shapiro, G., Smyth, P., and Uthurusamy, R. (Eds.), Advances in Knowledge Discovery and Data Mining. AAAI Press/MIT Press.
24. Goebel, J., Volk, K., Walker, H., and Gerbault, F. (1989). Automatic classification of spectra from the infrared astronomical satellite (IRAS). Astronomy and Astrophysics, 222, L5–L8.
25. Hunter, L. and States, D. J. (1992). Bayesian classification of protein structure. IEEE Expert, 7(4), 67–75.
26. Lauritzen, S. (1995). The EM algorithm for graphical association models with missing data. Computational Statistics and Data Analysis, 19, 191–201.
27. Russell, S. J., Binder, J., Koller, D., and Kanazawa, K. (1995). Local learning in probabilistic networks with hidden variables. In IJCAI-95, pp. 1146–52.
28. Binder, J., Koller, D., Russell, S. J., and Kanazawa, K. (1997a). Adaptive probabilistic networks with hidden variables. Machine Learning, 29, 213–244.
29. Friedman, N. (1998). The Bayesian structural EM algorithm. In UAI-98.
30. Friedman, N. and Koller, D. (2003). Being Bayesian about Bayesian network structure: A Bayesian approach to structure discovery in Bayesian networks. Machine Learning, 50, 95–125.
31. Pearl, J. (2000). Causality: Models, Reasoning, and Inference. Cambridge University Press.
32. DeGroot, M. H. (1970). Optimal Statistical Decisions. McGraw-Hill.
33. Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis. Springer Verlag.
34. Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. (1995). Bayesian Data Analysis. Chapman & Hall.
35. Bishop, C. M. (2007). Pattern Recognition and Machine Learning. Springer-Verlag.
36. Hastie, T., Tibshirani, R., and Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference and Prediction (2nd edition). Springer-Verlag.
37. Duda, R. O., Hart, P. E., and Stork, D. G. (2001). Pattern Classification (2nd edition). Wiley.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010

Structures Bourbaki Thiel I 270
Bourbaki speaks of a reordering of the total area of mathematics according to "mother structures". In modern mathematics, abstractions, especially structures, are understood as equivalence classes and thus as sets. >Sets, >Set theory, >Structures/Mathematics, >Abstracta, >Mathematical entities, >Equivalence classes.
Thiel I 307
Bourbaki opposes the "modern" structures to the classical "disciplines". The theory of prime numbers is closely related to the theory of algebraic curves. >Primes.
Euclidean geometry borders on the theory of integral equations. The organizing principle will be a hierarchy of structures, running from the simple to the complex and from the general to the particular.
>Geometry.


T I
Chr. Thiel
Philosophie und Mathematik Darmstadt 1995
Time Peacocke I 162
Time/Peacocke: The ordering of thoughts is basic for the understanding of time, not vice versa; there is no underlying date system. Cf. >Apprehension, >Apperception, >Thinking, >World/Thinking.
E.g., when I remember that interest rates fell yesterday, this does not hold in virtue of a property or identity concerning "yesterday".
>Time, >Past, >Present, >Future

Peacocke I
Chr. R. Peacocke
Sense and Content Oxford 1983

Peacocke II
Christopher Peacocke
"Truth Definitions and Actual Languages"
In
Truth and Meaning, G. Evans/J. McDowell Oxford 1976


The author or concept searched is found in the following controversies.
Disputed term/author/ism Author Vs Author
Entry
Reference
Sense Data Goodman Vs Sense Data IV 17
It is sometimes assumed that sensations are primarily given. The difficulty with this way of speaking is that neither a sensation nor anything else is untagged. Thinking is actively involved in perception; it forces ordering on us and makes that ordering identifiable.

G IV
N. Goodman
Catherine Z. Elgin
Reconceptions in Philosophy and Other Arts and Sciences, Indianapolis 1988
German Edition:
Revisionen Frankfurt 1989

Goodman I
N. Goodman
Ways of Worldmaking, Indianapolis/Cambridge 1978
German Edition:
Weisen der Welterzeugung Frankfurt 1984

Goodman II
N. Goodman
Fact, Fiction and Forecast, New York 1982
German Edition:
Tatsache Fiktion Voraussage Frankfurt 1988

Goodman III
N. Goodman
Languages of Art. An Approach to a Theory of Symbols, Indianapolis 1976
German Edition:
Sprachen der Kunst Frankfurt 1997

The author or concept searched is found in the following theses of the more related field of specialization.
Disputed term/author/ism Author
Entry
Reference
Ordering Kauffman, St. I 9
Ordering/human/Kauffman: Thesis: natural selection alone did not design us; the primary source of order is self-organization. The complex whole can, in a completely unmystical sense, be "emergent" and show features that are lawful in themselves.
  I 21
Man then appears not as a product of random events, but as a result of an inevitable development!
I 229
Order/Kauffman: Thesis: order is possible even without selection. Today we need a new theoretical framework.

The author or concept searched is found in the following theses of an allied field of specialization.
Disputed term/author/ism Author
Entry
Reference
Ordering Foucault, M. Habermas I 304
Foucault/Order of Things: Thesis: representation alone provides its order; it is not pre-sorted.

Ha I
J. Habermas
Der philosophische Diskurs der Moderne Frankfurt 1988

Ha III
Jürgen Habermas
Theorie des kommunikativen Handelns Bd. I Frankfurt/M. 1981

Ha IV
Jürgen Habermas
Theorie des kommunikativen Handelns Bd. II Frankfurt/M. 1981