Dictionary of Arguments


Philosophical and Scientific Issues in Dispute
 

 


The author or concept searched is found in the following 21 entries.
Calculus Hoyningen-Huene HH 257
Proof Theory/Hoyningen-Huene: here the tendency toward abstraction is driven even further than in model theory: in order to abstract from the meaning of the connectives even in the definition of the metalogical terms, one proceeds purely syntactically. A calculus is nothing more than a system of production rules for printed figures. > Uninterpreted formal system. - The calculi differ in their use of the operators.
---
HH 270
Calculus/Hoyningen-Huene: progress on the decision problem: one can construct precisely those calculi that are adequate for solving the problem. - There are calculi which produce exactly those printed figures that coincide with the printed figures of the universally valid formulas. - The adequacy of the calculus only says: if the formula is universally valid, then there is a proof of it in the calculus.

Church-Turing Thesis Lorenzen Berka I 266
Church thesis/Lorenzen: the thesis is an equating of "constructive" with "recursive". >Constructivism, >Recursion, >Recursivity.
LorenzenVsChurch: this view is too narrow: it no longer permits the free use of quantification over the natural numbers.
>Quantification, >Numbers, >Infinity.
I 267
Decision problem/ChurchVsLorenzen: (according to Lorenzen): Advantage: greater clarity: when one restricts oneself to recursive statement forms, there can never be a dispute as to whether one of the admitted statements is true or false. The definition of recursiveness guarantees precisely decision-definiteness, that is, the existence of a decision procedure. >Decidability, >Decision problem.(1)

1. P. Lorenzen, Ein dialogisches Konstruktivitätskriterium, in: Infinitistic Methods, (1961), 193-200

Lorn I
P. Lorenzen
Constructive Philosophy Cambridge 1987


Berka I
Karel Berka
Lothar Kreiser
Logik Texte Berlin 1983
Complexes/Complexity Chaitin Barrow I 78
Complexity/Decidability/Paradox/Chaitin/Barrow: Command: "Print a sequence whose complexity can be proved to be greater than the length of this program!" The computer cannot comply with this. Each sequence that it generates must be of lower complexity than the length of the sequence itself (and also than that of its program).
(Von Neumann: a machine can only build another machine that is one degree less complex than itself. (Kursbuch 8, 139 ff.)(1))
>J.v. Neumann.
In the above case, the computer cannot decide whether the number R is random or not. Thus the Goedel theorem is proved.
>Decisions, >Decidability, >Decision theory, >Decision-making process, >K. Gödel.
In the late 1980s, even simpler proofs of Gödel's theorem were found, by which it was transformed into statements about information and randomness.
Information content/Barrow: one can assign a certain amount of information to a system of axioms and rules by defining its information content as the size of the computer program that checks all the possible chains of inference.
I 78/79
If one attempts to extend the bounds of provability by new axioms, there are still larger numbers, or sequences of numbers, whose randomness remains unprovable. Chaitin: he proved this with the Diophantine equation:

x + y² = q
If we look for solutions with positive integers for x and y, Chaitin asked,...
I 80
...whether such an equation typically has finitely many or infinitely many integer solutions as we let q run through all possible values q = 1, 2, 3, 4, .... At first sight this hardly deviates from the original question of whether the equation has an integer solution for
q = 1, 2, 3, ...
However, Chaitin's question is infinitely more difficult to answer. The answer is random in the sense that it requires more information than is given in the problem.
There is no way to a solution. For each q, write 0 if the equation has only finitely many solutions, and 1 if there are infinitely many. The result is a sequence of ones and zeros representing a real number.
Its value cannot be calculated by any computer.
The individual digits are logically completely independent of each other.
omega = 0010010101001011010 ...
Then Chaitin transformed this number into a decimal number...
I 81
...omega = 0.0010010101001011010 ... and thus obtained the probability that a randomly chosen computer program will eventually stop after a finite number of steps. It is never equal to 0 or 1.
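Schematically (only a restatement of the construction just described, not a formula from Barrow's text), with b_q denoting the digit assigned to the value q:

    b_q = \begin{cases} 0 & \text{if the equation has only finitely many solutions for this } q \\ 1 & \text{if it has infinitely many} \end{cases}

    \omega = 0.b_1 b_2 b_3 \ldots \;=\; \sum_{q \ge 1} b_q\, 2^{-q} \quad \text{(read as a binary expansion)}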
Still another important consequence: if we choose any very large number for q, there is no way to decide whether the qth binary digit of the number omega is a zero or a one. Human thinking has no access to an answer to this question.
However, the inevitable undecidability of some statements follows from the low complexity of the computer program, which is based on arithmetic.
>Decision problem, >Software, >Computer programming.

1. Kursbuch 8: Mathematik. H. M. Enzensberger (Hg.), Frankfurt/M. 1967.


B I
John D. Barrow
Warum die Welt mathematisch ist Frankfurt/M. 1996

B II
John D. Barrow
The World Within the World, Oxford/New York 1988
German Edition:
Die Natur der Natur: Wissen an den Grenzen von Raum und Zeit Heidelberg 1993

B III
John D. Barrow
Impossibility. The Limits of Science and the Science of Limits, Oxford/New York 1998
German Edition:
Die Entdeckung des Unmöglichen. Forschung an den Grenzen des Wissens Heidelberg 2001
Decidability Chaitin Genz II 217
Calculability/Chaitin: Thesis: Mathematics and physics are connected. If physics should lead to a non-calculable number, we would have to change our concept of calculability. >Calculability, >Non-calculability, >Decisions, >Decidability, >Decision theory, >Decision-making process, >Decision problem, >Mathematics,
>Physics.


Gz I
H. Genz
Gedankenexperimente Weinheim 1999

Gz II
Henning Genz
Wie die Naturgesetze Wirklichkeit schaffen. Über Physik und Realität München 2002
Decidability Genz II 206
Compressibility/Decidability/Genz: there can be no computer program that decides, for an arbitrary set of data, whether it is compressible. Stronger: there is no way to prove that a given set of data is not compressible.
Compressibility: can be proven but not refuted.
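A minimal sketch of this asymmetry in Python (the choice of zlib as compressor is an illustrative assumption, not part of Genz's text): finding a shorter encoding is a positive witness of compressibility, whereas the failure of one particular method proves nothing about incompressibility.

    import os
    import zlib

    def witness_compressible(data: bytes) -> bool:
        # A successful compression exhibits a shorter description of the data,
        # i.e. a positive proof that the data is compressible.
        return len(zlib.compress(data, 9)) < len(data)

    structured = b"ab" * 5000        # highly regular: a short rule generates it
    random_like = os.urandom(10000)  # typically incompressible for zlib

    print(witness_compressible(structured))   # True: compressibility is witnessed
    print(witness_compressible(random_like))  # almost always False - but this failure
                                              # is no proof of incompressibility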
II 207
Example number pi: π can be generated by a finite program. There are numbers that cannot be calculated in principle:
Omega/Chaitin/Genz: this is what Chaitin calls a certain number of which not a single digit can be calculated. It is not accessible to any rule, it is outside mathematics.
>Gregory Chaitin.
II 218
Decidability/calculability/undecidable/non-calculable/Genz: non-calculable numbers are actually the same as non-decidable numbers. Incalculability/physics/quantum cosmology/Genz: apparent undecidability: the ... of the wave function of the universe shows apparent undecidability. It concerns the possible geometries of three-dimensional spaces.
>Wave function.
Simplified: e.g. a circle (one-dimensional): to calculate the wave function of the universe with the circle as argument, the wave function can be represented as a sum of summands: a series of handleless cups, a series of cups with one handle, a series of cups with two handles, etc., where the handles can be shaped differently in each case. These represent four-dimensional spaces (with time as the 4th dimension).
Circle: here time is added as the 2nd dimension. Together they form the two dimensions of the cup surfaces.
II 219
3rd dimension: the 3rd dimension, in which the surfaces are embedded, serves only as an illustration. It has no equivalent in reality. Problem: it is not possible to decide which cups are to be regarded as the same and which as different (cups with differently shaped handles have the same topology).
Question: undecidable: whether two cups have the same or different number of handles. (Of course, this is about four, not two dimensions.)
Undecidability/Genz: undecidability occurs here only if a computer is to perform the calculation: to describe a cup, it is covered with a certain number of equal triangles.
Problem: there cannot be a computer program that decides for any number of covering flat triangles whether two (four-dimensional) cups have the same number of handles.
II 220
Theorem: the theorem is rather tame: it only excludes that a program makes the decision for an arbitrary number of covering flat triangles, not for a given number - e.g. one million - of flat triangles. This is simply a matter of increasing accuracy. That would be an example of a non-calculable number.
Wave function of the Universe/Genz: it could be shown that there are calculable representations of it, so that the incalculability suggested by the defining rule (similar to that of >NOPE) does not actually exist.
Definition NOPE/Genz: the smallest number that can only be determined by more than thirteen words, minus the smallest number that can only be determined by more than thirteen words.
N.B.: the rule is impracticable, but we still know that NOPE = 0!
II 223
Problem/Genz: there cannot be a program that decides in finite time whether an arbitrary program ever stops. "Stopping problem"/"Non-stopping theorem"/Genz: the "stopping problem" is not a logical but a physical problem. It is impossible to perform infinitely many logical steps in finite time.
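The logical core of the argument can be sketched with the usual diagonal construction, here in Python purely as an illustration (halts is the hypothetical decider, stubbed only so that the sketch is well-formed):

    def halts(program, argument) -> bool:
        # Hypothetical total decider for the stopping problem; assumed, not real.
        raise NotImplementedError("no such decider exists")

    def diagonal(program):
        # If the decider claims `program` halts on its own code, loop forever;
        # otherwise halt immediately.
        if halts(program, program):
            while True:
                pass

    # Applying diagonal to itself defeats any candidate halts:
    #   halts(diagonal, diagonal) == True   would make diagonal(diagonal) loop forever,
    #   halts(diagonal, diagonal) == False  would make it halt at once.
    # Either answer is wrong, so no program decides stopping for all programs.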
Time travel/time reversal/time/decision problem/Genz: if time travel were possible, the stopping problem would only be valid to a limited extent.
>Time, >Time reversal, >Time arrow, >Symmetries.
II 224
Stopping problem/Platonism/Genz: in a platonic world where there are only logical steps instead of time, the non-stopping theorem would also be valid. The point here is the admissibility of proofs rather than their feasibility. >Proofs, >Provability.

Gz I
H. Genz
Gedankenexperimente Weinheim 1999

Gz II
Henning Genz
Wie die Naturgesetze Wirklichkeit schaffen. Über Physik und Realität München 2002

Decidability Hintikka I 7
Standard Semantics/Kripke Semantics/Hintikka: what differences are there? The gulf between them is much deeper than it first appears.
Cocchiarella: Cocchiarella has shown, however, that even in the simplest quantificational case, that of monadic predicate logic, the standard logic is radically different from its Kripkean cousin.
Decidability: monadic predicate logic is, as Kripke has shown, decidable.
Kripke semantics: Kripke semantics is undecidable.
Decidability: decidability implies axiomatizability.
I 208
Decision Problem/predicate calculus/Hao Wang: thesis: the problem corresponds to the task of completely filling the Euclidean plane with square dominoes of different sizes. At least one stone of each size must be used.
E.g. logical omniscience now comes in in the following way:
At certain points I can truthfully say according to my perception:
(5) I see that this Domino task is impossible to solve.
In other cases, I cannot say that truthfully.
>Logical omniscience.
Problem/HintikkaVsBarwise/HintikkaVsSituation Semantics/Hintikka: according to Barwise/Perry, it should be true of any unsolvable Domino problem that I see the unsolvability immediately as soon as I see the shapes of the available stones, because the unsolvability follows logically from the visual information.
Solution/semantics of possible worlds/Hintikka: according to the urn model there is no problem.
>Possible world semantics.
I 209
Omniscience/symmetry/Hintikka: situational semantics: situational semantics needs the urn model to solve the second problem of logical omniscience. Semantics of possible worlds: on the other hand, it needs situational semantics itself to solve the first problem.
>Situation semantics.

Hintikka I
Jaakko Hintikka
Merrill B. Hintikka
Investigating Wittgenstein
German Edition:
Untersuchungen zu Wittgenstein Frankfurt 1996

Hintikka II
Jaakko Hintikka
Merrill B. Hintikka
The Logic of Epistemology and the Epistemology of Logic Dordrecht 1989

Decidability Leibniz Berka I 329
Decision problem/Logic/Berka: appeared historically for the first time in Leibniz with the idea of a purely arithmetical "ars iudicandi". Behmann: (1922)(1): "The main problem of modern logic".
Ackermann: (1954)(2):
I. It is to be decided with exactly stated means whether a given formula of a (logical) calculus is universally valid.
II. If it is not universally valid, it is to be decided whether it is valid in no domain or whether it is valid in some domain. If it is valid in some domain, one must determine which cardinal number this domain has.
III. It is to be decided whether a given formula is valid in all domains with a finite number of elements or not."
Berka: this is a basically semantic formulation of the decision problem (E-problem).
E-problem/syntactic: it is to be decided, with the help of exactly defined procedures that have to fulfill certain conditions, whether a given formula of a calculus is provable or refutable.
Propositional calculus/E-problem: positively solved by Lukasiewicz (1921)(3), Post (1921)(4), Wittgenstein (1921)(5).


1. H. Behmann, Beiträge zur Algebra der Logik, insbesondere zum Entscheidungsproblem, Math. Ann. 86 (1922), 163-229
2. W. Ackermann, Solvable Cases of the Decision Problem, Amsterdam (3rd ed.) 1968
3. J. Lukasiewicz, Logika dwuwartosciowa, PF 23 (1921), 189-205
4. E. L. Post, Introduction to a general theory of elementary propositions, American Journal of Mathematics 43 (1921), 163-185
5. L. Wittgenstein, Logisch-Philosophische Abhandlung, Ann. Naturphil. 14 (1921), 185-262

Lei II
G. W. Leibniz
Philosophical Texts (Oxford Philosophical Texts) Oxford 1998


Berka I
Karel Berka
Lothar Kreiser
Logik Texte Berlin 1983
Decidability Logic Texts Hoyningen-Huene II 227
Decidability/undecidability/decision problem: propositional logic: is decidable and complete. Predicate logic: undecidable.
There is no mechanical method by which, for an arbitrary predicate-logical formula, it can be decided whether it is universally valid or not.
>Validity, >Proof.
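The contrast can be made concrete with a small Python sketch (an added illustration, not part of the Logic Texts): for propositional logic the mechanical method is the truth-table test, which terminates for every formula; no analogous procedure can exist for arbitrary predicate-logical formulas.

    from itertools import product

    def valid(formula, n_vars: int) -> bool:
        # Truth-table decision procedure: a propositional formula is universally
        # valid iff it is true under every assignment of truth values.
        return all(formula(*values) for values in product([False, True], repeat=n_vars))

    # Example: Peirce's law ((p -> q) -> p) -> p, with "a -> b" written as (not a) or b.
    peirce = lambda p, q: (not ((not ((not p) or q)) or p)) or p
    print(valid(peirce, 2))                  # True: universally valid
    print(valid(lambda p, q: p or q, 2))     # False: not universally valid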
Logic Texts
Me I Albert Menne Folgerichtig Denken Darmstadt 1988
HH II Hoyningen-Huene Formale Logik, Stuttgart 1998
Re III Stephen Read Philosophie der Logik Hamburg 1997
Sal IV Wesley C. Salmon Logic, Englewood Cliffs, New Jersey 1973 - German: Logik Stuttgart 1983
Sai V R.M.Sainsbury Paradoxes, Cambridge/New York/Melbourne 1995 - German: Paradoxien Stuttgart 2001
Decidability Lorenzen Berka I 267
Decision problem/recursion/recursiveness/dialogical logic/Lorenzen: if R(x,y) is a decision-definite statement form, (Ex) R(x,y) no longer needs to be decision-definite. Nevertheless, the assertion of such statements as

(1) (Ex) R(x,n)
does not need to trigger a senseless dispute!
It is natural, then, to agree that the person who asserts (1) is also obliged to give a number m such that (2) R(m,n) is true. If he cannot do this, he has "lost" his claim.(1)
>Dialogical logic/Lorenzen.

1. P. Lorenzen, Ein dialogisches Konstruktivitätskriterium, in: Infinitistic Methods, (1961), 193-200

Lorn I
P. Lorenzen
Constructive Philosophy Cambridge 1987


Berka I
Karel Berka
Lothar Kreiser
Logik Texte Berlin 1983
Decision Networks Norvig Norvig I 626
Decision Networks/influence diagrams/AI research/Norvig/Russell: Decision networks combine Bayesian networks with additional node types for actions and utilities. (Cf. Howard and Matheson, 1984(2)). In its most general form, a decision network represents information about the agent’s current state, its possible actions, the state that will result from the agent’s action, and the utility of that state. It therefore provides a substrate for implementing utility-based agents (…). E.g. the problem of the siting of an airport (>Multi-attribute utility/AI Research). Chance nodes: (…) represent random variables, just as they do in Bayesian networks. The agent could be uncertain about the construction cost, the level of air traffic and the potential for litigation, (…). Each chance node has associated with it a conditional distribution that is indexed by the state of the parent nodes. In decision networks, the parent nodes can include decision nodes as well as chance nodes.
Decision nodes: (…) represent points where the decision maker has a choice of
Norvig I 627
actions. The choice influences the cost, safety, and noise that will result. Utility nodes/value nodes: (…) represent the agent’s utility function. The utility node has as parents all variables describing the outcome that directly affect utility. Associated with the utility node is a description of the agent’s utility as a function of the parent attributes. >Information value/Norvig.
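A minimal sketch of how a decision network of this kind is evaluated, in Python; the attributes and all numbers are invented for illustration and are not taken from Russell/Norvig:

    # Decision node: the choice of airport site.
    actions = ["site_A", "site_B"]

    # Chance node "Litigation", conditioned on the decision node: P(litigation | site).
    p_litigation = {"site_A": 0.3, "site_B": 0.1}

    # Utility node: utility as a function of its parents (site, litigation outcome).
    def utility(site: str, litigation: bool) -> float:
        base = {"site_A": 100.0, "site_B": 80.0}[site]
        return base - (60.0 if litigation else 0.0)

    def expected_utility(site: str) -> float:
        p = p_litigation[site]
        return p * utility(site, True) + (1.0 - p) * utility(site, False)

    best = max(actions, key=expected_utility)
    print({a: expected_utility(a) for a in actions}, "->", best)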
Norvig I 638
Decision theory has been a standard tool in economics, finance, and management science since the 1950s. Until the 1980s, decision trees were the main tool used for representing simple decision problems. Smith (1988)(1) gives an overview of the methodology of decision analysis. Influence diagrams were introduced by Howard and Matheson (1984)(2), based on earlier work at SRI (Miller et al., 1976)(3). Howard and Matheson’s method involved the
Norvig I 639
derivation of a decision tree from a decision network, but in general the tree is of exponential size. Shachter (1986)(4) developed a method for making decisions based directly on a decision network, without the creation of an intermediate decision tree. This algorithm was also one of the first to provide complete inference for multiply connected Bayesian networks. Zhang et al. (1994)(5) showed how to take advantage of conditional independence of information to reduce the size of trees in practice; they use the term decision network for networks that use this approach (although others use it as a synonym for influence diagram). Nilsson and Lauritzen (2000)(6) link algorithms for decision networks to ongoing developments in clustering algorithms for Bayesian networks. Koller and Milch (2003)(7) show how influence diagrams can be used to solve games that involve gathering information by opposing players, and Detwarasiti and Shachter (2005)(8) show how influence diagrams can be used as an aid to decision making for a team that shares goals but is unable to share all information perfectly. The collection by Oliver and Smith (1990)(9) has a number of useful articles on decision networks, as does the 1990 special issue of the journal Networks.

1. Smith, J. Q. (1988). Decision Analysis. Chapman and Hall.
2. Howard, R. A. and Matheson, J. E. (1984). Influence diagrams. In Howard, R. A. and Matheson,
J. E. (Eds.), Readings on the Principles and Applications of Decision Analysis, pp. 721–762. Strategic
Decisions Group.
3. Miller, A. C., Merkhofer, M. M., Howard, R. A., Matheson, J. E., and Rice, T. R. (1976). Development of automated aids for decision analysis. Technical report, SRI International.
4. Shachter, R. D. (1986). Evaluating influence diagrams. Operations Research, 34, 871–882.
5. Zhang, N. L., Qi, R., and Poole, D. (1994). A computational theory of decision networks. IJAR, 11,
83–158.
6. Nilsson, D. and Lauritzen, S. (2000). Evaluating influence diagrams using LIMIDs. In UAI-00, pp. 436–445.
7. Koller, D. and Milch, B. (2003). Multi-agent influence diagrams for representing and solving games.
Games and Economic Behavior, 45, 181–221.
8. Detwarasiti, A. and Shachter, R. D. (2005). Influence diagrams for team decision analysis. Decision
Analysis, 2(4), 207–228.
9. Oliver, R. M. and Smith, J. Q. (Eds.). (1990). Influence Diagrams, Belief Nets and Decision Analysis.
Wiley.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010

Decision Theory AI Research Norvig I 638
Decision theory/AI research/Norvig/Russell: Decision theory has been a standard tool in economics, finance, and management science since the 1950s. Until the 1980s, decision trees were the main tool used for representing simple decision problems. Smith (1988)(1) gives an overview of the methodology of decision analysis. Influence diagrams were introduced by Howard and Matheson (1984)(2), based on earlier work at SRI (Miller et al., 1976)(3). Howard and Matheson’s method involved the
Norvig I 639
derivation of a decision tree from a decision network, but in general the tree is of exponential size. Shachter (1986)(4) developed a method for making decisions based directly on a decision network, without the creation of an intermediate decision tree. This algorithm was also one of the first to provide complete inference for multiply connected Bayesian networks. Zhang et al. (1994)(5) showed how to take advantage of conditional independence of information to reduce the size of trees in practice; they use the term decision network for networks that use this approach (although others use it as a synonym for influence diagram). Nilsson and Lauritzen (2000)(6) link algorithms for decision networks to ongoing developments in clustering algorithms for Bayesian networks. Koller and Milch (2003)(7) show how influence diagrams can be used to solve games that involve gathering information by opposing players, and Detwarasiti and Shachter (2005)(8) show how influence diagrams can be used as an aid to decision making for a team that shares goals but is unable to share all information perfectly. The collection by Oliver and Smith (1990)(9) has a number of useful articles on decision networks, as does the 1990 special issue of the journal Networks. >Decision networks/Norvig.
Norvig I 639
Surprisingly few early AI researchers adopted decision-theoretic tools after the early applications in medical decision (…). One of the few exceptions was Jerry Feldman, who applied decision theory to problems in vision (Feldman and Yakimovsky, 1974)(10) and planning (Feldman and Sproull, 1977)(11). After the resurgence of interest in probabilistic methods in AI in the 1980s, decision-theoretic expert systems gained widespread acceptance (Horvitz et al., 1988(12); Cowell et al., 2002)(13). >Expert systems/Norvig.
1. Smith, J. Q. (1988). Decision Analysis. Chapman and Hall.
2. Howard, R. A. and Matheson, J. E. (1984). Influence diagrams. In Howard, R. A. and Matheson,
J. E. (Eds.), Readings on the Principles and Applications of Decision Analysis, pp. 721–762. Strategic
Decisions Group.
3. Miller, A. C., Merkhofer, M. M., Howard, R. A., Matheson, J. E., and Rice, T. R. (1976). Development of automated aids for decision analysis. Technical report, SRI International.
4. Shachter, R. D. (1986). Evaluating influence diagrams. Operations Research, 34, 871–882.
5. Zhang, N. L., Qi, R., and Poole, D. (1994). A computational theory of decision networks. IJAR, 11,
83–158.
6. Nilsson, D. and Lauritzen, S. (2000). Evaluating influence diagrams using LIMIDs. In UAI-00, pp. 436–445.
7. Koller, D. and Milch, B. (2003). Multi-agent influence diagrams for representing and solving games.
Games and Economic Behavior, 45, 181–221.
8. Detwarasiti, A. and Shachter, R. D. (2005). Influence diagrams for team decision analysis. Decision
Analysis, 2(4), 207–228.
9. Oliver, R. M. and Smith, J. Q. (Eds.). (1990). Influence Diagrams, Belief Nets and Decision Analysis.
Wiley.
10. Feldman, J. and Yakimovsky, Y. (1974). Decision theory and artificial intelligence I: Semantics-based region analyzer. AIJ, 5(4), 349–371.
11. Feldman, J. and Sproull, R. F. (1977). Decision theory and artificial intelligence II: The hungry monkey.
Technical report, Computer Science Department, University of Rochester.
12. Horvitz, E. J., Breese, J. S., and Henrion, M. (1988). Decision theory in expert systems and artificial intelligence. IJAR, 2, 247–302.
13. Cowell, R., Dawid, A. P., Lauritzen, S., and Spiegelhalter, D. J. (2002). Probabilistic Networks and Expert Systems. Springer.


Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010
Dempster-Shafer Theory Norvig Norvig I 547
Dempster-Shafer Theory/AI Research/Norvig/Russell: uses interval-valued degrees of belief to represent an agent’s knowledge of the probability of a proposition.
Norvig I 549
The Dempster–Shafer theory is designed to deal with the distinction between uncertainty and ignorance. Rather than computing the probability of a proposition, it computes the probability that the evidence supports the proposition. This measure of belief is called a belief function, written Bel(X). The mathematical underpinnings of Dempster–Shafer theory have a similar flavor to those of probability theory; the main difference is that, instead of assigning probabilities to possible worlds, the theory assigns masses to sets of possible worlds, that is, to events. The masses still must add to 1 over all possible events. Bel(A) is defined to be the sum of masses for all events that are subsets of (i.e., that entail) A, including A itself. With this definition, Bel(A) and Bel(¬A) sum to at most 1, and the gap—the interval between Bel(A) and 1 − Bel(¬A)—is often interpreted as bounding the probability of A.
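A small Python sketch of these definitions; the frame of discernment and the mass assignment are invented for illustration:

    # Frame of discernment and a mass assignment over subsets (focal sets); masses sum to 1.
    frame = frozenset({"a", "b", "c"})
    mass = {
        frozenset({"a"}): 0.3,
        frozenset({"a", "b"}): 0.4,
        frame: 0.3,
    }

    def bel(event: frozenset) -> float:
        # Bel(A): sum of the masses of all focal sets that are subsets of A.
        return sum(m for s, m in mass.items() if s <= event)

    A = frozenset({"a", "b"})
    not_A = frame - A
    print(bel(A), bel(not_A), 1 - bel(not_A))
    # Bel(A) + Bel(not A) <= 1; the gap [Bel(A), 1 - Bel(not A)] bounds the probability of A.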

VsDempster-Shafer theory: Problems: As with default reasoning, there is a problem in connecting beliefs to actions. Whenever there is a gap in the beliefs, then a decision problem can be defined such that a Dempster–Shafer system is unable to make a decision. In fact, the notion of utility in the Dempster–Shafer model is not yet well understood because the meanings of masses and beliefs themselves have yet to be understood. Pearl (1988)(1) has argued that Bel(A) should be interpreted not as a degree of belief in A but as the probability assigned to all the possible worlds (now interpreted as logical theories) in which A is provable. While there are cases in which this quantity might be of interest, it is not the same as the probability that A is true. A Bayesian analysis of the coin-flipping example would suggest that no new formalism is necessary to handle such cases. The model would have two variables: the Bias of the coin (a number between 0 and 1, where 0 is a coin that always shows tails and 1 a coin that always shows heads) and the outcome of the next Flip. Cf. >Fuzzy Logic, >Vagueness/Philosophical theories, >Sorites/Philosophical theories.


1. Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010

Dempster-Shafer Theory Russell Norvig I 547
Dempster-Shafer Theory/AI Research/Norvig/Russell: uses interval-valued degrees of belief to represent an agent’s knowledge of the probability of a proposition.
Norvig I 549
The Dempster–Shafer theory is designed to deal with the distinction between uncertainty and ignorance. Rather than computing the probability of a proposition, it computes the probability that the evidence supports the proposition. This measure of belief is called a belief function, written Bel(X). The mathematical underpinnings of Dempster–Shafer theory have a similar flavor to those of probability theory; the main difference is that, instead of assigning probabilities to possible worlds, the theory assigns masses to sets of possible worlds, that is, to events. The masses still must add to 1 over all possible events. Bel(A) is defined to be the sum of masses for all events that are subsets of (i.e., that entail) A, including A itself. With this definition, Bel(A) and Bel(¬A) sum to at most 1, and the gap—the interval between Bel(A) and 1 − Bel(¬A)—is often interpreted as bounding the probability of A.

VsDempster-Shafer theory: Problems: As with default reasoning, there is a problem in connecting beliefs to actions. Whenever there is a gap in the beliefs, then a decision problem can be defined such that a Dempster–Shafer system is unable to make a decision. In fact, the notion of utility in the Dempster–Shafer model is not yet well understood because the meanings of masses and beliefs themselves have yet to be understood. Pearl (1988)(1) has argued that Bel(A) should be interpreted not as a degree of belief in A but as the probability assigned to all the possible worlds (now interpreted as logical theories) in which A is provable. While there are cases in which this quantity might be of interest, it is not the same as the probability that A is true. A Bayesian analysis of the coin-flipping example would suggest that no new formalism is necessary to handle such cases. The model would have two variables: the Bias of the coin (a number between 0 and 1, where 0 is a coin that always shows tails and 1 a coin that always shows heads) and the outcome of the next Flip.
Cf. >Fuzzy Logic, >Vagueness/Philosophical theories, >Sorites/Philosophical theories.

1. Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.

Russell I
B. Russell/A.N. Whitehead
Principia Mathematica Frankfurt 1986

Russell II
B. Russell
The ABC of Relativity, London 1958, 1969
German Edition:
Das ABC der Relativitätstheorie Frankfurt 1989

Russell IV
B. Russell
The Problems of Philosophy, Oxford 1912
German Edition:
Probleme der Philosophie Frankfurt 1967

Russell VI
B. Russell
"The Philosophy of Logical Atomism", in: B. Russell, Logic and KNowledge, ed. R. Ch. Marsh, London 1956, pp. 200-202
German Edition:
Die Philosophie des logischen Atomismus
In
Eigennamen, U. Wolf (Hg) Frankfurt 1993

Russell VII
B. Russell
On the Nature of Truth and Falsehood, in: B. Russell, The Problems of Philosophy, Oxford 1912 - Dt. "Wahrheit und Falschheit"
In
Wahrheitstheorien, G. Skirbekk (Hg) Frankfurt 1996


Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010
Framing Effect Norvig Norvig I 621
Framing Effect/decisions/AI research/Norvig/Russell: the exact wording of a decision problem can have a big impact on the agent’s choices; this is called the framing effect. Experiments show that people like a medical procedure that is described as having a “90% survival rate” about twice as much as one described as having a “10% death rate,” even though these two statements mean exactly the same thing. This discrepancy in judgment has been found in multiple experiments and is about the same whether the subjects were patients in a clinic, statistically sophisticated business school students, or experienced doctors. >Ellsberg paradox/Norvig, >Allais paradox/Norvig, >Rationality/AI research, >Preferences/Norvig, >Ambiguity/Kahneman/Tversky, >Anchoring effect/Norvig, >Utility/AI research.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010

Functional Calculus Berka Berka I 119
Extended functional calculus/Hilbert: the extended functional calculus is used, e.g., to express the existence of the opposite of a statement: for every statement X there is a statement Y such that at least one and at most one of the two is true. This spares us the recourse to a contentual mode of presentation.
>Formalism, >Statements, >Validity, >Satisfiability.
I 120
Then we can ask for a criterion for the correctness of formulas with arbitrary combinations of universal and existential quantifiers. >Universal quantification, >Existential quantification, >Quantification.
Then there is the possibility in principle of deciding the provability of a mathematical theorem.
>Decidability, >Provability, >Proofs.
Narrow function calculus: The narrow function calculus is sufficient for the formalization of logical reasoning.
>Formalization.
Berka I 337
Functional calculus/Hilbert/Ackermann: here (in contrast to the propositional calculus) the decision problem is still unsolved and difficult. - But for certain simple cases a procedure could be given. Simplest case: only function variables with one argument.
>Decision problem, >Propositional calculus.
I 337
Functional calculus: here the following circumstance has to be considered in particular: the universal validity or satisfiability of a logical expression may depend on how large the number of objects in the individual domain is. >Individual domain, >Domain.

Berka I
Karel Berka
Lothar Kreiser
Logik Texte Berlin 1983

Multi-attribute Utility AI Research Norvig I 622
Multi-attribute Utility/AI research/Norvig/Russell: Decision making in the field of public policy involves high stakes, in both money and lives. For example (…) [s]iting a new airport requires consideration of the disruption caused by construction; the cost of land; the distance from centers of population; the noise of flight operations; safety issues arising from local topography and weather conditions; and so on. Problems like these, in which outcomes are characterized by two or more attributes, are handled by multi-attribute utility theory.
Norvig I 624
Preferences: Suppose we have n attributes, each of which has d distinct possible values. To specify the complete utility function U(x1, . . . , xn), we need d^n values in the worst case. Now, the worst case corresponds to a situation in which the agent’s preferences have no regularity at all. Multiattribute utility theory is based on the supposition that the preferences of typical agents have much more structure than that. The basic regularity that arises in deterministic preference structures is called preference independence. Two attributes X1 and X2 are preferentially independent of a third attribute X3 if the preference between outcomes (x1,x2,x3) and (x’1, x’2, x3) does not depend on the particular value x3 for attribute X3. E.g. one may propose that Noise and Cost are preferentially independent
Norvig I 625
of Deaths. We say that the set of attributes {Noise, Cost, Deaths} exhibits mutual preferential independence (MPI). MPI says that, whereas each attribute may be important, it does not affect the way in which one trades off the other attributes against each other. Uncertainty: (see Keeney and Raiffa (1976)(1)). When uncertainty is present in the domain, we also need to consider the structure of preferences between lotteries and to understand the resulting properties of utility functions, rather than just value functions.
Norvig I 626
The basic notion of utility independence extends preference independence to cover lotteries: a set of attributes X is utility independent of a set of attributes Y if preferences between lotteries on the attributes in X are independent of the particular values of the attributes in Y. A set of attributes is mutually utility independent (MUI) if each of its subsets is utility-independent of the remaining attributes. Again, it seems reasonable to propose that the airport attributes are MUI. MUI implies that the agent’s behavior can be described using a multiplicative utility function (Keeney, 1974)(2). >Decision Networks/Norvig, >Information value/Norvig.
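Schematically, and hedged as a summary of the standard forms in Keeney and Raiffa rather than of the passage above (the k_i are scaling constants to be elicited):

    % mutual preferential independence (deterministic case): additive value function
    V(x_1, \dots, x_n) = \sum_{i=1}^{n} V_i(x_i)

    % mutual utility independence, three attributes: multiplicative utility function
    U = k_1 U_1 + k_2 U_2 + k_3 U_3
        + k_1 k_2 U_1 U_2 + k_2 k_3 U_2 U_3 + k_3 k_1 U_3 U_1
        + k_1 k_2 k_3 U_1 U_2 U_3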
Norvig I 638
Keeney and Raiffa (1976)(1) give a thorough introduction to multi-attribute utility theory. They describe early computer implementations of methods for eliciting the necessary parameters for a multi-attribute utility function and include extensive accounts of real applications of the theory. In AI, the principal reference for MAUT is Wellman’s (1985)(3) paper, which includes a system called URP (Utility Reasoning Package) that can use a collection of statements about preference independence and conditional independence to analyze the structure of decision problems.
1. Keeney, R. L. and Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and Value radeoffs. Wiley.
2. Keeney, R. L. (1974). Multiplicative utility functions. Operations Research, 22, 22–34.
3. Wellman, M. P. (1985). Reasoning about preference models. Technical report MIT/LCS/TR-340, Laboratory for Computer Science, MIT.


Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010
Sequential Decision Making Norvig Norvig I 645
Sequential Decision Making/AI research/Norvig/Russell: [this is about] the computational issues involved in making decisions in a stochastic environment. Sequential decision problems incorporate utilities, uncertainty, and sensing, and include search and planning problems as special cases. >Planning/Norvig, >Decision networks/Norvig, >Decision theory/AI Research, >Utility/AI Research, >Utility theory/Norvig, >Environment/AI research, >Multi-attribute utility theory/AI research.
Norvig I 649
Optimal policy: the optimal policy for a finite horizon is non-stationary. With no fixed time limit, on the other hand, there is no reason to behave differently in the same state at different times. Hence, the optimal action depends only on the current state, and the optimal policy is stationary. States: In the terminology of multi-attribute utility theory, each state si can be viewed as an attribute of the state sequence [s0, s1, s2 . . .]. >Values/AI research.
Norvig I 684
Sequential decision problems in uncertain environments, also called Markov decision processes, or MDPs, are defined by a transition model specifying the probabilistic outcomes of actions and a reward function specifying the reward in each state.
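In the compact notation that is standard in the literature (a schematic summary, not a formula quoted from the text; the discount factor gamma is included because discounting is discussed below):

    \text{MDP} = \langle S,\; A,\; P(s' \mid s, a),\; R(s),\; \gamma \rangle
    % S: states, A: actions, P: transition model, R: reward function, gamma: discount factor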
Norvig I 685
Richard Bellman developed the ideas underlying the modern approach to sequential decision problems while working at the RAND Corporation beginning in 1949. (…) Bellman’s book, Dynamic Programming (1957)(1), gave the new field a solid foundation and introduced the basic algorithmic approaches. Ron Howard’s Ph.D. thesis (1960)(2) introduced policy iteration and the idea of average reward for solving infinite-horizon problems. Several additional results were introduced by Bellman and Dreyfus (1962)(3). Modified policy iteration is due to van Nunen (1976)(4) and Puterman and Shin (1978)(5). Asynchronous policy iteration was analyzed by Williams and Baird (1993)(6) (…). The analysis of discounting in terms of stationary preferences is due to Koopmans (1972)(7). The texts by Bertsekas (1987)(8), Puterman (1994)(9), and Bertsekas and Tsitsiklis (1996)(10) provide a rigorous introduction to sequential decision problems. Papadimitriou and Tsitsiklis (1987)(11) describe results on the computational complexity of MDPs. Seminal work by Sutton (1988)(12) and Watkins (1989)(13) on reinforcement learning methods for solving MDPs played a significant role in introducing MDPs into the AI community, as did the later survey by Barto et al. (1995)(14). >Markov Decision Processes/Norvig.


1. Bellman, R. E. (1957). Dynamic Programming. Princeton University Press
2. Howard, R. A. (1960). Dynamic Programming and Markov Processes. MIT Press.
3. Bellman, R. E. and Dreyfus, S. E. (1962). Applied Dynamic Programming. Princeton University Press.
4. van Nunen, J. A. E. E. (1976). A set of successive approximation methods for discounted Markovian decision problems. Zeitschrift fur Operations Research, Serie A, 20(5), 203–208.
5. Puterman, M. L. and Shin, M. C. (1978). Modified policy iteration algorithms for discounted Markov decision problems. Management Science, 24(11), 1127-1137.
6. Williams, R. J. and Baird, L. C. I. (1993). Tight performance bounds on greedy policies based on imperfect value functions. Tech. rep. NU-CCS-93-14, College of Computer Science, Northeastern University.
7. Koopmans, T. C. (1972). Representation of preference orderings over time. In McGuire, C. B. and Radner, R. (Eds.), Decision and Organization. Elsevier/North-Holland.
8. Bertsekas, D. (1987). Dynamic Programming: Deterministic and Stochastic Models. Prentice-Hall.
9. Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley
10. Bertsekas, D. and Tsitsiklis, J. N. (1996). Neurodynamic programming. Athena Scientific.
11. Papadimitriou, C. H. and Tsitsiklis, J. N. (1987). The complexity of Markov decision processes.
Mathematics of Operations Research, 12(3), 441-450.
12. Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning,
3, 9-44.
13. Watkins, C. J. (1989). Models of Delayed Reinforcement Learning. Ph.D. thesis, Psychology Department, Cambridge University.
14. Barto, A. G., Bradtke, S. J., and Singh, S. P. (1995). Learning to act using real-time dynamic programming. AIJ, 73(1), 81-138.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010

Strength of Theories Hintikka II 7
Standard Semantics/Kripke Semantics/Hintikka: what differences are there? The gulf between standard semantics and Kripke semantics is much deeper than it first appears. Cocchiarella: Cocchiarella has shown, however, that even in the simplest quantificational case of monadic predicate logic, the standard logic is radically different from its Kripkean cousin.
Decidability: monadic predicate logic is, as Kripke has shown, decidable.
Kripke semantics: Kripke semantics is undecidable.
Decidability: decidability implies axiomatizability.
Stronger/weaker/Hintikka: as soon as we go beyond monadic predicate logic, we have a logic of considerable strength, complexity, and unruliness.
Quantified standard modal logic of the 1. level/Hintikka: the quantified standard modal logic of the 1. level is in a sense more powerful than the 2. level logic (with standard semantics). The latter is, of course, already very strong, so that some of the most difficult unresolved logical and set-theoretical problems can be expressed in terms of logical truth (or satisfiability) of logical formulas of the second level.
Def equally strong/stronger/weaker/Hintikka: (here): the terms "stronger" and "weaker" are used with respect to the difficulty of the corresponding decision problem.
Decision problem: the decision problem for the standard logic of the 2. level can be reduced to that for quantified standard modal logic of the 1. level.
Reduction: this reduction is weaker than translatability.
II 9
Quantified standard modal logic of the 1. level/Hintikka: this logic is very strong, comparable in strength with the 2. level logic. It follows that it is not axiomatizable (HintikkaVsKripke). The stronger a logic is, the less manageable it is.
II 28
Branching Quantifiers/stronger/weaker/Hintikka: E.g. branching here:
1. Branch: there is an x and b knows...
2. Branch: b knows there is an x ...
Quantification with branched quantifiers is extremely strong, almost as strong as 2. level logic.
Therefore, it cannot be completely axiomatized (quantified epistemic logic with unlimited independence).
II 29
Variant: variants are simpler cases where the independence refers to ignorance, combined with a move with a single, non-negated operator {b} K. Here, an explicit treatment is possible.
II 118
Seeing/stronger/weaker/logical form/Hintikka: a) stronger: recognizing, recognizing as, seeing as.
b) weaker: to look at, to let one's glance rest on, etc.
Weaker/logical form/seeing/knowing/Hintikka: e.g.
(Perspective, "Ex")
(15) (Ex) ((x = b) & (Ey) John sees that (x = y)).
(16) (Ex)(x = b & (Ey) John remembers that x = y))
(17) (Ex)(x = b & (Ey) KJohn (x = y))
Acquaintance/N.B.: in (17) b can be John's acquaintance even if John does not know b as b! ((S) because of y).
II 123
Everyday Language/ambiguity/Hintikka: the following expression is ambiguous:
(32) I see d
Stronger: (33) (Ex) I see that (d = x)
That says the same as (31) if the information is visual. Or, weaker:
(34) (Ex) (d = x & (Ey) I see that (x = y))
This is the most natural translation of (32).
Weaker: for the truth of (34) it is enough that my eyes simply rest on the object d. I do not need to recognize it as d.

Hintikka I
Jaakko Hintikka
Merrill B. Hintikka
Investigating Wittgenstein
German Edition:
Untersuchungen zu Wittgenstein Frankfurt 1996

Hintikka II
Jaakko Hintikka
Merrill B. Hintikka
The Logic of Epistemology and the Epistemology of Logic Dordrecht 1989

Universal Validity Gödel Berka I 314
Universal Validity/Goedel: universal validity leads to universal quantification: for formulas with free individual variables A(x,y,...w) this means the universal validity of (x)(y)...(w) A(x,y,...w). >Universal quantification, >Quantification, >Existential quantification.
Def Satisfiability/Goedel: "satisfiability" leads to >existential quantification. ((s) "there is a model".)
This is then correspondingly the satisfiability of (Ex)(Ey)...(Ew) A. Then one can say: "A is universally valid" means: "~A is not satisfiable".
>Satisfaction, >Satisfiability.
Refutability: refutability is the provability of the negation.
>Negation, >Proofs, >Provability.
I 310
Provability/universal validity/Goedel: ... here we have proved the equivalence between "universally valid" and "provable". Uncountable/Goedel: N.B.: this equivalence contains a reduction of the uncountable to the countable for the decision problem, because "universally valid" refers to the uncountable totality of the functions, while "provable" presupposes only the countable totality of the proof figures.(1) >Decision problem, >Countability.
1. K. Gödel: Die Vollständigkeit der Axiome des logischen Funktionenkalküls, in: Mh. Math. Phys. 37 (1930), pp. 349-360.

Göd II
Kurt Gödel
Collected Works: Volume II: Publications 1938-1974 Oxford 1990


Berka I
Karel Berka
Lothar Kreiser
Logik Texte Berlin 1983
Values AI Research Norvig I 645
Values/utility/decision theory/AI research/Norvig/Russell: [this is about] making decisions in a stochastic environment. Sequential decision problems incorporate utilities, uncertainty, and sensing, and include search and planning problems as special cases.
Norvig I 652
Bellman equation for utilities: (…)there is a direct relationship between the utility of a state and the utility of its neighbors: the utility of a state is the immediate reward for that state plus the expected discounted utility of the next state, assuming that the agent chooses the optimal action. Richard Bellman (1957)(1). The Bellman equation is the basis of the value iteration algorithm for solving MDPs (Markov decision processes). If there are n possible states, then there are n Bellman equations, one for each state. The n equations contain n unknowns - the utilities of the states.
Problem: the equations are nonlinear, because the “max” operator is not a linear operator. Whereas systems of linear equations can be solved quickly using linear algebra techniques, systems of nonlinear equations are more problematic.
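Written out, the relationship described above is the Bellman equation (R the reward, gamma the discount factor, P(s'|s,a) the transition model; this is the standard form, matching the verbal statement above):

    U(s) = R(s) + \gamma \, \max_{a \in A(s)} \sum_{s'} P(s' \mid s, a)\, U(s')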
Norvig I 654
Value iteration: (…) value iteration eventually converges to a unique set of solutions of the Bellman equations. Contraction: a contraction is a function of one argument that, when applied to two different inputs in turn, produces two output values that are “closer together,” by at least some constant factor, than the original inputs. For example, the function “divide by two” is a contraction, because, after we divide any two numbers by two, their difference is halved. Notice that the “divide by two” function has a fixed point, namely zero that is unchanged by the application of the function.
Norvig I 656
Policy iteration: (…) it is possible to get an optimal policy even when the utility function estimate is inaccurate. If one action is clearly better than all others, then the exact magnitude of the utilities on the states involved need not be precise. The policy iteration algorithm alternates (…) two steps, policy evaluation and policy improvement. The algorithm terminates when the policy improvement step yields no change in the utilities. >Game theory/AI research.
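A minimal value-iteration sketch in Python; the toy states, rewards, transition model and discount factor are invented for illustration and are not taken from the text:

    GAMMA, EPS = 0.9, 1e-6
    states = ["s0", "s1"]
    actions = ["stay", "go"]
    R = {"s0": 0.0, "s1": 1.0}                       # reward per state
    P = {                                            # P[(s, a)] = [(next_state, prob), ...]
        ("s0", "stay"): [("s0", 1.0)],
        ("s0", "go"):   [("s1", 0.8), ("s0", 0.2)],
        ("s1", "stay"): [("s1", 1.0)],
        ("s1", "go"):   [("s0", 1.0)],
    }

    U = {s: 0.0 for s in states}                     # initial utility estimates
    while True:
        delta = 0.0
        for s in states:
            best = max(sum(p * U[s2] for s2, p in P[(s, a)]) for a in actions)
            new = R[s] + GAMMA * best                # Bellman update
            delta = max(delta, abs(new - U[s]))
            U[s] = new
        if delta < EPS * (1 - GAMMA) / GAMMA:        # standard stopping criterion
            break
    print(U)                                         # converged utility estimates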

1. Bellman, R. E. (1957). Dynamic Programming. Princeton University Press.


Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010
Values Norvig Norvig I 645
Values/utility/decision theory/AI research/Norvig/Russell: [this is about] making decisions in a stochastic environment. Sequential decision problems incorporate utilities, uncertainty, and sensing, and include search and planning problems as special cases.
Norvig I 652
Bellman equation for utilities: (…)there is a direct relationship between the utility of a state and the utility of its neighbors: the utility of a state is the immediate reward for that state plus the expected discounted utility of the next state, assuming that the agent chooses the optimal action. Richard Bellman (1957)(1). The Bellman equation is the basis of the value iteration algorithm for solving MDPs (Markov decision processes). If there are n possible states, then there are n Bellman equations, one for each state. The n equations contain n unknowns - the utilities of the states.
Problem: the equations are nonlinear, because the “max” operator is not a linear operator. Whereas systems of linear equations can be solved quickly using linear algebra techniques, systems of nonlinear equations are more problematic.
Norvig I 654
Value iteration: (…) value iteration eventually converges to a unique set of solutions of the Bellman equations. Contraction: a contraction is a function of one argument that, when applied to two different inputs in turn, produces two output values that are “closer together,” by at least some constant factor, than the original inputs. For example, the function “divide by two” is a contraction, because, after we divide any two numbers by two, their difference is halved. Notice that the “divide by two” function has a fixed point, namely zero that is unchanged by the application of the function.
Norvig I 656
Policy iteration: (…) it is possible to get an optimal policy even when the utility function estimate is inaccurate. If one action is clearly better than all others, then the exact magnitude of the utilities on the states involved need not be precise. The policy iteration algorithm alternates (…) two steps, policy evaluation and policy improvement. The algorithm terminates when the policy improvement step yields no change in the utilities. >Game theory/AI research.

1. Bellman, R. E. (1957). Dynamic Programming. Princeton University Press.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010


The author or concept searched is found in the following 3 controversies.
Barwise, J. Hintikka Vs Barwise, J. II 207
Situation Semantics/Barwise/Perry/B/P/Omniscience/Hintikka: how can it solve the problem of logical omniscience? B/P: give the following example:
(1) a sees how b X-es
therefore (2) a sees how b Y-es
if X-ing logically implies Y-ing. ((s) E.g. walking implies moving).
Problem/(s): much more follows from this, of which one cannot always assume a) that it is seen, b) that it is known.
Solution/B/P: assume that there are richer and poorer situations and relations between them.
HintikkaVsBarwise/HintikkaVsSituation Semantics/Hintikka: but that’s not a triumph over the possible world semantics, for two reasons:
1) because it is now about the relation fine-grained/coarse-grained ((s) of the description), it is nothing that possible worlds semantics has to deal with.
2) The semantics of possible worlds has solved the problem with Rantala urn models (see above changing possible worlds).
B/P: they consider only cases of omniscience that arise in the wake of introducing new descriptive terms in the conclusion
II 208
and go beyond what is mentioned in the premises. Hintikka/Rantala: we both have seen cases that require the introduction of new individuals to ensure the validity of the inference.
E.g.
(3) Robert saw someone giving every boy his own book.
(4) Robert saw every boy as he was given a book by someone.
Question: does (3) logically entail (4)?
Situation Semantics/B/P: according to it, it does.
Semantics of Possible Worlds/Hintikka: according to it, this is at least questionable.
Decision Problem/Predicate Calculus/Hao Wang: Thesis: it corresponds to the task of filling the Euclidean plane with square dominoes of different sizes without leaving gaps.
At least one piece of every size must be used.
E.g. logical omniscience now comes in as follows:
At certain points, I can say truthfully according to my perception:
(5) I see that this domino task is impossible to solve.
In other cases I cannot truthfully say that.
Problem/HintikkaVsBarwise/HintikkaVsSituation Semantics/Hintikka: according to B/P it should be true of any unsolvable domino problem that I see the insolubility as soon as I see the shapes of the available stones, because the insolubility follows logically from the visual information.
Solution/Semantics of Possible Worlds/Hintikka: according to the urn model, there is no problem.
II 209
Omniscience/Symmetry/Hintikka: Situation Semantics: needs the urn model to solve the second problem of logical omniscience Semantics of possible worlds: needs situations semantics in turn to solve the first problem.
II 211
HintikkaVsBarwise/HintikkaVsSituation Semantics/Hintikka: one can find many problems that are solved by possible worlds semantics but not by situation semantics. Opacity/Hintikka: besides the opacity that is understood as the failure of substitutivity (of identity), there is one that is understood as the failure of existential generalization (even if it is about non-existence) (see above).
Questions/Hintikka: We still need a semantics for direct questions along with criteria for complete answers. (see below, see above).
Direct object: can also be an event or a particular.
Problem: Questions that contain a (external) quantifier.
Problem: semantics for questions with T-constructions with epistemic verbs.
Question: Why are W-constructions not found under the relevant verbs?
II 212
HintikkaVsSituation Semantics/HintikkaVsBarwise/Hintikka: Barwise and Perry introduce a "function c" (p 671): this seems obscure: Semantics/Hintikka: intended to provide a model that shows how speakers can refer to anything they want and can mean what they mean.
Function/Semantics of Possible Worlds: here, the speaker or the listener detects a function of possible worlds on speakers.
Situation Semantics/B/P: explains meaning from facts of reference-in-situation: "... a component implicitly represents the connections c between certain words and things in the world in the meaningful use of these words".
HintikkaVsBarwise/HintikkaVsSituation Semantics: it should be the reverse: a realistic theory of meaning and reference should show how such a function c is determined by the meanings. For understanding means grasping the c determined by the meanings.

Hintikka I
Jaakko Hintikka
Merrill B. Hintikka
Investigating Wittgenstein
German Edition:
Untersuchungen zu Wittgenstein Frankfurt 1996

Hintikka II
Jaakko Hintikka
Merrill B. Hintikka
The Logic of Epistemology and the Epistemology of Logic Dordrecht 1989
Church, A. Lorenzen Vs Church, A. Berka I 266
Church thesis/Lorenzen: the thesis is an equating of "constructive" with "recursive". ((s) So are all structures recursively possible? Or: there is only one recursive structure. (Slightly different meaning).)
LorenzenVsChurch: this view is too narrow: it no longer allows the free use of quantification over the natural numbers.
I 267
Decision Problem/ChurchVsLorenzen: (according to Lorenzen): Advantage: greater clarity: when one restricts oneself to recursive statement forms, there can never arise a dispute as to whether one of the admitted statements is true or false. The definition of recursiveness guarantees precisely decision-definiteness, that is, the existence of a decision procedure.(1)

1. P. Lorenzen, Ein dialogisches Konstruktivitätskriterium, in: Infinitistic Methods, (1961), 193-200

Lorn I
P. Lorenzen
Constructive Philosophy Cambridge 1987

Berka I
Karel Berka
Lothar Kreiser
Logik Texte Berlin 1983
Kripke, S. A. Hintikka Vs Kripke, S. A. II XIII
Possible Worlds/Semantics/Hintikka: the term is misleading. (Began in the late 50s). Kripke Semantics/HintikkaVsKripke: is not a viable model for the theory of logical rules (logical necessity and logical possibility). (Essay 1).
Problem: the correct logic cannot be axiomatized.
Solution: interpreting Kripke semantics as non-standard semantics,
II XIV
in the sense of Henkin’s non-standard interpretation of higher-level logic, while the correct semantics for logical modalities would be analogous to a standard interpretation. Possible Worlds/HintikkaVsQuine: we do not have to give them up entirely, but there will probably never be a complete theory. My theory is related to Kant.
I call them "epistemology of logic".
II XVI
Cross World Identity/Hintikka: Quine: considers it a hopeless problem
HintikkaVsKripke: he underestimates the problem and takes it for granted. He cheats.
World Line/Cross World Identity/Hintikka: 1) We need to allow that some objects in certain possible worlds not only fail to exist, but that their existence there is even unthinkable! I.e. world lines can cease to exist - what is more: it may be that they are not defined in certain possible worlds.
Problem: in the usual knowledge logic (logic of belief) this is not permitted.
2) world lines can be drawn in two ways:
a) object-centered
b) agent-centered. (Essay 8).
Analogy: this can be related to Russell’s distinction between knowledge by acquaintance and by description. (Essay 11)
II 2
Kripke Semantics/Modal Logic/Logical Possibility/Logical Necessity/HintikkaVsKripke/HintikkaVsKripke Semantics: Problem: if we interpret the operators N, P so that they express logical modalities, Kripke semantics is inadequate: for logical possibility and necessity we need more than an arbitrary selection of possible worlds. We need truth in every logically possible world. But Kripke semantics does not require all such logically possible worlds to be included in the set of alternatives. ((s) I.e. there may be logically possible worlds that are not considered.) (See below: logical possibility forms the broadest category of options.)
Problem: Kripke semantics is therefore inadequate for logical modalities.
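For comparison, a minimal sketch in standard notation (the relational clause is the textbook formulation of Kripke semantics, not Hintikka's own wording): necessity evaluated at w0 quantifies only over the R-accessible alternatives, whereas logical necessity would require truth in every logically possible world, whether or not it belongs to the frame.
% Standard relational truth clause for the necessity operator N at w0:
\[
  M, w_0 \models N\varphi
  \iff
  \forall w \in W\, (w_0 R w \rightarrow M, w \models \varphi)
\]
% Logical necessity would instead demand truth in every logically possible
% world, including worlds that do not occur in W or among the alternatives to w0.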
Modal Logic/Hintikka: the historically earliest purpose for which it was developed was precisely dealing with logical modalities. This was the purpose for which the Lewis systems were developed.
HintikkaVsKripke: Kripke semantics does not merely have a skeleton in the closet; the skeleton haunts the entire house.
Equivalence Relation/Hintikka: if R is required to be reflexive, symmetrical, and transitive, this does not provide the solution: it still does not guarantee that all logically possible worlds are contained in the set. It can (possibly together with connectedness) only guarantee that w0 has as its alternatives a maximal set of worlds that are, so to speak, already in F.
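A short set-theoretic gloss on why the equivalence-relation requirement does not help (my illustration, not Hintikka's notation): an equivalence relation only partitions the worlds already in the frame.
% With R an equivalence relation on the frame's set of worlds W, the
% alternatives to w0 are exactly the equivalence class of w0 within W:
\[
  \{\, w \in W : w_0 R w \,\} = [w_0]_R \subseteq W
\]
% Nothing guarantees that W itself contains every logically possible world,
% so "truth in all logically possible worlds" is still not captured.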
II 3
KripkeVsVs/Hintikka: It could be argued that this does not yet show that Kripke semantics is wrong; it just needs to be reinforced. E.g. Nino Cocchiarella: additional condition: all models (in the usual 1st-order sense) with the same domain of individuals do(w0) must occur among the alternative possible worlds to w0. ((s) No new individuals may be added or removed with regard to the original possible world w0.)
Hintikka: technically it is of course possible.
"Old": (= Kripke semantics): non-standard semantics.
new: F must include all models that have the same individuals domain do(w0) of well-defined individuals as w0.
Individual/Individuals/Modal/Hintikka: an individual must be well-defined, but it does not have to exist! ((s) I.e. it can then be expressed that it is missing, e.g. that the hero has no sister in some possible world.)
Domain of Individuals: the domain of individuals of each possible world is then a subset of the domain D.
II 4
HintikkaVs: Problem: this is an unrealistic interpretation: such a flexible approach allows individuals that are not well-defined. Then there is no point in asking whether such an individual exists or not. Fusion/Fission: a flexible semantics must also allow fission and fusion between one possible world and another.
Def Well-Defined/Individual/Hintikka: an individual is well-defined, if it can be singled out by name at a node of the world line.
World Line: can link non-existent incarnations of individuals, as long as they are well-defined for all possible worlds in which a node of the world line can be located.
Truth Conditions: are then simple: (Ex) p(x) is true in w iff there is an individual there, e.g. one named z, such that p(z) is true in w.
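Spelled out in the obvious notation (a reconstruction of the clause just stated; do(w) is here used, as above, for the set of well-defined individuals of w):
% Truth condition for the existential quantifier in the fixed-domain setting:
\[
  M, w \models (\exists x)\, p(x)
  \iff
  \text{there is a } z \in do(w) \text{ such that } M, w \models p(z)
\]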
Modal Semantics/Hintikka: about a semantics so defined (the new one) a lot can be said:
Kripke Semantics/Hintikka: corresponds to a non-standard semantics, while the "new" semantics (with a fixed domain of individuals) corresponds to a standard semantics. (For higher-order logic).
Standard Semantics/higher level: we get this by demanding that the higher-level quantifiers range over all extensionally possible entities of the appropriate logical type (higher than individuals), just as the quantifiers in the standard semantics for modal logic should range over all extensionally possible worlds.
This is a parallelism that is even stronger than an analogy:
Decision problem: that of 2nd-order logic is reduced to that of quantified 1st-order standard modal logic.
"Standard" does the same job in the latter case as in the former.
Quantified 1st Order Standard Modal Logic/Hintikka: all of this leads to this logic being very strong, comparable in strength with 2nd order logic. It follows that it is not axiomatizable. (see above HintikkaVsKripke).
The stronger a logic, the less manageable it is.
II 12
Kripke/Hintikka: has avoided epistemic logic and the logic of propositional attitudes and focused on pure modalities. It is therefore strange that he uses a non-standard semantics.
But somehow it seems to be clear to him that this is not possible for logical modalities.
Metaphysical Possibility/Kripke/HintikkaVsKripke: has never explained what these mystical possibilities actually are.
II 13
Worse: he has not shown that they are so restrictive that he can use his extremely liberal non-standard semantics.
II 77
Object/Thing/Object/Kripke/Hintikka: Kripke Thesis: the existence of permanent (enduring) objects must simply be presupposed as a basic concept. HintikkaVsKripke: this requirement is not well founded. Perhaps the criteria of identification and identity need to be presupposed only for traditional logic and logical semantics. But that does not mean that the problem of identification has not been an enduring problem for philosophers.
II 84
KripkeVsHintikka: Problem: the solutions of these differential equations need not be analytic functions, or ones that allow an explicit definition of the objects. Hintikka: it seems, however, that Kripke presupposes that one must always be able to define the relations embodied by the world lines.
HintikkaVsKripke: that is too strict.
World Line: we allow instead that they are implicitly defined by the solutions of the differential equations.
II 86
HintikkaVsKripke: our model makes it possible not to presuppose objects as guaranteed, as Kripke does. ((s) It may be that a curve is not closed in a given time section.)
II 116
Cross World Identity/Rigidity/HintikkaVsKripke: it is more about the mode of identification (public/perspectival, see above) than about rigidity or non-rigidity. The mode of identification decides what counts as one and the same individual.
HintikkaVsKripke: his concept of rigidity is tacitly based on Russell’s concept of the logically proper name. But there is no distinguished class of rigidly designating expressions.
Proper Names/Names/HintikkaVsKripke: are not always rigid. E.g. it may be that I do not know to whom the name N.N. refers. Then I have different epistemic alternatives with different references. Therefore, it makes sense to ask "Who is N.N.?".
Public/Perspective/Identification/Russell/Kripke/Hintikka: Russell: focuses on perspectival identification,
II 117
Kripke: on public identification.
II 195
Identity/Individuals/Hintikka: it is much less clear how identity can fail for certain individuals in the transition to another possible world, i.e. how world lines can branch (fission). Separation/KripkeVsFission/SI/Hintikka: Kripke excludes fission because for him the (SI) holds. A fission, according to him, would violate the transitivity of identity. After a fission the resulting individuals would by no means be identical, even though they would have to be according to transitivity. Therefore, for Kripke the (SI) is inviolable.
HintikkaVsKripke: that is circular:
Transitivity of Identity/Hintikka: can mean two things:
a) transitivity within a possible world.
b) between possible worlds.
The plausibility of transitivity is part of the former, not the latter.
To require transitivity of identity between possible worlds simply means to exclude fission. This is what is circular about Kripke’s argument.
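A schematic rendering of the circularity charge (my illustration, not Hintikka's own notation): let a world line branch from a in w0 into b and c in w1.
% Cross-world identity along the branching world line: a = b and a = c.
% Requiring symmetry and transitivity across worlds would then force, in w1:
\[
  a = b \ \wedge \ a = c \;\Rightarrow\; b = c
\]
% which is precisely what fission denies; so demanding cross-world transitivity
% does not refute fission on independent grounds, it simply stipulates it away.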
II 196
Possible World/Individuals Domain/HintikkaVsKripke: it should not be required that the individuals remain the same when we pass from possible world to possible world. Talk about possible worlds is empty if there are no possible experiences that might distinguish them. ((s) Is that not also possible with a constant domain? Properties, too, could be partly (not completely) exchanged.) Possible World/Hintikka: should best be determined as the associated possible totalities of experience.
And then fission cannot be ruled out.
II 209
Re-Identification/Hintikka: with this problem, too, situation semantics and possible worlds semantics are in the same boat. Situation semantics rather obscures the problem: with overlapping situations, e.g., it assumes that the overlapping part remains the same.
Re-Identification/Quine/Hintikka: deems it hopeless, because it is impossible to explain how it works.
Re-Identification/Kripke/Hintikka: Kripke likewise, but he concludes that we should therefore simply postulate it, at least for physical objects.
HintikkaVsQuine/HintikkaVsKripke: that is either too pessimistic or too optimistic.
But to fail to recognize the problem would be to neglect one of the greatest philosophical problems.

Hintikka II
Jaakko Hintikka
Merrill B. Hintikka
The Logic of Epistemology and the Epistemology of Logic Dordrecht 1989