Disputed term/author/ism | Author | Entry | Reference
---|---|---|---
Analogies | Kauffman | Kau I 424 Analogy/Kauffman: the laws of chemistry could be reflected as a formal grammar; accordingly, it could then be undecidable whether a particular chemical can be synthesized from a given initial set. >Laws, >Laws of nature, >Grammar, >Decidability, >Undecidability, >Possibility. |
Kau I Stuart Kauffman At Home in the Universe: The Search for the Laws of Self-Organization and Complexity, New York 1995 German Edition: Der Öltropfen im Wasser. Chaos, Komplexität, Selbstorganisation in Natur und Gesellschaft, München 1998 |
Bivalence | Dummett | II 103 Principle of Bivalence/Truth/Dummett: The principle of bivalence already presupposes the concept of truth. And in the case of undecidable sentences that concept is transcendent: it goes beyond our ability to recognize what a manifestation of their truth would be. >Decidability. II 103f Undecidability/anti-realism/Dummett: (without bivalence) The meaning theory will then no longer be purely descriptive in relation to our actual practice. III (a) 17 Sense/Frege: Explanation of sense by truth conditions. Tractatus: ditto: "Under which circumstances...". >Truth conditions, >Circumstances. DummettVsFrege/DummettVsWittgenstein: For that, one must already know what the statement that P is true means. Vs: if it is then said that "P is true" means the same as asserting P - VsVs: then one must already know what sense it makes to assert P! But that is exactly what was to be explained. VsRedundancy theory: we must either supplement it (not merely explain meaning by assertion and vice versa) or abandon bivalence. >Redundancy theory. III (b) 74 Sense/Reference/Bivalence/Dummett: Bivalence: Problem: not every sentence has a sense such that we can in principle recognize it as true if it is true (e.g. >Unicorns, >Goldbach's conjecture). But Frege's argument does not depend on bivalence at all. III (b) 76 Bivalence does, however, hold for elementary clauses: if the semantic value here is the extension, it need not be decidable whether the predicate applies or not. Perhaps application cannot be effectively decided, but the (undefined) predicate can still be understood without allocating the semantic value (truth value); hence the distinction between sense and semantic value. >Semantic value. Cf. >Many-valued logic. |
Dummett I M. Dummett The Origins of the Analytical Philosophy, London 1988 German Edition: Ursprünge der analytischen Philosophie Frankfurt 1992 Dummett II Michael Dummett "What is a Theory of Meaning?" (ii) In Truth and Meaning, G. Evans/J. McDowell Oxford 1976 Dummett III M. Dummett Wahrheit Stuttgart 1982 Dummett III (a) Michael Dummett "Truth" in: Proceedings of the Aristotelian Society 59 (1959) pp. 141-162 In Wahrheit, Stuttgart 1982 Dummett III (b) Michael Dummett "Frege's Distinction between Sense and Reference", in: M. Dummett, Truth and Other Enigmas, London 1978, pp. 116-144 In Wahrheit, Stuttgart 1982 Dummett III (c) Michael Dummett "What is a Theory of Meaning?" in: S. Guttenplan (ed.) Mind and Language, Oxford 1975, pp. 97-138 In Wahrheit, Stuttgart 1982 Dummett III (d) Michael Dummett "Bringing About the Past" in: Philosophical Review 73 (1964) pp. 338-359 In Wahrheit, Stuttgart 1982 Dummett III (e) Michael Dummett "Can Analytical Philosophy be Systematic, and Ought it to be?" in: Hegel-Studien, Beiheft 17 (1977) pp. 305-326 In Wahrheit, Stuttgart 1982 |
Causes | Dummett | III (d) 156 Cause/Dummett: The concept of a cause is related to our concept of intention. There is a relationship between the causality of a thing and the possibility of using it to produce an effect; it lies in the basic explanation of our acceptance of causal laws. >Causal laws, >Intentions, >Intentionality. III (d) 157 Freedom of action: the idea of freedom of action is necessary for our causal convictions. Nevertheless, we could also have a concept of causality if we ourselves were not agents but only observers, such as intelligent trees. In this way we would also perceive the asymmetry. >Causality. II 459 Undecidability/Dummett: A sentence like "Every event has a cause" is undecidable. >Decidability/Dummett. |
Dummett I M. Dummett The Origins of the Analytical Philosophy, London 1988 German Edition: Ursprünge der analytischen Philosophie Frankfurt 1992 Dummett II Michael Dummett "What is a Theory of Meaning?" (ii) In Truth and Meaning, G. Evans/J. McDowell Oxford 1976 Dummett III M. Dummett Wahrheit Stuttgart 1982 Dummett III (a) Michael Dummett "Truth" in: Proceedings of the Aristotelian Society 59 (1959) pp. 141-162 In Wahrheit, Stuttgart 1982 Dummett III (b) Michael Dummett "Frege's Distinction between Sense and Reference", in: M. Dummett, Truth and Other Enigmas, London 1978, pp. 116-144 In Wahrheit, Stuttgart 1982 Dummett III (c) Michael Dummett "What is a Theory of Meaning?" in: S. Guttenplan (ed.) Mind and Language, Oxford 1975, pp. 97-138 In Wahrheit, Stuttgart 1982 Dummett III (d) Michael Dummett "Bringing About the Past" in: Philosophical Review 73 (1964) pp. 338-359 In Wahrheit, Stuttgart 1982 Dummett III (e) Michael Dummett "Can Analytical Philosophy be Systematic, and Ought it to be?" in: Hegel-Studien, Beiheft 17 (1977) pp. 305-326 In Wahrheit, Stuttgart 1982 |
Complexes/Complexity | Chaitin | Barrow I 78 Complexity/Decidability/Paradox/Chaitin/Barrow: Instruction: "Print a sequence whose complexity can be proved to be greater than the length of this program!" The computer cannot comply with this. Each sequence that it generates must be of lesser complexity than the length of the sequence itself (and also than that of its program). (Neumann: a machine can only build another machine that is one degree less complex than itself; Kursbuch 8, 139 ff.)(1) >J.v. Neumann. In the above case, the computer cannot decide whether the number R is random or not. Thus the Goedel theorem is proved. >Decisions, >Decidability, >Decision theory, >Decision-making process, >K. Gödel. In the late 1980s, even simpler proofs of the Goedel theorem were found, which transformed it into statements about information and randomness. Information content/Barrow: One can assign a certain amount of information to a system of axioms and rules by defining its information content as the size of the computer program that checks all possible chains of inference. I 78/79 If one attempts to extend the bounds of provability by new axioms, there are still larger numbers, or sequences of numbers, whose randomness remains unprovable. Chaitin: he proved this with the Diophantine equation x + y² = q. If we look for solutions in positive integers x and y, Chaitin asked,... I 80 ...whether such an equation typically has finitely or infinitely many integer solutions if we let q run through all possible values q = 1, 2, 3, 4, .... At first sight this hardly deviates from the original question whether the equation has an integer solution for q = 1, 2, 3, .... However, Chaitin's question is infinitely more difficult to answer. The answer is random in the sense that it requires more information than is given in the problem. There is no way to a solution. For each q, write 0 if the equation has only finitely many solutions, and 1 if there are infinitely many. 
The result is a series of ones and zeros representing a real number. Its value cannot be calculated by any computer. The individual digits are logically completely independent of each other. omega = 0010010101001011010 ... Then Chaitin transformed this number into a decimal fraction... I 81 ...omega = 0.0010010101001011010 ... and thus had the probability that a randomly chosen computer program will eventually stop after a finite number of steps. It is never equal to 0 or 1. Still another important consequence: if we choose any very large number for q, there is no way to decide whether the q-th binary digit of the number omega is a zero or a one. Human thinking has no access to an answer to this question. The inevitable undecidability of some statements thus follows from a computer program of low complexity, which is, however, based on arithmetic. >Decision problem, >Software, >Computer programming. 1. Kursbuch 8: Mathematik. H. M. Enzensberger (Hg.), Frankfurt/M. 1967. |
B I John D. Barrow Warum die Welt mathematisch ist Frankfurt/M. 1996 B II John D. Barrow The World Within the World, Oxford/New York 1988 German Edition: Die Natur der Natur: Wissen an den Grenzen von Raum und Zeit Heidelberg 1993 B III John D. Barrow Impossibility. The Limits of Science and the Science of Limits, Oxford/New York 1998 German Edition: Die Entdeckung des Unmöglichen. Forschung an den Grenzen des Wissens Heidelberg 2001 |
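The number omega in the Chaitin entry above is Chaitin's halting probability. Its standard definition (a well-known formula, not spelled out in the passage itself) for a universal prefix-free machine U is:

```latex
\Omega \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}
```

where |p| is the length of program p in bits. Because the programs form a prefix-free set, the sum converges to a value strictly between 0 and 1, which is why omega is never equal to 0 or 1 and can be read as the probability that a randomly chosen program halts.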
Complexes/Complexity | Norvig | Norvig I 712 Complexity/AI Research/Norvig/Russell: [one way of reducing complexity is] model selection with cross-validation on model size. An alternative approach is to search for a hypothesis that directly minimizes the weighted sum of Norvig I 713 empirical loss and the complexity of the hypothesis, which we will call the total cost: Cost(h) = EmpLoss(h) + λ · Complexity(h), ĥ* = argmin_{h ∈ H} Cost(h). Here λ is a parameter, a positive number that serves as a conversion rate between loss and hypothesis complexity (which after all are not measured on the same scale). This approach combines loss and complexity into one metric, allowing us to find the best hypothesis all at once. Regularization: This process of explicitly penalizing complex hypotheses is called regularization (because it looks for a function that is more regular, or less complex). Note that the cost function requires us to make two choices: the loss function and the complexity measure, which is called a regularization function. The choice of regularization function depends on the hypothesis space. Another way to simplify models is to reduce the dimensions that the models work with. A process of feature selection can be performed to discard attributes that appear to be irrelevant. χ² pruning is a kind of feature selection. MDL: The minimum description length or MDL hypothesis minimizes the total number of bits required. VsMDL: This works well in the limit, but for smaller problems there is a difficulty in that the choice of encoding for the program - for example, how best to encode a decision tree as a bit string - affects the outcome. >Learning theory/Norvig, >Learning/AI Research. 
Norvig I 759 History: Whereas the identification-in-the-limit approach concentrates on eventual convergence, the study of Kolmogorov complexity or algorithmic complexity, developed independently by Solomonoff (1964(1), 2009(2)) and Kolmogorov (1965)(3), attempts to provide a formal definition for the notion of simplicity used in Ockham’s razor. To escape the problem that simplicity depends on the way in which information is represented, it is proposed that simplicity be measured by the length of the shortest program for a universal Turing machine that correctly reproduces the observed data. Although there are many possible universal Turing machines, and hence many possible “shortest” programs, these programs differ in length by at most a constant that is independent of the amount of data. This beautiful insight, which essentially shows that any initial representation bias will eventually be overcome by the data itself, is marred only by the undecidability of computing the length of the shortest program. Approximate measures such as the minimum description length, or MDL (Rissanen, 1984(4), 2007(5)) can be used instead and have produced excellent results in practice. The text by Li and Vitanyi (1993)(6) is the best source for Kolmogorov complexity. Norvig I 762 The complexity of neural network learning has been investigated by researchers in computational learning theory. Early computational results were obtained by Judd (1990)(7), who showed that the general problem of finding a set of weights consistent with a set of examples is NP-complete, even under very restrictive assumptions. Some of the first sample complexity results were obtained by Baum and Haussler (1989)(8), who showed that the number of examples required for effective learning grows as roughly W logW, where W is the number of weights. 
Since then, a much more sophisticated theory has been developed (Anthony and Bartlett, 1999)(9), including the important result that the representational capacity of a network depends on the size of the weights as well as on their number, a result that should not be surprising in the light of our discussion of regularization. 1. Solomonoff, R. J. (1964). A formal theory of inductive inference. Information and Control, 7, 1-22, 224-254. 2. Solomonoff, R. J. (2009). Algorithmic probability: theory and applications. In Emmert-Streib, F. and Dehmer, M. (Eds.), Information Theory and Statistical Learning. Springer. 3. Kolmogorov, A. N. (1965). Three approaches to the quantitative definition of information. Problems in Information Transmission, 1(1), 1-7. 4. Rissanen, J. (1984). Universal coding, information, prediction, and estimation. IEEE Transactions on Information Theory, IT-30(4), 629-636. 5. Rissanen, J. (2007). Information and Complexity in Statistical Modeling. Springer. 6. Li, M. and Vitanyi, P. M. B. (1993). An Introduction to Kolmogorov Complexity and Its Applications. Springer-Verlag. 7. Judd, J. S. (1990). Neural Network Design and the Complexity of Learning. MIT Press. 8. Baum, E. and Haussler, D. (1989). What size net gives valid generalization? Neural Computation, 1(1), 151-160. 9. Anthony, M. and Bartlett, P. (1999). Neural Network Learning: Theoretical Foundations. Cambridge University Press. |
Norvig I Peter Norvig Stuart J. Russell Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010 |
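The total-cost formula in the entry above, Cost(h) = EmpLoss(h) + λ · Complexity(h), can be sketched in a few lines. The data, the three candidate hypotheses, and the complexity measure (number of free parameters) below are made-up illustrations, not from Russell and Norvig:

```python
# Minimal sketch of regularized model selection:
# pick h* = argmin_h Cost(h) = EmpLoss(h) + lam * Complexity(h).
# Data and hypotheses are invented for illustration.

data = [(1, 0.9), (2, 2.1), (3, 2.9), (4, 4.1)]  # (x, y) training pairs

hypotheses = {
    # name: (predictor, complexity = assumed number of free parameters)
    "constant":    (lambda x: 2.5, 1),
    "linear":      (lambda x: float(x), 2),
    "oscillating": (lambda x: x + 0.1 * (-1) ** x, 5),  # fits the noise exactly
}

def emp_loss(predict):
    """Mean squared error of a predictor on the training data."""
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

def total_cost(name, lam):
    """EmpLoss plus complexity penalty at conversion rate lam."""
    predict, complexity = hypotheses[name]
    return emp_loss(predict) + lam * complexity

def select(lam):
    """Return the hypothesis name minimizing the total cost."""
    return min(hypotheses, key=lambda name: total_cost(name, lam))

print(select(0.0))   # no penalty: the most complex, noise-fitting hypothesis wins
print(select(0.01))  # with a complexity penalty: the simpler "linear" wins
```

With λ = 0 the selection degenerates to empirical-loss minimization and picks the over-complex fit; any positive λ trades loss against complexity, which is exactly the role of the conversion rate described in the entry.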
Complexes/Complexity | Russell | Norvig I 712 Complexity/AI Research/Norvig/Russell: [one way of reducing complexity is] model selection with cross-validation on model size. An alternative approach is to search for a hypothesis that directly minimizes the weighted sum of Norvig I 713 empirical loss and the complexity of the hypothesis, which we will call the total cost: Cost(h) = EmpLoss(h) + λ · Complexity(h), ĥ* = argmin_{h ∈ H} Cost(h). Here λ is a parameter, a positive number that serves as a conversion rate between loss and hypothesis complexity (which after all are not measured on the same scale). This approach combines loss and complexity into one metric, allowing us to find the best hypothesis all at once. Regularization: This process of explicitly penalizing complex hypotheses is called regularization (because it looks for a function that is more regular, or less complex). Note that the cost function requires us to make two choices: the loss function and the complexity measure, which is called a regularization function. The choice of regularization function depends on the hypothesis space. Another way to simplify models is to reduce the dimensions that the models work with. A process of feature selection can be performed to discard attributes that appear to be irrelevant. χ² pruning is a kind of feature selection. MDL: The minimum description length or MDL hypothesis minimizes the total number of bits required. VsMDL: This works well in the limit, but for smaller problems there is a difficulty in that the choice of encoding for the program - for example, how best to encode a decision tree as a bit string - affects the outcome. >Learning theory/Norvig, >Learning/AI Research. 
Norvig I 759 History: Whereas the identification-in-the-limit approach concentrates on eventual convergence, the study of Kolmogorov complexity or algorithmic complexity, developed independently by Solomonoff (1964(1), 2009(2)) and Kolmogorov (1965)(3), attempts to provide a formal definition for the notion of simplicity used in Ockham’s razor. To escape the problem that simplicity depends on the way in which information is represented, it is proposed that simplicity be measured by the length of the shortest program for a universal Turing machine that correctly reproduces the observed data. Although there are many possible universal Turing machines, and hence many possible “shortest” programs, these programs differ in length by at most a constant that is independent of the amount of data. This beautiful insight, which essentially shows that any initial representation bias will eventually be overcome by the data itself, is marred only by the undecidability of computing the length of the shortest program. Approximate measures such as the minimum description length, or MDL (Rissanen, 1984(4), 2007(5)) can be used instead and have produced excellent results in practice. The text by Li and Vitanyi (1993)(6) is the best source for Kolmogorov complexity. Norvig I 762 The complexity of neural network learning has been investigated by researchers in computational learning theory. Early computational results were obtained by Judd (1990)(7), who showed that the general problem of finding a set of weights consistent with a set of examples is NP-complete, even under very restrictive assumptions. Some of the first sample complexity results were obtained by Baum and Haussler (1989)(8), who showed that the number of examples required for effective learning grows as roughly W logW, where W is the number of weights. 
Since then, a much more sophisticated theory has been developed (Anthony and Bartlett, 1999)(9), including the important result that the representational capacity of a network depends on the size of the weights as well as on their number, a result that should not be surprising in the light of our discussion of regularization. 1. Solomonoff, R. J. (1964). A formal theory of inductive inference. Information and Control, 7, 1-22, 224-254. 2. Solomonoff, R. J. (2009). Algorithmic probability: theory and applications. In Emmert-Streib, F. and Dehmer, M. (Eds.), Information Theory and Statistical Learning. Springer. 3. Kolmogorov, A. N. (1965). Three approaches to the quantitative definition of information. Problems in Information Transmission, 1(1), 1-7. 4. Rissanen, J. (1984). Universal coding, information, prediction, and estimation. IEEE Transactions on Information Theory, IT-30(4), 629-636. 5. Rissanen, J. (2007). Information and Complexity in Statistical Modeling. Springer. 6. Li, M. and Vitanyi, P. M. B. (1993). An Introduction to Kolmogorov Complexity and Its Applications. Springer-Verlag. 7. Judd, J. S. (1990). Neural Network Design and the Complexity of Learning. MIT Press. 8. Baum, E. and Haussler, D. (1989). What size net gives valid generalization? Neural Computation, 1(1), 151-160. 9. Anthony, M. and Bartlett, P. (1999). Neural Network Learning: Theoretical Foundations. Cambridge University Press. |
Russell I B. Russell/A.N. Whitehead Principia Mathematica Frankfurt 1986 Russell II B. Russell The ABC of Relativity, London 1958, 1969 German Edition: Das ABC der Relativitätstheorie Frankfurt 1989 Russell IV B. Russell The Problems of Philosophy, Oxford 1912 German Edition: Probleme der Philosophie Frankfurt 1967 Russell VI B. Russell "The Philosophy of Logical Atomism", in: B. Russell, Logic and KNowledge, ed. R. Ch. Marsh, London 1956, pp. 200-202 German Edition: Die Philosophie des logischen Atomismus In Eigennamen, U. Wolf (Hg) Frankfurt 1993 Russell VII B. Russell On the Nature of Truth and Falsehood, in: B. Russell, The Problems of Philosophy, Oxford 1912 - Dt. "Wahrheit und Falschheit" In Wahrheitstheorien, G. Skirbekk (Hg) Frankfurt 1996 Norvig I Peter Norvig Stuart J. Russell Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010 |
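The entry above notes that Kolmogorov complexity (the length of the shortest program) is uncomputable and that approximate measures such as MDL are used instead. A crude, commonly used practical proxy, shown here as an illustration rather than as Russell and Norvig's method, is compressed length: regular data should admit a much shorter description than patternless data.

```python
# Compressed length as a rough, computable stand-in for Kolmogorov complexity,
# in the spirit of MDL. Illustrative sketch only.
import hashlib
import zlib

def description_length(data: bytes) -> int:
    """Length of a zlib-compressed encoding of the data."""
    return len(zlib.compress(data, 9))

regular = b"01" * 500  # 1000 bytes with an obvious repeating pattern
# 1024 effectively patternless bytes, generated deterministically from hashes:
patternless = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))

# The regular string needs a far shorter description than the patternless one.
print(description_length(regular) < description_length(patternless))
```

This mirrors the point in the entry: the proxy escapes undecidability by giving up optimality, since the true shortest program may be much shorter than any compressor finds.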
Consistency | Tarski | Berka I 401 Consistency Proof/Gödel: a proof of consistency cannot be carried out if the metalanguage does not contain variables of higher type. >Metalanguage, >Levels, >Provability, cf. >Type theory. Undecidability: is eliminated when the examined theory (object language) is enriched with variables of higher type. (1) >Object language. Berka I 474f Consistency/Logical Form/Tarski: consistency is present when, for any statement x, either x ~ε FL(X) or ~x ~ε FL(X). ((s) Either x is not a consequence of the system or its negation is not a consequence.) But: Completeness, accordingly: if, for any statement x, either x ε FL(X) or ~x ε FL(X) ((s) if either any statement or its negation is a consequence of the system). I 529f Law of Contradiction/Tarski: "x ~ε contradiction or ~x ~ε contradiction". We cannot make any generalization from the class of these statement functions. The generalization of these statement functions would itself be a (general) statement, namely the law of contradiction. Problem: an infinite logical product that cannot be derived with the normal methods of inference. I 531 Solution: "rule of infinite induction" (differs from all other rules of inference by its infinitist character).(2) 1. A. Tarski, „Grundlegung der wissenschaftlichen Semantik", in: Actes du Congrès International de Philosophie Scientifique, Paris 1935, Vol. III, ASI 390, Paris 1936, pp. 1-8 2. A. Tarski, Der Wahrheitsbegriff in den formalisierten Sprachen, Commentarii Societatis philosophicae Polonorum, Vol. 1, Lemberg 1935 |
Tarski I A. Tarski Logic, Semantics, Metamathematics: Papers from 1923-38 Indianapolis 1983 Berka I Karel Berka Lothar Kreiser Logik Texte Berlin 1983 |
Decidability | Dummett | II 103 Undecidability/anti-realism/Dummett: (without bivalence) the meaning theory will then no longer be purely descriptive in relation to our actual practice. ((s) When there are no examples that can serve as a >manifestation.) |
Dummett I M. Dummett The Origins of the Analytical Philosophy, London 1988 German Edition: Ursprünge der analytischen Philosophie Frankfurt 1992 Dummett II Michael Dummett "What is a Theory of Meaning?" (ii) In Truth and Meaning, G. Evans/J. McDowell Oxford 1976 Dummett III M. Dummett Wahrheit Stuttgart 1982 Dummett III (a) Michael Dummett "Truth" in: Proceedings of the Aristotelian Society 59 (1959) pp. 141-162 In Wahrheit, Stuttgart 1982 Dummett III (b) Michael Dummett "Frege's Distinction between Sense and Reference", in: M. Dummett, Truth and Other Enigmas, London 1978, pp. 116-144 In Wahrheit, Stuttgart 1982 Dummett III (c) Michael Dummett "What is a Theory of Meaning?" in: S. Guttenplan (ed.) Mind and Language, Oxford 1975, pp. 97-138 In Wahrheit, Stuttgart 1982 Dummett III (d) Michael Dummett "Bringing About the Past" in: Philosophical Review 73 (1964) pp. 338-359 In Wahrheit, Stuttgart 1982 Dummett III (e) Michael Dummett "Can Analytical Philosophy be Systematic, and Ought it to be?" in: Hegel-Studien, Beiheft 17 (1977) pp. 305-326 In Wahrheit, Stuttgart 1982 |
Decidability | Hilbert | Berka I 331 Undecidability/Predicate calculus 1st order/Goedel (1931)(1): Goedel shows with "arithmetization" ("Goedelization") that the predicate calculus of the 1st order is undecidable. >Undecidability, >Gödel numbers. This was a shocking fact for the Hilbert program. Tarski (1939)(2): Tarski proved the undecidability of "Principia Mathematica" and related systems. He showed that this undecidability is essential, i.e. that it cannot be removed. Rosser(3): Rosser generalized Goedel's proof by replacing the condition of ω-consistency by that of simple consistency. >Consistency. 1. K. Goedel: Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I., Mh. Math. Phys. 38, pp. 175-198. 2. A. Tarski: On undecidable statements in enlarged systems of logic and the concept of truth, JSL 4, pp. 105-112. 3. J. B. Rosser: Extensions of some theorems of Goedel and Church, JSL 1, pp. 87-91. |
Berka I Karel Berka Lothar Kreiser Logik Texte Berlin 1983 |
Decidability | Logic Texts | Hoyningen-Huene II 227 Decidability/undecidability/decision problem: Propositional logic is decidable and complete. Predicate logic is undecidable: there is no mechanical method by which, for an arbitrary predicate-logical formula, it can be decided whether it is universally valid or not. >Validity, >Proof. |
Logic Texts Me I Albert Menne Folgerichtig Denken Darmstadt 1988 HH II Hoyningen-Huene Formale Logik, Stuttgart 1998 Re III Stephen Read Philosophie der Logik Hamburg 1997 Sal IV Wesley C. Salmon Logic, Englewood Cliffs, New Jersey 1973 - German: Logik Stuttgart 1983 Sai V R.M.Sainsbury Paradoxes, Cambridge/New York/Melbourne 1995 - German: Paradoxien Stuttgart 2001 |
Decidability | Quine | II 112 Decidability/Proof-Theoretical Analogy/Quine: the concept of a mechanical procedure is recursiveness (needed e.g. to prove or even formulate Goedel's theorem or Church's theorem of undecidability). But to prove the decidability of a theory we do not need a definition of mechanical procedure; we simply present a method that everyone would call mechanical. >Proofs, >Provability. II 191ff Undecidable logics: the general theory of a single symmetrical two-place predicate is undecidable. II 198 Also undecidable: the general theory of two-place formulas that have no quantifiers except (Ex)(y)(Ez). |
Quine I W.V.O. Quine Word and Object, Cambridge/MA 1960 German Edition: Wort und Gegenstand Stuttgart 1980 Quine II W.V.O. Quine Theories and Things, Cambridge/MA 1986 German Edition: Theorien und Dinge Frankfurt 1985 Quine III W.V.O. Quine Methods of Logic, 4th edition Cambridge/MA 1982 German Edition: Grundzüge der Logik Frankfurt 1978 Quine V W.V.O. Quine The Roots of Reference, La Salle/Illinois 1974 German Edition: Die Wurzeln der Referenz Frankfurt 1989 Quine VI W.V.O. Quine Pursuit of Truth, Cambridge/MA 1992 German Edition: Unterwegs zur Wahrheit Paderborn 1995 Quine VII W.V.O. Quine From a logical point of view Cambridge, Mass. 1953 Quine VII (a) W. V. A. Quine On what there is In From a Logical Point of View, Cambridge, MA 1953 Quine VII (b) W. V. A. Quine Two dogmas of empiricism In From a Logical Point of View, Cambridge, MA 1953 Quine VII (c) W. V. A. Quine The problem of meaning in linguistics In From a Logical Point of View, Cambridge, MA 1953 Quine VII (d) W. V. A. Quine Identity, ostension and hypostasis In From a Logical Point of View, Cambridge, MA 1953 Quine VII (e) W. V. A. Quine New foundations for mathematical logic In From a Logical Point of View, Cambridge, MA 1953 Quine VII (f) W. V. A. Quine Logic and the reification of universals In From a Logical Point of View, Cambridge, MA 1953 Quine VII (g) W. V. A. Quine Notes on the theory of reference In From a Logical Point of View, Cambridge, MA 1953 Quine VII (h) W. V. A. Quine Reference and modality In From a Logical Point of View, Cambridge, MA 1953 Quine VII (i) W. V. A. Quine Meaning and existential inference In From a Logical Point of View, Cambridge, MA 1953 Quine VIII W.V.O. Quine Designation and Existence, in: The Journal of Philosophy 36 (1939) German Edition: Bezeichnung und Referenz In Zur Philosophie der idealen Sprache, J. Sinnreich (Hg) München 1982 Quine IX W.V.O. 
Quine Set Theory and its Logic, Cambridge/MA 1963 German Edition: Mengenlehre und ihre Logik Wiesbaden 1967 Quine X W.V.O. Quine The Philosophy of Logic, Cambridge/MA 1970, 1986 German Edition: Philosophie der Logik Bamberg 2005 Quine XII W.V.O. Quine Ontological Relativity and Other Essays, New York 1969 German Edition: Ontologische Relativität Frankfurt 2003 Quine XIII Willard Van Orman Quine Quiddities Cambridge/London 1987 |
Decidability | Tarski | Berka I 543ff Undecidability/Gödel/Tarski: an undecidable statement is decidable in an enriched metascience. Cf. >Metalanguage, >Expressivity, >Semantic closure. Definability/Tarski: for every deductive science which includes arithmetic, we can specify arithmetical terms that are not definable in it. Cf. >Ideology/Quine, >Ontology/Quine. I 545 But with methods used here in analogy, one can show that these terms become definable on the basis of the considered science when it is enriched by variables of higher order.(1) 1. A. Tarski, Der Wahrheitsbegriff in den formalisierten Sprachen, Commentarii Societatis philosophicae Polonorum, Vol. 1, Lemberg 1935 |
Tarski I A. Tarski Logic, Semantics, Metamathematics: Papers from 1923-38 Indianapolis 1983 Berka I Karel Berka Lothar Kreiser Logik Texte Berlin 1983 |
Disjunction | Logic Texts | Read III 79 Disjunction/tautology/Read: In a sense, "A or B" follows from A alone - but then it is not equivalent to "if ~A, then B". >Logical constants. Undecidability: Re III 262 Not constructive: e.g. the proof that there are two irrational numbers a and b such that a to the power of b is rational (the disjunction of alternatives is constructively unacceptable here). We have no construction by which we can determine whether √2 to the power of √2 is rational or not. The law of excluded middle is therefore, intuitionistically, not a substantial assertion. >Undecidability, >Intuitionism. Goldbach's conjecture: every even number greater than two should be the sum of two prime numbers. Not decided. But we must not claim that it is either true or not. Law of Excluded Middle/Constructivism/Read: Constructivists often present so-called "weak counterexamples" against the excluded middle. If a is a real number, "a = 0" is not decidable. Consequently, the constructivist cannot claim that all real numbers are either identical with zero or not. (But this is more a question of representation.) >Excluded middle, >Goldbach's conjecture. |
Logic Texts Me I Albert Menne Folgerichtig Denken Darmstadt 1988 HH II Hoyningen-Huene Formale Logik, Stuttgart 1998 Re III Stephen Read Philosophie der Logik Hamburg 1997 Sal IV Wesley C. Salmon Logic, Englewood Cliffs, New Jersey 1973 - German: Logik Stuttgart 1983 Sai V R.M.Sainsbury Paradoxes, Cambridge/New York/Melbourne 1995 - German: Paradoxien Stuttgart 2001 Re III St. Read Thinking About Logic: An Introduction to the Philosophy of Logic. 1995 Oxford University Press German Edition: Philosophie der Logik Hamburg 1997 |
Errors | Peirce | Hacking I 105 Error/Peirce/undecidability/Peirce: an undecidable sentence cannot contain an error. >Undecidability. >Realism, >Anti-Realism. |
Peir I Ch. S. Peirce Philosophical Writings 2011 Hacking I I. Hacking Representing and Intervening. Introductory Topics in the Philosophy of Natural Science, Cambridge/New York/Oakleigh 1983 German Edition: Einführung in die Philosophie der Naturwissenschaften Stuttgart 1996 |
Falsification | Popper | I 122 Falsification/Popper: can always be overridden ad hoc. >Ad hoc hypotheses, >Quine-Duhem Thesis. --- I 123 Empirical scientific method: consists precisely in the exclusion of such procedures. "Humean contradiction": only experience is admissible, but experience is not conclusive. Solution/Popper: not all sentences are fully decidable. There must be particular empirical sentences serving as major premises of falsifying inferences. >Undecidability. --- I 127 These cannot be protocol sentences, because those are only psychological. >Protocol sentences. --- Stegmüller I 400ff Falsification/Popper: falsification itself must be repeatable. We can reformulate universal statements as "there-is-not" sentences in order to falsify them, e.g. "there are no non-white swans". >Induction/Popper. Schurz I 15 Falsification/Asymmetry/Popper: The asymmetry holds for strict (exceptionless) universal propositions: they cannot be verified by any finite set of observations, but can be falsified by a single counterexample. LakatosVsPopper: Theories are never rejected on the basis of a single counterexample, but adapted. >Asymmetry. |
Po I Karl Popper The Logic of Scientific Discovery, engl. trnsl. 1959 German Edition: Grundprobleme der Erkenntnislogik. Zum Problem der Methodenlehre In Wahrheitstheorien, Gunnar Skirbekk Frankfurt/M. 1977 Carnap V W. Stegmüller Rudolf Carnap und der Wiener Kreis In Hauptströmungen der Gegenwartsphilosophie Bd I, München 1987 St I W. Stegmüller Hauptströmungen der Gegenwartsphilosophie Bd I Stuttgart 1989 St II W. Stegmüller Hauptströmungen der Gegenwartsphilosophie Bd 2 Stuttgart 1987 St III W. Stegmüller Hauptströmungen der Gegenwartsphilosophie Bd 3 Stuttgart 1987 St IV W. Stegmüller Hauptströmungen der Gegenwartsphilosophie Bd 4 Stuttgart 1989 Schu I G. Schurz Einführung in die Wissenschaftstheorie Darmstadt 2006 |
Incompleteness | Gödel | Thiel I 227 ff Incompleteness Theorem/Goedel/Thiel: ... this metamathematical statement corresponds in F to a one-digit statement form G(x), which must then occur somewhere in the enumeration. If G(x) takes the h-th place, it is therefore identical with the propositional form called Ah(x) there. Goedel's result will be that in F neither the proposition G(h), arising from G(x) by the insertion of h, nor its negation ~G(h) is derivable: G(h) is "undecidable in F". (Write S for "derivable" and $ for "non-derivable".) Suppose G(h) were derivable in F; since only true statements are derivable, G(h) would also be true. Thus, since G(x) was introduced as an image of $Ax(x) in F, $Ah(h) would hold. But that would mean, since Ah(x) is identical with G(x), that $G(h): G(h) would therefore be non-derivable in F - a contradiction. >Derivation, >Derivability. This derivation at first only proves the validity of the if-then statement S G(h) > $ G(h). Inserting this into the general scheme (A > ~A) > ~A yields (S G(h) > $ G(h)) > $ G(h). On the other hand, if we assume that the negation ~G(h) is derivable, then ~G(h) would also be true. This would be equivalent to the validity of ~$ Ah(h), and thus to S Ah(h). Thiel I 228 This in turn agrees with S G(h), so that both the assertion and its negation would be derivable, and we would have a formal contradiction. If F is consistent at all, our second assumption S ~G(h) does not hold either. G(h) is thus an undecidable assertion. Cf. >Decidability, >Undecidability. Thiel I 228 This proof sketch establishes a program. Important roles in the execution of this program are played by "Goedelization" and the so-called "negation-faithful representability" of certain relations in F. Def Goedelization: Goedelization is first of all only a one-to-one assignment of basic numbers to character sequences. We want to put the expressions of F into bracket-free form. >Goedel numbers. 
For this we write the logical connective signs not between, but in front of the expressions. We write the logical operators as "indices" to the order functor G. Terminology: order functor G. Quantifiers: we treat quantifiers as two-digit functors whose first argument is the index, the second the quantified propositional form. >Quantifiers, >Quantification. Thiel I 229 Then the statement (x)(y)(z)((x=y)>(zx = zy)) gets the form (x)(y)(z)G> G= x y G= G· z x G· z y (reading the connective signs as indices to G). We can represent the members of the infinite variable sequences in each case by a standard letter signaling the sort with prefixed points: thus for instance x,y,z,... by x,°x,°°x,... As counting characters we take, instead of |,||,|||,..., zeros with a corresponding number of preceding dashes: 0,'0,''0,... >Sequences. With this convention, each character in F is either a 0 or one of the one-digit functors G1 (the first order functor!), ', ~. Two-digit is G2, three-digit G3, etc. Thiel I 229 E.g. Goedelization/Goedel numbers: successive prime numbers are assigned to the positions of a character string (the original table of assignments is omitted here). >Primes. Thiel I 230 In this way, each character string of F can be uniquely assigned a Goedel number, and it can be told how to compute it. Since every basic number has a unique representation as a product of prime numbers, it can be said of any given number whether it is a Goedel number of a character string of F at all. Metamathematical and arithmetical relations correspond to each other. Example: Thiel I 230 We replace the x by 0 in ~G=x'x and obtain ~G=0'0. The Goedel number of the first row is 2^23 x 3^13 x 5^37 x 7^29 x 11^37; the Goedel number of the second row of characters is 2^23 x 3^13 x 5^31 x 7^29 x 11^31. The transition from the Goedel number of the first row to that of the second is made by division by 5^6 x 11^6, and this relation (of product and factor) is the arithmetical relation between their Goedel numbers corresponding to the metamathematical relation between the character rows. 
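The prime-power coding just described can be sketched as follows. The symbol codes 23, 13, 37, 29, 31 are read off from the factorizations in the worked example; treating "G=" as a single functor sign is an assumption about the tokenization:

```python
# Goedel number of a token sequence: the i-th prime raised to the code
# of the i-th token. Codes follow the worked example above.

CODE = {"~": 23, "G=": 13, "x": 37, "'": 29, "0": 31}

def primes(n):
    """First n primes by trial division."""
    ps = []
    k = 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def goedel_number(tokens):
    g = 1
    for p, t in zip(primes(len(tokens)), tokens):
        g *= p ** CODE[t]
    return g

g1 = goedel_number(["~", "G=", "x", "'", "x"])   # ~G=x'x
g2 = goedel_number(["~", "G=", "0", "'", "0"])   # ~G=0'0
# replacing x by 0 corresponds arithmetically to division by 5^6 * 11^6
assert g1 == g2 * 5**6 * 11**6
```

By unique prime factorization, the assignment can be inverted, which is why one can decide of any number whether it is a Gödel number at all.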
Thiel I 231 These relations are even effective, since one can effectively (Goedel says "recursively") compute the Goedel number of each member of the relation from those of its remaining members. >Recursion. The most important case is of course the relation Bxy between the Goedel number x of a proof figure Gz1...zk and the Goedel number y of its final formula... Thiel I 233 "Negation-faithful representability": Goedel shows that for every recursive k-digit relation R there exists a k-digit propositional form A in F such that A is derivable if R holds, and ~A is derivable if R does not hold. We say that the propositional form A represents the relation R in F negation-faithfully. Thiel I 234 After all this, it follows that if F is ω-contradiction-free, then neither G nor ~G is derivable in F. G is an "undecidable statement in F". The occurrence of undecidable statements in this sense is not the same as the undecidability of F in the sense that there is no, as it were, mechanical decision procedure for F. >Decidability. Thiel I 236 It is true that there is no such decision procedure for F, but this is not the same as the incompleteness shown here; this can be seen from the fact that Goedel had proved classical quantifier logic complete in 1930, although there is no decision procedure there either. Def Incomplete/Thiel: a theory would only be incomplete if a true proposition about objects of the theory could be stated which demonstrably could not be derived from the axiom system underlying the theory. ((s) Then the system would not be maximally consistent.) Whether this was achieved in the case of arithmetic by the construction of Goedel's statement G was for a long time answered in the negative, on the grounds that G was not a "true" arithmetical statement. This was settled about 20 years ago, when combinatorial propositions were found that are likewise not derivable in the full formalism. Goedel/Thiel: thus the incompleteness can no longer be doubted. 
This is not a proof of the limits of human cognition, but only a proof of an intrinsic limit of the axiomatic method. Thiel I 238 ff One of the points of the proof of Goedel's "Underivability Theorem" was that the effectiveness of the metamathematical derivability relation (corresponding to the self-evident effectiveness of all proofs in the full formalism F) has its exact counterpart in the recursivity of the arithmetical relations between the Goedel numbers of the proof figures and final formulas, and that this parallelism can be secured for all effectively decidable metamathematical relations and their arithmetical counterparts generally. >Derivation, >Derivability. |
Göd II Kurt Gödel Collected Works: Volume II: Publications 1938-1974 Oxford 1990 T I Chr. Thiel Philosophie und Mathematik Darmstadt 1995 |
Method | Tarski | Berka I 401 Consistency proof/Gödel: cannot be carried out if the metalanguage does not contain variables of higher type. >Metalanguage, >Expressivity, cf. >Type theory. Undecidability: Undecidability is eliminated when one enriches the examined theory (object language) with variables of higher type.(1) >Decidability. 1. A. Tarski, „Grundlegung der wissenschaftlichen Semantik“, in: Actes du Congrès International de Philosophie Scientifique, Paris 1935, Vol. III, ASI 390, Paris 1936, pp. 1-8 --- I 462 Metalanguage/Tarski: is our real object of examination. ((s) Because of the application conditions of the truth concept.) I 464 Metalanguage/Tarski: 2nd category of expressions: specific terms of structural-descriptive character. >Structural-descriptive name. Names of specific signs and expressions of the class calculus, names of classes, names of sequences of such expressions, and names of structural relations between them. To any expression of the language under consideration (the object language) one can assign, on the one hand, an individual name of this expression and, on the other hand, an expression that is the translation of this expression into the metalanguage. That is decisive for the construction of the truth definition. >Truth definition/Tarski. I 464 Name/translation/metalanguage/object language/Tarski: the difference: an expression of the object language can, in the metalanguage, a) be given a name or b) be given a translation. Berka I 525 Morphology/Tarski: our metalanguage here includes the entire object language - that is, for us, only logical expressions of the general class theory, i.e. only structural-descriptive terms. >Homophony. So we have the morphology of the language; that is, even the concept of inference is reduced to it. I 526 Thus we have grounded the logic of the science under study as a part of its morphology.(2) >Description levels, >Semantic closure. 2. 
A. Tarski, Der Wahrheitsbegriff in den formalisierten Sprachen, Commentarii Societatis philosophicae Polonorum, Vol. 1, Lemberg 1935 |
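Tarski's two ingredients, a name of an object-language expression and its metalanguage translation, can be sketched in a toy fashion (the sentence, the dictionary, and the predicate name `is_true` are illustrative, not Tarski's formalism):

```python
# Toy version: the object-language sentence is *mentioned* via a name
# (here, a Python string) and *used* via a translation (a metalanguage
# fact, here an assumed Python boolean). Convention T links the two:
# '"snow is white" is true iff snow is white'.

object_sentence = "snow is white"      # name: mentions the expression
snow_is_white = True                   # translation: assumed metalanguage fact

def is_true(name):
    """Truth predicate defined via translations, one clause per sentence."""
    translations = {"snow is white": snow_is_white}
    return translations[name]

# T-biconditional, toy form
assert is_true(object_sentence) == snow_is_white
```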
Tarski I A. Tarski Logic, Semantics, Metamathematics: Papers from 1923-38 Indianapolis 1983 Berka I Karel Berka Lothar Kreiser Logik Texte Berlin 1983 |
Modal Logic | Kripke | Berka I 161 Modal logic/undecidability: Kripke (1962)(1) proved the undecidability of the bivalent modal predicate calculus with monadic predicates (functions of one argument). >Decidability, >Completeness. 1. S.A. Kripke, "The Undecidability of Monadic Modal Quantification Theory", in: Zeitschrift für mathematische Logik und Grundlagen der Mathematik, Vol. 8, pp. 113-116, 1962. |
Kripke I S.A. Kripke Naming and Necessity, Dordrecht/Boston 1972 German Edition: Name und Notwendigkeit Frankfurt 1981 Kripke II Saul A. Kripke "Speaker’s Reference and Semantic Reference", in: Midwest Studies in Philosophy 2 (1977) 255-276 In Eigennamen, Ursula Wolf Frankfurt/M. 1993 Kripke III Saul A. Kripke Is there a problem with substitutional quantification? In Truth and Meaning, G. Evans/J McDowell Oxford 1976 Kripke IV S. A. Kripke Outline of a Theory of Truth (1975) In Recent Essays on Truth and the Liar Paradox, R. L. Martin (Hg) Oxford/NY 1984 Berka I Karel Berka Lothar Kreiser Logik Texte Berlin 1983 |
Possible Worlds | Kripke | I 51f The expressions "the winner" and "the loser" do not refer to the same objects in all possible worlds. >Rigidity. I 51 Proper names are rigid designators: Nixon is Nixon in all possible worlds, but he is not the winner of the election in all possible worlds (descriptions are non-rigid designators). >Names/Kripke. I 54 Possible worlds are not distant countries. A possible world is given by the descriptive conditions we associate with it. Cf. >Telescope theory of possible worlds. I 55 Possible world/Lewis: possible worlds contain counterparts, not the same people. Kripke: then it is not about identification but about a similarity relation. >Counterparts, >Counterpart theory, >Counterpart relation, >Possible world/Lewis, >Identity across worlds. I 90/91 We do not demand, of course, that the objects must exist in all possible worlds. Possible world/counterparts: strict identity: holds for molecules. Counterparts: are, for example, tables (not identity of qualities, but of individual objects). Counterpart/Lewis: representatives of the theory that a possible world is only given to us qualitatively ("counterpart theory", David Lewis) argue that Aristotle and his counterparts "in other possible worlds" are "to be identified" with those things that most resemble Aristotle in his most important characteristics. I 123 ff Remember, though, that we describe the situation in our language, not in the language that would be used by people in that situation. Hesperus = Phosphorus is necessarily true (but a situation is possible in which Venus does not exist). >Morning star/evening star, >Nonexistence. I 143 Epistemic possibility: this is a different concept of possibility than the one in logic. The designation is done by us. >Naming/Kripke. --- Berka I 161 Def normal world/Kripke: a normal world is a maximal consistent set of sentences in which at least one statement is necessary. Def non-normal world/Kripke: in non-normal worlds each sentence of the type LB is false. 
Berka I 179 Definition possible world/Kripke: old (1959)(1): a possible world is a complete assignment of truth values, i.e. no two possible worlds assign the same truth value to every atomic formula (absolute concept of the possible world). New definition (1963)(2): a world is possible in relation to another world (relatively possible world). Hughes/Cresswell: >accessibility relation. Reflexive accessibility: each possible world is accessible from itself, i.e. each statement that is true in H is also possible in H. Definition necessary: a formula A is necessary in H if it is true in every (possible) world accessible from H. Definition possible: dually: A is possible in H1 iff there exists a world H2 that is possible in relation to H1 and in which A is true. Transitivity: H2RH3: any formula that is true in H3 is possible in H2. Problem: for traceability to H1 we need a reduction axiom: "what is possibly possible is possible" - one can also take an equivalence relation as the accessibility relation. --- Hughes/Cresswell I 243 Non-normal world/possible world/Kripke: non-normal worlds are worlds in which every statement is possible without exception, including those of the form p ∧ ~p. Valuation: as in normal worlds, V(p ∧ ~p) is never 1; but for modal formulas V(Ma) is always 1 in non-normal worlds, and hence V(La) is always 0, i.e. there are no necessary statements in non-normal worlds. Such a non-normal world is accessible from at least one normal world, but no world is accessible from a non-normal world - not even from itself. --- Frank I 114 Identity/Kripke: if an identity statement is true, it is always necessarily true, e.g. heat/motion of molecules, Cicero/Tullius, water/H2O - this is compatible with the fact that they are truths a posteriori. But according to Leibniz's law, it is not conceivable that one occurs without the other. 
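The relational definitions above (necessary at H iff true at every world accessible from H; possible iff true at some accessible world) can be sketched as follows; the world names, the relation R, and the valuation are illustrative:

```python
def necessary(holds, w, R):
    """L A at w: A holds in every world accessible from w."""
    return all(holds(v) for (u, v) in R if u == w)

def possible(holds, w, R):
    """M A at w: A holds in some world accessible from w."""
    return any(holds(v) for (u, v) in R if u == w)

# reflexive accessibility, as in the text: each world sees itself
R = {("H1", "H1"), ("H2", "H2"), ("H1", "H2")}
p = lambda w: w in {"H1", "H2"}        # atomic formula p, true at H1 and H2

assert necessary(p, "H1", R)           # p holds at H1 and H2, both accessible from H1
assert possible(p, "H2", R)
# Note: Kripke's non-normal worlds are not captured by accessibility alone;
# there V(La) is stipulated to be 0 for every formula.
```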
Frank I 125 Identity/body/Kripke: "A" is the (rigid) name for the body of Descartes - the body survived him, i.e.: M(Descartes ≠ A). This is not a modal fallacy, because A is rigid. Analogously: a statue is not identical with its collection of molecules. 1) S.A. Kripke (1959): "A completeness theorem in modal logic", in: The Journal of Symbolic Logic 24 (1), pp. 1-14. 2) S.A. Kripke (1962): "The Undecidability of Monadic Modal Quantification Theory", in: Zeitschrift für mathematische Logik und Grundlagen der Mathematik, Vol. 8, pp. 113-116. |
Kripke I S.A. Kripke Naming and Necessity, Dordrecht/Boston 1972 German Edition: Name und Notwendigkeit Frankfurt 1981 Kripke II Saul A. Kripke "Speaker’s Reference and Semantic Reference", in: Midwest Studies in Philosophy 2 (1977) 255-276 In Eigennamen, Ursula Wolf Frankfurt/M. 1993 Kripke III Saul A. Kripke Is there a problem with substitutional quantification? In Truth and Meaning, G. Evans/J McDowell Oxford 1976 Kripke IV S. A. Kripke Outline of a Theory of Truth (1975) In Recent Essays on Truth and the Liar Paradox, R. L. Martin (Hg) Oxford/NY 1984 Berka I Karel Berka Lothar Kreiser Logik Texte Berlin 1983 Cr I M. J. Cresswell Semantical Essays (Possible worlds and their rivals) Dordrecht Boston 1988 Cr II M. J. Cresswell Structured Meanings Cambridge Mass. 1984 Fra I M. Frank (Hrsg.) Analytische Theorien des Selbstbewusstseins Frankfurt 1994 |
Prior Knowledge | Norvig | Norvig I 777 Prior knowledge/AI Research/Norvig/Russell: To understand the role of prior knowledge, we need to talk about the logical relationships among hypotheses, example descriptions, and classifications. Let Descriptions denote the conjunction of all the example descriptions in the training set, and let Classifications denote the conjunction of all the example classifications. Then a Hypothesis that “explains the observations” must satisfy the following property (recall that |= means “logically entails”): Hypothesis ∧ Descriptions |= Classifications. Entailment constraint: We call this kind of relationship an entailment constraint, in which Hypothesis is the “unknown.” Pure inductive learning means solving this constraint, where Hypothesis is drawn from some predefined hypothesis space. >Hypotheses/AI Research. Software agents/knowledge/learning/Norvig: The modern approach is to design agents that already know something and are trying to learn some more. An autonomous learning agent that uses background knowledge must somehow obtain the background knowledge in the first place (…). This method must itself be a learning process. The agent’s life history will therefore be characterized by cumulative, or incremental, development. Norvig I 778 Learning with background knowledge: allows much faster learning than one might expect from a pure induction program. Explanation based learning/EBL: the entailment constraints satisfied by EBL are the following: Hypothesis ∧ Descriptions |= Classifications; Background |= Hypothesis. Norvig I 779 (…) it was initially thought to be a way to learn from examples. But because it requires that the background knowledge be sufficient to explain the hypothesis, which in turn explains the observations, the agent does not actually learn anything factually new from the example. 
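In the propositional toy case, an entailment constraint like Hypothesis ∧ Descriptions |= Classifications can be checked by enumerating truth assignments (the rain/wet example and the helper name `entails` are illustrative, not Norvig and Russell's):

```python
from itertools import product

def entails(premise, conclusion, atoms):
    """premise |= conclusion: no assignment makes premise true and conclusion false."""
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if premise(env) and not conclusion(env):
            return False
    return True

hypothesis = lambda e: (not e["rain"]) or e["wet"]   # rain -> wet
description = lambda e: e["rain"]                    # the example: it rains
classification = lambda e: e["wet"]                  # observed class: wet

# Hypothesis ∧ Descriptions |= Classifications
assert entails(lambda e: hypothesis(e) and description(e),
               classification, ["rain", "wet"])
```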
The agent could have derived the example from what it already knew, although that might have required an unreasonable amount of computation. EBL is now viewed as a method for converting first-principles theories into useful, special purpose knowledge. Relevance/observations/RBL: the prior knowledge background concerns the relevance of a set of features to the goal predicate. This knowledge, together with the observations, allows the agent to infer a new, general rule that explains the observations: Hypothesis ∧ Descriptions |= Classifications , Background ∧ Descriptions ∧ Classifications |= Hypothesis. We call this kind of generalization relevance-based learning, or RBL. (…) whereas RBL does make use of the content of the observations, it does not produce hypotheses that go beyond the logical content of the background knowledge and the observations. It is a deductive form of learning and cannot by itself account for the creation of new knowledge starting from scratch. Entailment constraint: Background ∧ Hypothesis ∧ Descriptions |= Classifications. That is, the background knowledge and the new hypothesis combine to explain the examples. Knowledge-based inductive learning/KBIL algorithms: Algorithms that satisfy [the entailment] constraint are called knowledge-based inductive learning, or KBIL, algorithms. KBIL algorithms, (…) have been studied mainly in the field of inductive logic programming, or ILP. Norvig I 780 Explanation-based learning: The basic idea of memo functions is to accumulate a database of input–output pairs; when the function is called, it first checks the database to see whether it can avoid solving the problem from scratch. Explanation-based learning takes this a good deal further, by creating general rules that cover an entire class of cases. 
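The memo-function idea just described (a growing database of input-output pairs consulted before solving from scratch) can be sketched as:

```python
def memoize(f):
    table = {}                      # the accumulated input-output database
    def wrapper(*args):
        if args not in table:       # solve from scratch only on a miss
            table[args] = f(*args)
        return table[args]
    return wrapper

calls = []                          # record of genuine computations

@memoize
def slow_square(n):
    calls.append(n)
    return n * n

slow_square(4); slow_square(4); slow_square(4)
assert calls == [4]                 # computed once, answered from the table after
```

EBL goes beyond this by storing not single input-output pairs but general rules covering a whole class of cases.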
Norvig I 781 General rules: The basic idea behind EBL is first to construct an explanation of the observation using prior knowledge, and then to establish a definition of the class of cases for which the same explanation structure can be used. This definition provides the basis for a rule covering all of the cases in the class. Explanation: The “explanation” can be a logical proof, but more generally it can be any reasoning or problem-solving process whose steps are well defined. The key is to be able to identify the necessary conditions for those same steps to apply to another case. Norvig I 782 EBL: 1. Given an example, construct a proof that the goal predicate applies to the example using the available background knowledge. Norvig I 783 2. In parallel, construct a generalized proof tree for the variabilized goal using the same inference steps as in the original proof. 3. Construct a new rule whose left-hand side consists of the leaves of the proof tree and whose right-hand side is the variabilized goal (after applying the necessary bindings from the generalized proof). 4. Drop any conditions from the left-hand side that are true regardless of the values of the variables in the goal. Norvig I 794 Inverse resolution: Inverse resolution is based on the observation that if the example Classifications follow from Background ∧ Hypothesis ∧ Descriptions, then one must be able to prove this fact by resolution (because resolution is complete). If we can “run the proof backward,” then we can find a Hypothesis such that the proof goes through. Norvig I 795 Inverse entailment: The idea is to change the entailment constraint Background ∧ Hypothesis ∧ Descriptions |= Classifications to the logically equivalent form Background ∧ Descriptions ∧ ¬Classifications |= ¬Hypothesis. Norvig I 796 An inverse resolution procedure that inverts a complete resolution strategy is, in principle, a complete algorithm for learning first-order theories. 
That is, if some unknown Hypothesis generates a set of examples, then an inverse resolution procedure can generate Hypothesis from the examples. This observation suggests an interesting possibility: Suppose that the available examples include a variety of trajectories of falling bodies. Would an inverse resolution program be theoretically capable of inferring the law of gravity? The answer is clearly yes, because the law of gravity allows one to explain the examples, given suitable background mathematics. Norvig I 798 Literature: The current-best-hypothesis approach is an old idea in philosophy (Mill, 1843)(1). Early work in cognitive psychology also suggested that it is a natural form of concept learning in humans (Bruner et al., 1957)(2). In AI, the approach is most closely associated with the work of Patrick Winston, whose Ph.D. thesis (Winston, 1970)(3) addressed the problem of learning descriptions of complex objects. Version space: The version space method (Mitchell, 1977(4), 1982(5)) takes a different approach, maintaining the set of all consistent hypotheses and eliminating those found to be inconsistent with new examples. The approach was used in the Meta-DENDRAL Norvig I 799 expert system for chemistry (Buchanan and Mitchell, 1978)(6), and later in Mitchell’s (1983)(7) LEX system, which learns to solve calculus problems. A third influential thread was formed by the work of Michalski and colleagues on the AQ series of algorithms, which learned sets of logical rules (Michalski, 1969(8); Michalski et al., 1986(9)). EBL: EBL had its roots in the techniques used by the STRIPS planner (Fikes et al., 1972)(10). When a plan was constructed, a generalized version of it was saved in a plan library and used in later planning as a macro-operator. Similar ideas appeared in Anderson’s ACT* architecture, under the heading of knowledge compilation (Anderson, 1983)(11), and in the SOAR architecture, as chunking (Laird et al., 1986)(12). 
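The EBL rule-extraction procedure quoted earlier (a generalized proof, then a rule from its leaves) can be sketched on a toy ground proof; the kinship example, the tuple encoding, and the consistent-variabilization helper are illustrative, not a full EBL implementation:

```python
# A ground proof tree: (conclusion, subproofs). Variabilizing replaces the
# example's constants by variables, consistently across head and leaves.

def variabilize(atom, mapping):
    pred, *args = atom
    return (pred,) + tuple(mapping.setdefault(a, "X%d" % len(mapping))
                           for a in args)

def leaves(proof):
    conclusion, subproofs = proof
    if not subproofs:
        return [conclusion]
    return [leaf for sub in subproofs for leaf in leaves(sub)]

proof = (("grandparent", "abe", "bart"),
         [(("parent", "abe", "homer"), []),
          (("parent", "homer", "bart"), [])])

mapping = {}
head = variabilize(proof[0], mapping)
body = [variabilize(l, mapping) for l in leaves(proof)]

# learned rule: parent(X0,X2) ∧ parent(X2,X1) => grandparent(X0,X1)
assert head == ("grandparent", "X0", "X1")
assert body == [("parent", "X0", "X2"), ("parent", "X2", "X1")]
```

Step 4 of the quoted procedure (dropping conditions that hold regardless of the goal variables) is omitted here for brevity.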
Schema acquisition (DeJong, 1981)(13), analytical generalization (Mitchell, 1982)(5), and constraint-based generalization (Minton, 1984)(14) were immediate precursors of the rapid growth of interest in EBL stimulated by the papers of Mitchell et al. (1986)(15) and DeJong and Mooney (1986)(16). Hirsh (1987)(17) introduced the EBL algorithm described in the text, showing how it could be incorporated directly into a logic programming system. Van Harmelen and Bundy (1988)(18) explain EBL as a variant of the partial evaluation method used in program analysis systems (Jones et al., 1993)(19). VsEBL: Initial enthusiasm for EBL was tempered by Minton’s finding (1988)(20) that, without extensive extra work, EBL could easily slow down a program significantly. Formal probabilistic analysis of the expected payoff of EBL can be found in Greiner (1989)(21) and Subramanian and Feldman (1990)(22). An excellent survey of early work on EBL appears in Dietterich (1990)(23). Relevance: Relevance information in the form of functional dependencies was first developed in the database community, where it is used to structure large sets of attributes into manageable subsets. Functional dependencies were used for analogical reasoning by Carbonell and Collins (1973)(24) and rediscovered and given a full logical analysis by Davies and Russell (Davies, 1985(25); Davies and Russell, 1987(26)). Prior knowledge: Their role as prior knowledge in inductive learning was explored by Russell and Grosof (1987)(27). The equivalence of determinations to a restricted-vocabulary hypothesis space was proved in Russell (1988)(28). Learning: Learning algorithms for determinations and the improved performance obtained by RBDTL were first shown in the FOCUS algorithm, due to Almuallim and Dietterich (1991)(29). Tadepalli (1993)(30) describes a very ingenious algorithm for learning with determinations that shows large improvements in learning speed. 
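A determination (functional dependency) of the kind mentioned above can be checked on tabular data as follows; the nationality/language data, column names, and the helper `determines` are illustrative:

```python
def determines(rows, X, Y):
    """X determines Y: no two rows agree on all X-attributes yet differ on Y."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in X)
        val = tuple(row[a] for a in Y)
        if seen.setdefault(key, val) != val:
            return False
    return True

people = [
    {"name": "Fernando", "nationality": "Brazil", "language": "Portuguese"},
    {"name": "Ana",      "nationality": "Brazil", "language": "Portuguese"},
    {"name": "Kenji",    "nationality": "Japan",  "language": "Japanese"},
]

assert determines(people, ["nationality"], ["language"])   # holds in this data
assert not determines(people, ["language"], ["name"])      # same language, different names
```

Such a dependency, once known, licenses the relevance-based inference above: observing one person's language fixes it for all compatriots in the data.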
Inverse deduction: The idea that inductive learning can be performed by inverse deduction can be traced to W. S. Jevons (1874)(31) (…). Computational investigations began with the remarkable Ph.D. thesis by Norvig I 800 Gordon Plotkin (1971)(32) at Edinburgh. Although Plotkin developed many of the theorems and methods that are in current use in ILP, he was discouraged by some undecidability results for certain subproblems in induction. MIS (Shapiro, 1981)(33) reintroduced the problem of learning logic programs, but was seen mainly as a contribution to the theory of automated debugging. Induction/rules: Work on rule induction, such as the ID3 (Quinlan, 1986)(34) and CN2 (Clark and Niblett, 1989)(35) systems, led to FOIL (Quinlan, 1990)(36), which for the first time allowed practical induction of relational rules. Relational Learning: The field of relational learning was reinvigorated by Muggleton and Buntine (1988)(37), whose CIGOL program incorporated a slightly incomplete version of inverse resolution and was capable of generating new predicates. The inverse resolution method also appears in (Russell, 1986)(38), with a simple algorithm given in a footnote. The next major system was GOLEM (Muggleton and Feng, 1990)(39), which uses a covering algorithm based on Plotkin’s concept of relative least general generalization. ITOU (Rouveirol and Puget, 1989)(40) and CLINT (De Raedt, 1992)(41) were other systems of that era. Natural language: More recently, PROGOL (Muggleton, 1995)(42) has taken a hybrid (top-down and bottom-up) approach to inverse entailment and has been applied to a number of practical problems, particularly in biology and natural language processing. Uncertainty: Muggleton (2000)(43) describes an extension of PROGOL to handle uncertainty in the form of stochastic logic programs. 
Inductive logic programming/ILP: A formal analysis of ILP methods appears in Muggleton (1991)(44), a large collection of papers in Muggleton (1992)(45), and a collection of techniques and applications in the book by Lavrauc and Duzeroski (1994)(46). Page and Srinivasan (2002)(47) give a more recent overview of the field’s history and challenges for the future. Early complexity results by Haussler (1989) suggested that learning first-order sentences was intractable. However, with better understanding of the importance of syntactic restrictions on clauses, positive results have been obtained even for clauses with recursion (Duzeroski et al., 1992)(48). Learnability results for ILP are surveyed by Kietz and Duzeroski (1994)(49) and Cohen and Page (1995)(50). Discovery systems/VsILP: Although ILP now seems to be the dominant approach to constructive induction, it has not been the only approach taken. So-called discovery systems aim to model the process of scientific discovery of new concepts, usually by a direct search in the space of concept definitions. Doug Lenat’s Automated Mathematician, or AM (Davis and Lenat, 1982)(51), used discovery heuristics expressed as expert system rules to guide its search for concepts and conjectures in elementary number theory. Unlike most systems designed for mathematical reasoning, AM lacked a concept of proof and could only make conjectures. It rediscovered Goldbach’s conjecture and the Unique Prime Factorization theorem. AM’s architecture was generalized in the EURISKO system (Lenat, 1983)(52) by adding a mechanism capable of rewriting the system’s own discovery heuristics. EURISKO was applied in a number of areas other than mathematical discovery, although with less success than AM. The methodology of AM and EURISKO has been controversial (Ritchie and Hanna, 1984; Lenat and Brown, 1984). 1. Mill, J. S. (1843). 
A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence, and Methods of Scientific Investigation. J.W. Parker, London. 2. Bruner, J. S., Goodnow, J. J., and Austin, G. A. (1957). A Study of Thinking. Wiley. 3. Winston, P. H. (1970). Learning structural descriptions from examples. Technical report MAC-TR-76, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology. 4. Mitchell, T.M. (1977). Version spaces: A candidate elimination approach to rule learning. In IJCAI-77, pp. 305–310. 5. Mitchell, T. M. (1982). Generalization as search. AIJ, 18(2), 203–226. 6. Buchanan, B. G.,Mitchell, T.M., Smith, R. G., and Johnson, C. R. (1978). Models of learning systems. In Encyclopedia of Computer Science and Technology, Vol. 11. Dekker. 7. Mitchell, T. M., Utgoff, P. E., and Banerji, R. (1983). Learning by experimentation: Acquiring and refining problem-solving heuristics. In Michalski, R. S., Carbonell, J. G., and Mitchell, T. M. (Eds.), Machine Learning: An Artificial Intelligence Approach, pp. 163–190. Morgan Kaufmann. 8. Michalski, R. S. (1969). On the quasi-minimal solution of the general covering problem. In Proc. First International Symposium on Information Processing, pp. 125–128. 9. Michalski, R. S.,Mozetic, I., Hong, J., and Lavrauc, N. (1986). The multi-purpose incremental learning system AQ15 and its testing application to three medical domains. In AAAI-86, pp. 1041–1045. 10. Fikes, R. E., Hart, P. E., and Nilsson, N. J. (1972). Learning and executing generalized robot plans. AIJ, 3(4), 251–288. 11. Anderson, J. R. (1983). The Architecture of Cognition. Harvard University Press. 12. Laird, J., Rosenbloom, P. S., and Newell, A. (1986). Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1, 11–46. 13. DeJong, G. (1981). Generalizations based on explanations. In IJCAI-81, pp. 67–69. 14. Minton, S. (1984). 
Constraint-based generalization: Learning game-playing plans from single examples. In AAAI-84, pp. 251–254. 15. Mitchell, T. M., Keller, R., and Kedar-Cabelli, S. (1986). Explanation-based generalization: A unifying view. Machine Learning, 1, 47–80. 16. DeJong, G. and Mooney, R. (1986). Explanation-based learning: An alternative view. Machine Learning, 1, 145–176. 17. Hirsh, H. (1987). Explanation-based generalization in a logic programming environment. In IJCAI-87. 18. van Harmelen, F. and Bundy, A. (1988). Explanation-based generalisation = partial evaluation. AIJ, 36(3), 401–412. 19. Jones, N. D., Gomard, C. K., and Sestoft, P. (1993). Partial Evaluation and Automatic Program Generation. Prentice-Hall. 20. Minton, S. (1988). Quantitative results concerning the utility of explanation-based learning. In AAAI-88, pp. 564–569. 21. Greiner, R. (1989). Towards a formal analysis of EBL. In ICML-89, pp. 450–453. 22. Subramanian, D. and Feldman, R. (1990). The utility of EBL in recursive domain theories. In AAAI-90, Vol. 2, pp. 942–949. 23. Dietterich, T. (1990). Machine learning. Annual Review of Computer Science, 4, 255–306. 24. Carbonell, J. R. and Collins, A. M. (1973). Natural semantics in artificial intelligence. In IJCAI-73, pp. 344–351. 25. Davies, T. R. (1985). Analogy. Informal note INCSLI- 85-4, Center for the Study of Language and Information (CSLI). 26. Davies, T. R. and Russell, S. J. (1987). A logical approach to reasoning by analogy. In IJCAI-87, Vol. 1, pp. 264–270. 27. Russell, S. J. and Grosof, B. (1987). A declarative approach to bias in concept learning. In AAAI-87. 28. Russell, S. J. (1988). Tree-structured bias. In AAAI-88, Vol. 2, pp. 641–645. 29. Almuallim, H. and Dietterich, T. (1991). Learning with many irrelevant features. In AAAI-91, Vol. 2, pp. 547–552. 30. Tadepalli, P. (1993). Learning from queries and examples with tree-structured bias. In ICML-93, pp. 322–329. 31. Jevons, W. S. (1874). The Principles of Science. 
Routledge/Thoemmes Press, London. 32. Plotkin, G. (1971). Automatic Methods of Inductive Inference. Ph.D. thesis, Edinburgh University. 33. Shapiro, E. (1981). An algorithm that infers theories from facts. In IJCAI-81, p. 1064. 34. Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1, 81–106. 35. Clark, P. and Niblett, T. (1989). The CN2 induction algorithm. Machine Learning, 3, 261–283. 36. Quinlan, J. R. (1990). Learning logical definitions from relations. Machine Learning, 5(3), 239–266. 37. Muggleton, S. H. and Buntine, W. (1988). Machine invention of first-order predicates by inverting resolution. In ICML-88, pp. 339–352. 38. Russell, S. J. (1986). A quantitative analysis of analogy by similarity. In AAAI-86, pp. 284–288. 39. Muggleton, S. H. and Feng, C. (1990). Efficient induction of logic programs. In Proc. Workshop on Algorithmic Learning Theory, pp. 368–381. 40. Rouveirol, C. and Puget, J.-F. (1989). A simple and general solution for inverting resolution. In Proc. European Working Session on Learning, pp. 201–210. 41. De Raedt, L. (1992). Interactive Theory Revision: An Inductive Logic Programming Approach. Academic Press. 42. Muggleton, S. H. (1995). Inverse entailment and Progol. New Generation Computing, 13(3–4), 245–286. 43. Muggleton, S. H. (2000). Learning stochastic logic programs. Proc. AAAI 2000 Workshop on Learning Statistical Models from Relational Data. 44. Muggleton, S. H. (1991). Inductive logic programming. New Generation Computing, 8, 295–318. 45. Muggleton, S. H. (1992). Inductive Logic Programming. Academic Press. 46. Lavrač, N. and Džeroski, S. (1994). Inductive Logic Programming: Techniques and Applications. Ellis Horwood. 47. Page, C. D. and Srinivasan, A. (2002). ILP: A short look back and a longer look forward. Submitted to Journal of Machine Learning Research. 48. Džeroski, S., Muggleton, S. H., and Russell, S. J. (1992). PAC-learnability of determinate logic programs. In COLT-92, pp. 128–135. 49. Kietz, J.-U. 
and Džeroski, S. (1994). Inductive logic programming and learnability. SIGART Bulletin, 5(1), 22–32. 50. Cohen, W. W. and Page, C. D. (1995). Learnability in inductive logic programming: Methods and results. New Generation Computing, 13(3–4), 369–409. 51. Davis, R. and Lenat, D. B. (1982). Knowledge-Based Systems in Artificial Intelligence. McGraw-Hill. 52. Lenat, D. B. (1983). EURISKO: A program that learns new heuristics and domain concepts: The nature of heuristics, III: Program design and results. AIJ, 21(1–2), 61–98. 53. Ritchie, G. D. and Hanna, F. K. (1984). AM: A case study in AI methodology. AIJ, 23(3), 249–268. 54. Lenat, D. B. and Brown, J. S. (1984). Why AM and EURISKO appear to work. AIJ, 23(3), 269–294. |
Norvig I Peter Norvig Stuart J. Russell Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010 |
Disputed term/author/ism | Author Vs Author |
Entry |
Reference |
---|---|---|---|
Anti-Objectivism | Field Vs Anti-Objectivism | II 318 Undecidability/VsAnti-Objectivism/AO/Field: other examples are less favorable for anti-objectivism: e.g. Gödel. Even very simple sentences may be undecidable. E.g. (*) for all natural numbers x, B(x), where B(x) is a decidable predicate, i.e. a predicate such that for each numeral n we can either prove B(n) or ~B(n) (by an uncontroversial proof). Problem: one might now say that every undecidable sentence must fail to be objectively correct (see above: being objectively correct requires following from the axioms). Then a proof of ~B(n) would be a proof of the negation of (*), contrary to its undecidability. So, because of the assumption about B(x), B(n) must be provable for each numeral n, and is thus presumably objectively correct. This seems to show, however, that the generalization (*) is also objectively correct. (This is not undisputed, because it requires as a final step that it is objectively the case that there are no natural numbers other than those for which there are numerals. ((s) > "not enough names").) FieldVs extreme Anti-Objectivism: if this is believed, however, the extreme anti-objectivist must retreat to a more moderate position. Elementary Number Theory/ENT/Undecidability/Field: in fact, almost everyone believes that the choice between an undecidable sentence and its negation is objective, also for the generalized ENT. That would be hard to give up, because many assertions about provability and consistency are themselves undecidable number-theoretic assertions, so that the anti-objectivist would have to say that they lack objectivity. Few want that. Nevertheless, it is not obvious that if objectivity is granted to the ENT, it must also be conceded to the higher regions. 
VsAnti-Objectivism/Gödel/Field: It may be objected that the Gödel sentences of the candidates for our most comprehensive mathematical theory should not only have a determinate truth value, but that they are true! The argument goes by induction: all logical and non-logical premises of M are true; the rules of inference preserve truth; therefore all theorems must be true. So the theory must be consistent, therefore the Gödel sentence must be unprovable and therefore true. Gödel sentence: is true only if unprovable; if provable, it is not true. Problem: this induction can of course not be formalized in M. But one often feels that it is somehow "informally valid". If that is true, only the truth of the Gödel sentence is proved, not its determinate truth. Solution: we might be able to fill the gap by establishing a principle that whatever we can prove informally must certainly be true. (Vs: that is plausible, but not undisputed!) In any case, the arguments for the determinate truth of the Gödel sentence are weaker than those for its simple truth. |
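Field's schema (*) can be made concrete with a small program. A minimal sketch in Python: the choice of Goldbach's conjecture as the instance predicate B is an assumption of this sketch, not Field's own example. Each instance B(n) is settled by a terminating computation, yet no finite run of the loop settles the universal sentence (*).

```python
# Illustrates Field's schema (*): "for all natural numbers x, B(x)" where
# B(x) is decidable -- each instance B(n) can be settled by computation,
# while the universal generalization (*) is not thereby decided.
# Goldbach's conjecture serves as the stand-in for B (an assumption of
# this sketch).

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def B(n: int) -> bool:
    """Decidable instance predicate: if n is even and >= 4, then n is a
    sum of two primes (vacuously true otherwise)."""
    if n < 4 or n % 2:
        return True
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# Each instance is mechanically checkable ...
assert all(B(n) for n in range(100))
# ... but no finite run of this check proves the universal sentence (*).
```

Verifying instances one by one is exactly the asymmetry the entry exploits: a counterexample would refute (*), but no amount of instance-checking proves it.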
Field I H. Field Realism, Mathematics and Modality Oxford New York 1989 Field II H. Field Truth and the Absence of Fact Oxford New York 2001 Field III H. Field Science without numbers Princeton New Jersey 1980 Field IV Hartry Field "Realism and Relativism", The Journal of Philosophy, 79 (1982), pp. 553–567 In Theories of Truth, Paul Horwich Aldershot 1994 |
Deflationism | Wright Vs Deflationism | I 26 Truth: is there a concept of truth that is free of metaphysical obligations and yet assertoric? Deflation/Deflationism/Deflationary Approach: Ramsey was the first here. (Recently: Horwich: "Minimalism"): truth is assertoric (asserting, but not underwritten by the assumption of metaphysical objects or facts). Tarski's disquotation is sufficient. Truth is not a substantial property of sentences. True sentences like "snow is white" and "grass is green" have nothing in common! Important: one can use the disquotation scheme without understanding the content! One can "approach" the predicate "true" (Goldbach's conjecture). Deflationism Thesis: the content of the predicate of truth is the same as the claim its assertoric use makes. WrightVsDeflationism: instead "minimal truth-aptness", "minimal truth", here "minimalism": core: the existence of recognized standards. I 35 Warranted Assertibility/Assertibility/Negation: Example: "It is not the case that P" is T if and only if it is not the case that "P" is T. This is not valid from right to left for warranted assertibility! Namely, when the state of information is neutral (undecidable). (But it is valid for truth.) (Neutrality, >undecidability.) It is then correct to claim that it is not the case that P is assertible, but incorrect to claim that the negation of P is warrantedly assertible. Therefore, we must distinguish between "T" and "assertible". ("Assertible": from now on for "warrantedly assertible".) (VsDeflationism, which recognizes only one norm.) I 47 VsDeflationism: not a theory, but a "potpourri". There is no unambiguous thesis at all. I 48 InflationismVsDeflationism: (uncertain) DS' "P" is true(E!P)("P" says that P & P) (! = that which exists enough for P) I 53 Minimalism/Wright: recognizes, in contrast to deflationism, that truth is a real property. The possession of this property is normatively distinct from warranted assertibility. (VsDeflationism). 
I 97 WrightVsDeflationism Thesis: the classical deflationary view of truth is in itself unstable. On it, no norm governing the truth predicate distinguishes it from warranted assertibility. This consequence, however, is incompatible with the central role ascribed to the disquotation scheme, and thus also to the negation equivalence. The normative force of "true" and "warrantedly assertible" coincides, but the two can potentially diverge in extension. |
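Wright's asymmetry between truth and assertibility under a neutral state of information can be sketched with a three-valued valuation. A toy model in Python; the strong-Kleene treatment of negation and the identification of "assertible" with "justified" are assumptions of this sketch, not Wright's own apparatus:

```python
# Toy model of Wright's point: under a neutral information state,
# assertibility fails the right-to-left direction of the negation
# equivalence, while bivalent truth does not. The three-valued
# (strong Kleene) treatment is an assumption of this sketch.

T, N, F = "T", "N", "F"   # justified / neutral (undecided) / refuted

def neg(v):
    """Strong Kleene negation: swaps T and F, leaves N fixed."""
    return {T: F, N: N, F: T}[v]

def assertible(v):
    """A statement is (warrantedly) assertible only if justified."""
    return v == T

P = N  # the state of information is neutral on P

# It is correct to say that P is NOT assertible ...
assert not assertible(P)
# ... but incorrect to say that not-P IS assertible:
assert not assertible(neg(P))
# For bivalent truth the equivalence holds: exactly one of P, not-P
# would be true, so "not: P is true" would entail "not-P is true".
```

The two failed assertibility claims together are exactly the situation the entry describes: "T" and "assertible" obey different norms once neutral states are allowed.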
WrightCr I Crispin Wright Truth and Objectivity, Cambridge 1992 German Edition: Wahrheit und Objektivität Frankfurt 2001 WrightCr II Crispin Wright "Language-Mastery and Sorites Paradox" In Truth and Meaning, G. Evans/J. McDowell Oxford 1976 WrightGH I Georg Henrik von Wright Explanation and Understanding, New York 1971 German Edition: Erklären und Verstehen Hamburg 2008 |
Field, H. | Various Authors Vs Field, H. | Field I 51 Infinity/Physics/Essay 4: even without the "part of" relation we do not really need the finiteness operator for physics. VsField: many have accused me of needing all kinds of extensions of 1st-level logic. But this is not the case. I 52 I rather assume that the nominalization program has not yet been advanced far enough to be able to say what the best logical basis is. Ultimately, we are going to choose only a few natural means that go beyond 1st-level logic, preferably those that the Platonist would also need. But we can only find this out by trial and error. I 73 Indispensability Argument/Logic/VsField: if mathematical entities (mE) may be dispensable in science, they are not in logic! And we need logic in science. Logical Consequence Relation/Consequence/Field: is normally defined in terms of model theory. (Models are mE; semantically, a sentence is true or not true in a model.) Even if one formulates it proof-theoretically ("there is a derivation", syntactically, or: provable in a system), one needs mE or abstract objects: arbitrary sequences of symbol tokens and arbitrary sequences of these. I 77 VsField: some have objected that only if we accept a Tarskian theory of truth do we need mE in mathematics. FieldVsVs: this led to the misunderstanding that without Tarskian truth mathematics would have no epistemic problems. Mathematics/Field: itself implies mE (only, we do not always need mathematics), even without the help of the concept of truth, e.g. that there are prime numbers > 1000. I 138 Logic of the Part-of Relation/Field: has no complete proof procedure. VsField: how can consequence relations be useful then? Field: sure, the means by which we can know that something follows from something else are codifiable in a proof procedure, and that seems to imply that no appeal to anything stronger than a proof can be of practical use. FieldVsVs: but one need not have epistemic access to more than a countable part of it. 
I 182 Field Theory/FT/Relationalism/Substantivalism/Some AuthorsVsField: justify the relevance of field theories for the dispute between S/R just the other way round: for them, FTs make it easy to justify a relationalist view (Putnam 1981, Malament 1982): they postulate a single huge field (because of the infinity of physical forces) and a corresponding part of it for each region. Variant: the field does not exist in all places! But the field is not zero at all points. FieldVsPutnam: I do not think one can do without regions. Field II 351 Indeterminacy/Undecidability/Set Theory/Number Theory/Field: Thesis: not only in set theory but also in number theory many undecidable sentences do not have a determinate truth value. Many VsField: 1. truth and reference are basically disquotational. Disquotational View/Field: is sometimes seen as eliminating indeterminacy for our present language. FieldVsVs: that is not the case: Chapter 10 showed that. VsField: even if there is indeterminacy in our current language also for disquotationalism, the arguments for it are less convincing from this perspective. For example, the question of the cardinality of the continuum ((s)) is undecidable for us, but the answer could (from an objectivist point of view (FieldVs)) have a determinate truth value. Indeterminacy/Set Theory/Number Theory/Field: Recently some well-known philosophers have produced arguments, which have nothing to do with disquotationalism, for the impossibility of any kind of indeterminacy in set theory and number theory. Two variants: 1. Assume that set theory and number theory are formulated in full 2nd-level logic (i.e. 2nd-level logic understood model-theoretically, with the requirement that any legitimate interpretation be "full" in the sense that the 2nd-level quantifiers range over all subsets of the 1st-level quantifier range). 2. 
Let us assume that number theory and set theory are formulated in a variant of full 2nd-level logic, which we could call "full schematic 1st-level logic". II 354 Full Schematic 1st-Level Logic/LavineVsField: Lavine denies that it is a subtheory of (non-schematic!) 2nd-level logic. Field: we had better now forget 2nd-level logic in favour of full schematic theories. We stay with number theory to avoid complications. We assume that the determinacy of number theory is not in question, except for the use of full schemata. |
Field IV Hartry Field "Realism and Relativism", The Journal of Philosophy, 79 (1982), pp. 553–567 In Theories of Truth, Paul Horwich Aldershot 1994 |
Intuitionism | Wessel Vs Intuitionism | I 239 WesselVsIntuitionism: the restriction of negation to a specific domain destroys logic as an independent science. But this can be resolved in a universal system of rules (see below). I 269 WesselVsIntuitionism: main defect: that the universal character of logic is denied: different logics for finite and infinite domains. The representatives of microphysics (quantum mechanics) also advocate different domain logics. I 270 Wessel: this has to do with a wrong understanding of the object of logic: Logic/Wessel: a special science that investigates the properties of the rules of language. Science: (erroneously) understands by the object of logic some extra-linguistic object (e.g. quantum, elementary particle, etc.). WesselVs: dilemma: this object under consideration is not directly given to observation; it must be constructed linguistically. But for this one needs logic: circular. Negation/Intuitionism/Wessel: the intuitionists reject the negation of the classical calculus, but they should apply (our) non-traditional predication theory, which already takes the problem of undecidability into account. For example, the question whether a certain sequence of digits occurs at some point in the decimal expansion of π: here there are three possibilities: 1. it can occur (A), 2. it cannot occur (B), 3. it is impossible to determine (C). Suppose someone claims A; then two different negations are possible: 1. the assertion of B, 2. the declaration that A is not right. Negation/WesselVsIntuitionism: intuitionism confuses two different types of negation: the propositional (outer) negation and the negation in the predicate-ascription operator (--). I 271 Intuitionists/Logic/Wessel: the intuitionist accepts, like most classical logicians, the bisubjunction ~(s< P) ↔ (s --). But this is not a logical law. The differences between classical and intuitionist logic lie mainly in negations that stand immediately before the statement variables. 
We now compare some formulas, using character combinations that are, taken by themselves, meaningless: -i p, ?p etc. -i p: shall be ~(s <--) u ~(--P). I 272 In classical logic the De Morgan laws apply; IntuitionismVsDe Morgan: Vs the 3rd and 4th laws: 3. ~(p u q) > ~p v ~q, 4. ~(~p v ~q) > p u q. Intuitionism/Wessel: is a hidden epistemic logic: "It is provable that p is provable or that ~p is provable". WesselVs: but one must first have basic logical systems that are not dependent on empirical matters! Epistemic predicates ("provable") must not be confused with logical operators! The classic paradoxes occur for the most part also in intuitionistic logic. I 273 There are proofs which show that there must be such a number, but which do not provide the number itself! One need not be a follower of intuitionism to prefer proofs that provide the number constructively. I 274 MT5. There is a group of formulae provable in the IPC (intuitionist propositional calculus) for which the following applies: some of their P-R are provable in PT and others are not, e.g. p > ~p > ~p, p > ~q > (q > _p). I 275 MT6. There is a group of formulas provable in the IPC for which the following applies: none of their P-R are provable in PT. E.g. ~(p v q) > ~p u ~q, ~~(p u ~p). WesselVsIntuitionism: MT5 and MT6 show that the intuitionists are inconsistent: if they identify s--P and _(s<--P), they would have to discard much more of classical logic. |
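Which De Morgan directions survive intuitionistically can be checked mechanically in a small Heyting algebra. A sketch in Python over the open sets of a three-point topological space; the particular space is an assumption of this sketch, and the numbering of the laws follows the entry (with "u" for conjunction, "v" for disjunction):

```python
# Brute-force check, over a small Heyting algebra, of the De Morgan
# directions discussed in the entry. The algebra is the open-set lattice
# of X = {1,2,3} with opens {}, {1}, {3}, {1,3}, X.
from itertools import product

X = frozenset({1, 2, 3})
OPENS = [frozenset(s) for s in [(), (1,), (3,), (1, 3), (1, 2, 3)]]

def meet(a, b):  # conjunction: intersection of opens
    return a & b

def join(a, b):  # disjunction: union of opens
    return a | b

def imp(a, b):
    """Heyting implication: the largest open c with a & c <= b."""
    return frozenset().union(*[c for c in OPENS if a & c <= b])

def neg(a):  # intuitionistic negation: a -> bottom
    return imp(a, frozenset())

def valid(f):
    """A two-variable formula is valid iff it always takes the top value X."""
    return all(f(p, q) == X for p, q in product(OPENS, OPENS))

# The intuitionistically acceptable De Morgan directions hold ...
assert valid(lambda p, q: imp(neg(join(p, q)), meet(neg(p), neg(q))))
assert valid(lambda p, q: imp(join(neg(p), neg(q)), neg(meet(p, q))))

# ... while the entry's laws 3 and 4 fail:
law3 = lambda p, q: imp(neg(meet(p, q)), join(neg(p), neg(q)))  # ~(p u q) > ~p v ~q
law4 = lambda p, q: imp(neg(join(neg(p), neg(q))), meet(p, q))  # ~(~p v ~q) > p u q
assert not valid(law3)  # countermodel: p = {1}, q = {3}
assert not valid(law4)  # countermodel: p = q = {1, 3}
```

The countermodel for law 3 is instructive: with p = {1} and q = {3}, ~(p u q) is the whole space, while ~p v ~q is only {1, 3}, so the implication falls short of the top element.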
Wessel I H. Wessel Logik Berlin 1999 |
Putnam, H. | Wright Vs Putnam, H. | I 58 "Putnam's Equivalence"/(Wright): P is true if and only if P could be justified under ideal epistemic circumstances. Convergence Demand/Putnam: no statement that is justified under epistemically ideal circumstances can be asserted simultaneously with its negation. Wright: this is of course to be distinguished from a requirement of completeness: not all questions can be decided (quantum mechanics). Wright: it seems here that even ideal epistemic circumstances cannot be neutral in relation to negation. ((s) Example: if the location of the electron cannot be fixed, that is not a negative statement about this or any other location.) I 59 Negation/Minimalism: requires the usual negation equivalence: "It is not the case that P" is true if and only if it is not the case that "P" is true. This does not work for quantum mechanics. WrightVsPutnam: the examples from quantum mechanics and mathematics (undecidability) are fatal for Putnam's approach (e.g. the generalized continuum hypothesis). It is certainly not a priori true even of empirical statements that each of them would be decidable under ideal circumstances, I 60 (thus confirmable or refutable). A priori/Minimalism/Wright: the minimal platitudes presumably hold a priori. WrightVsPutnam: so if Putnam's informal explanation is correct a priori (and it must be so, to be correct at all), then it would have to hold a priori that the negation of any statement that cannot be justified under ideal circumstances (electron) would itself be so justified. Wright: exactly this cannot be the case a priori. WrightVsPutnam: an erroneous a priori claim. But it gets even worse: extending the argument destroys any attempt to determine truth as essentially independent of evidence (>quantum mechanics/Putnam). 
Anti-Realism, Semantic/Evidence: in contrast to Putnam, may now be satisfied with a "one-way street" (EC, epistemic constraint): EC: If P is true, then there is evidence that it is. Evidence/WrightVsPutnam: truth is constrained by evidence. This leads to a revision of logic. I 64 WrightVsPutnam: he must make counterintuitive revisions. I 66 Def Truth/Peirce: that which is justified at an ideal limit of inquiry, when all empirical information has been obtained. PutnamVsPeirce: one simply cannot know when one has all the information! Wright: ditto. I 68/69 Def Superassertibility: a statement is superassertible if it is justified, or can be justified, and if its justification would survive both arbitrarily close scrutiny of its pedigree and arbitrarily extensive additions to and improvements of the information. Wright: for our purposes it is sufficient that the term is "relatively clear". |
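The quantification over arbitrary extensions of the information in the definition of superassertibility can be mimicked over finite information states. A toy Python model; the evidence set, the defeater mechanism, and the assertibility rule are all assumptions of this sketch, not Wright's apparatus:

```python
# Toy model of superassertibility (cf. the definition in the entry):
# a statement is superassertible iff it is assertible at the current
# information state and stays assertible in every admissible extension
# of that state. The concrete states and rules are illustrative only.
from itertools import combinations

EVIDENCE = {"e1", "e2", "d"}          # "d" is a potential defeater
current = frozenset({"e1"})

def assertible(stmt, state):
    """stmt is a pair (support, defeaters): assertible iff its support
    is contained in the state and no defeater is present."""
    support, defeaters = stmt
    return support <= state and not (defeaters & state)

def extensions(state):
    """All information states extending `state` (including itself)."""
    rest = EVIDENCE - state
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            yield state | frozenset(extra)

def superassertible(stmt, state):
    return all(assertible(stmt, s) for s in extensions(state))

defeasible = (frozenset({"e1"}), frozenset({"d"}))  # undermined by "d"
robust     = (frozenset({"e1"}), frozenset())       # survives all additions

# Assertible now, but not stable under improved information:
assert assertible(defeasible, current) and not superassertible(defeasible, current)
# Assertible now AND in every extension: superassertible.
assert superassertible(robust, current)
```

The contrast between `defeasible` and `robust` is the point of the definition: mere current assertibility is not enough, the justification must survive every admissible enlargement of the information.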
WrightCr I Crispin Wright Truth and Objectivity, Cambridge 1992 German Edition: Wahrheit und Objektivität Frankfurt 2001 WrightGH I Georg Henrik von Wright Explanation and Understanding, New York 1971 German Edition: Erklären und Verstehen Hamburg 2008 |
Disputed term/author/ism | Author |
Entry |
Reference |
---|---|---|---|
Decidability | Field, Hartry | II X Field: Thesis: I am very reluctant to say that undecidable questions of number theory have no determinate truth value. II 349 Gödel Theorem/Undecidability/Truth Value/Field: Thesis: We have seen that the Gödel theorem gives no reason to think that some undecidable propositions have determinate truth values. ((s) That would be an objectivist view >objectivism). II 351 Indeterminacy/Undecidability/Set Theory/Number Theory/Field: Thesis: not only in set theory but also in number theory many undecidable propositions do not have a determinate truth value. |
|