Disputed term/author/ism | Author | Entry | Reference |
---|---|---|---|
Behavior | Gray | Corr I 349 Behavior/Gray: Gray used the language of cybernetics (cf. Wiener 1948)(1) – the science of communication and control, comprising end-goals and feedback processes that keep values within the system under control and guide the organism towards its final goal – in the form of a cns-CNS (conceptual nervous system/central nervous system; >Terminology/Gray) bridge, to show how the flow of information and the control of outputs are achieved (see also Gray 2004)(2). >Reinforcement Sensitivity Theory/Gray, >Conceptual Nervous System/Gray. Gray faced two major problems: first, how to identify the brain systems responsible for behaviour; and, secondly, how to characterize these systems once identified. The individual differences perspective is one major way of identifying major sources of variation in behaviour; by inference, there must be causal systems (i.e., sources) giving rise to observed variations in behaviour. Hans Eysenck's (1947(3), 1957(4), 1967(5)) approach was to use multivariate statistical analysis to identify these major sources of variation in the form of personality dimensions. GrayVsEysenck: Gray accepted that this 'top-down' approach can identify the minimum number of sources of variation (i.e., the 'extraction problem' in factor analysis), but he argued that such statistical approaches can never resolve the correct orientation of the observed dimensions (i.e., the 'rotation problem' in factor analysis). Solution/Gray: a 'bottom-up' approach, resting on other forms of evidence, including the effects of brain lesions, experimental brain research (e.g., intracranial self-stimulation studies), and, most importantly, the effects on behaviour of classes of drugs known to be effective in the treatment of psychiatric disorders – thereby transforming basic pharmacological findings into a valuable neuropsychological theory. 
This was a subtle and clever way to expose the nature of fundamental emotion and motivation systems, especially those implicated in major forms of psychopathology. >Method/Gray, >Fear/Gray. 1. Wiener, N. 1948. Cybernetics, or control and communication in the animal and machine. Cambridge: MIT Press. 2. Gray, J. A. 2004. Consciousness: creeping up on the hard problem. Oxford: Oxford University Press. 3. Eysenck, H. J. 1947. Dimensions of personality. London: Kegan Paul, Trench, Trubner. 4. Eysenck, H. J. 1957. The dynamics of anxiety and hysteria. New York: Praeger. 5. Eysenck, H. J. 1967. The biological basis of personality. Springfield, IL: Thomas. Philip J. Corr, "The Reinforcement Sensitivity Theory of Personality", in: Corr, Ph. J. & Matthews, G. (eds.) 2009. The Cambridge Handbook of Personality Psychology. New York: Cambridge University Press |
Corr I Philip J. Corr Gerald Matthews The Cambridge Handbook of Personality Psychology New York 2009 Corr II Philip J. Corr (Ed.) Personality and Individual Differences - Revisiting the classical studies Singapore, Washington DC, Melbourne 2018 |
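The cybernetic picture Gray borrowed – a system that compares a controlled value against an end-goal and feeds the resulting error back into its output – can be sketched in a few lines of Python. This is a minimal illustration of negative feedback; the function names and the gain parameter are assumptions for the sketch, not anything from Gray or Wiener.

```python
def feedback_step(current, goal, gain=0.5):
    """One cycle of a negative-feedback controller: the discrepancy
    (error) between the end-goal and the current state is fed back,
    scaled by a gain, to correct the system's output."""
    error = goal - current
    return current + gain * error

# Repeated cycles steer the controlled value toward the end-goal.
value = 0.0
for _ in range(20):
    value = feedback_step(value, goal=10.0)
print(round(value, 3))  # converges on 10.0
```

The point of the sketch is only the loop structure: control is achieved not by computing the output once, but by repeatedly comparing the current state to the goal and correcting the difference.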
Compositionality | Brandom | I 504f Compositionality/Frege/Brandom: the same substitutional path that leads from inference to the conceptual content of sentences also leads from the free-standing inferential content of compound sentences to the embedded content of embedded sentence parts and, in the other direction, back to singular terms and predicates. >Singular terms, >Predicates, >Frege principle. I 505 Neutral between bottom-up and top-down. I 506 BrandomVsFrege: blurs the distinction between free-standing and embedded contents. >Subsententials. |
Bra I R. Brandom Making It Explicit. Reasoning, Representing, and Discursive Commitment, Cambridge/MA 1994 German Edition: Expressive Vernunft Frankfurt 2000 Bra II R. Brandom Articulating Reasons. An Introduction to Inferentialism, Cambridge/MA 2001 German Edition: Begründen und Begreifen Frankfurt 2001 |
Deep Learning | Gopnik | Brockman I 224 Deep Learning/Gopnik: A. Bottom-up deep learning: In the 1980s, computer scientists devised an ingenious way to get computers to detect patterns in data: connectionist, or neural-network, architecture (the "neural" part was, and still is, metaphorical). The approach fell into the doldrums in the 1990s but has recently been revived with powerful "deep-learning" methods like Google's DeepMind. E.g., give the program a bunch of Internet images labeled "cat", etc.; the program can use that information to label new images correctly. Unsupervised learning: can detect patterns in data with no labels at all; these programs simply look for clusters of features (factor analysis). Reinforcement learning: In the 1950s, B. F. Skinner, building on the work of John Watson, famously programmed pigeons to perform elaborate actions (…) by giving them a particular schedule of rewards and punishments. The essential idea was that actions that were rewarded would be repeated and those that were punished would not, until the desired behavior was achieved. Even in Skinner's day, this simple process, repeated over and over, could lead to complex behavior. >Conditioning. E.g., researchers at Google's DeepMind used a combination of deep learning and reinforcement learning to teach a computer to play Atari video games. The computer knew nothing about how the games worked. Brockman I 225 These bottom-up systems can generalize to new examples; they can label a Brockman I 226 new image as a cat fairly accurately overall. But they do so in ways quite different from how humans generalize. Some images almost identical to a cat image won't be identified by us as cats at all. Others that look like a random blur will be. B. Top-down Bayesian models: The early attempts to use this approach faced two kinds of problems. 
1st: Most patterns of evidence might in principle be explained by many different hypotheses: it's possible that my journal email message is genuine; it just doesn't seem likely. 2nd: Where do the concepts that the generative models use come from in the first place? Plato and Chomsky said you were born with them. But how can we explain how we learn the latest concepts of science? Solution: Bayesian models combine generative models and hypothesis testing. >Bayesianism. A Bayesian model lets you calculate how likely it is that a particular hypothesis is true, given the data. And by making small but systematic tweaks to the models we already have, and testing them against the data, we can sometimes make new concepts and models from old ones. Brockman I 227 VsBayesianism: The Bayesian techniques can help you choose which of two hypotheses is more likely, but there are almost always an enormous number of possible hypotheses, and no system can efficiently consider them all. How do you decide which hypotheses are worth testing in the first place? Top-down method: E.g., Brenden Lake of New York University and colleagues used top-down methods to solve a problem that is easy for people but extremely difficult for computers: recognizing unfamiliar handwritten characters. Bottom-up method: this method gives the computer thousands of examples (…) and lets it pull out the salient features. Top-down method: Lake et al. gave the program a general model of how you draw a character: a stroke goes either right or left; after you finish one, you start another; and so on. When the program saw a particular character, it could infer the sequence of strokes that were most likely to have led to it (…). Then it could judge whether a new character was likely to result from that sequence or from a different one, and it could produce a similar set of strokes itself. 
The program worked much better than a deep-learning program applied to exactly the same data, and it closely mirrored the performance of human beings. Brockman I 228 Bottom-up: here, the program doesn't need much knowledge to begin with, but it needs a great deal of data, and it can generalize only in a limited way. Top-down: here, the program can learn from just a few examples and make much broader and more varied generalizations, but you need to build much more into it to begin with. Learning in Children/Gopnik: (…) the truly remarkable thing about human children is that they somehow combine the best features of each approach and then go way beyond them. Over the past fifteen years, developmentalists have been exploring the way children learn structure from data. Four-year-olds can learn by taking just one or two examples of data, as a top-down system does, and generalizing to very different concepts. But they can also learn new concepts and models from the data itself, as a bottom-up system does. Young children rapidly learn abstract intuitive theories of biology, physics, and psychology in much the way adult scientists do, even with relatively little data. Gopnik, Alison "AIs versus Four-Year-Olds", in: Brockman, John (ed.) 2019. Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
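The Bayesian move described in the Gopnik entry – calculating how likely a hypothesis is, given the data – is an application of Bayes' rule. A minimal sketch in Python; the priors and likelihoods for the journal-email example are invented numbers for illustration, not figures from the text.

```python
def posterior(prior, likelihood, alt_prior, alt_likelihood):
    """Bayes' rule for two rival hypotheses H and ~H:
    P(H|D) = P(D|H)P(H) / (P(D|H)P(H) + P(D|~H)P(~H))."""
    evidence = likelihood * prior + alt_likelihood * alt_prior
    return likelihood * prior / evidence

# H: the journal email is genuine; ~H: it is spam.
# The odd wording (the data) is far better explained by spam.
p_genuine = posterior(prior=0.7, likelihood=0.05,
                      alt_prior=0.3, alt_likelihood=0.9)
print(round(p_genuine, 3))  # 0.115: possible, just not likely
```

This also shows the limit the entry raises against Bayesianism: the rule compares hypotheses you have already enumerated; it does not tell you which hypotheses are worth testing in the first place.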
Democracy | Johnson | Morozov I 107 Democracy/Politics/Steven Johnson/Morozov: In his Future Perfect(1), Steven Johnson celebrates the advantages of switching to what he calls "liquid democracy": in a traditional democracy, citizens elect representatives to legislate on their behalf; in a liquid democracy, citizens do not have to vote for representatives – they can simply transfer their votes to those who, in their opinion, know more about the issue. I 108 Morozov: the idea is not new; Lewis Carroll already suggested something similar.(2) MorozovVsJohnson: this does not take into account the fact that the legislative process also includes discussion, negotiation, compromise and reflection. I 109 The model of Johnson and Miller(3) assumes that politics is only a kind of referendum. But such referendums only paralyze democracy.(4) I 110 MorozovVsJohnson: he seems to think that, just as we ask our friends where best to eat, we could do the same with political decisions. How strange! 1. St. Johnson, Future Perfect: The Case for Progress in a Networked Age (New York: Penguin, 2012), 170. 2. Lewis Carroll, The Principles of Parliamentary Representation (London: Harrison and Sons, 1884). 3. James C. Miller, "A Program for Direct and Proxy Voting in the Legislative Process," Public Choice 7, no. 1 (1969): 107–113. 4. See Yannis Papadopoulos, "Analysis of Functions and Dysfunctions of Direct Democracy: Top-Down and Bottom-Up Perspectives," Politics & Society 23 (December 1995): 421–448. |
JohnsonSt I Steven Johnson Future Perfect: The Case For Progress In A Networked Age New York 2012 Morozov I Evgeny Morozov To Save Everything, Click Here: The Folly of Technological Solutionism New York 2014 |
Holism | Esfeld | I 16 ~ Holism/Esfeld: e.g., a social community: a social community is more than the mutual dependence of its members' thinking. Social: the social is not rigidly dependent: members die, new members join. A social role, e.g., as a businessman, exists only as part of the community. Generic: some other thing, but no particular one, must exist. Not holistic: purely functionally characterized systems are not holistic: e.g., traffic lights exist and function even without traffic, and vice versa. I 29 Holism/characteristics/Esfeld: a holistic property is not "this individual" and not a disjunction (e.g., "round or angular"). It can be intrinsic or relational (more than causal). It is not correct to say: "the property of being a (holistic) system". An arrangement (which is itself causal) is not enough; an interaction is required. Relational: there must be at least one thing with which it has no common parts. Even being alone is a relational property (Lewis). Holistic properties form a family. They do not have to be the same for every part of the system: e.g., heart/kidney. Holistic properties are relational (the arrangement is already presupposed). They do not have to be intrinsic (e.g., natural numbers). I 28 Causation: causation is not enough; even properties that are causes of things can be intrinsic. They are ontological and not description-dependent. Parts: e.g., bones are not holistic parts, but humans are for a social system. Bones do not constitute a part of a community. The holistic part relation is not transitive – it is narrower than the part relation in mereology. >Mereology, >Part-of-relation, >Parts. I 36 Arrangement property: an arrangement property is not enough: being a heart is an arrangement property – e.g., a heart which the butcher sells is no longer a heart. Therefore the functional definition is not a criterion of holism. A holistic property cannot be detected in a description that the parts can satisfy in isolation. 
I 42 Type A, bottom-up: every constituent must have some holistic properties: every belief, insofar as it has conceptual content, is dependent on other beliefs (e.g., social holism). Type B: holistic properties belong primarily to the system as a whole: e.g., conceptual content, confirmation, justification (e.g., quantum holism). Semantic holism: both A and B are possible. I 50 Confirmation holism leads to semantic holism. Quine's 'Two Dogmas' represents both. >Two Dogmas, >Confirmation. I 366ff Holism/Esfeld: can we merge the holism of physics and the holism of the philosophy of mind? No, we can only follow each in one area and exclude the other. Belief holism can only take into account the conceptual domain (roughly, everyday language), not the quantum mechanical one. Quantum holism is fixed on epistemic self-sufficiency and representationalism. >Quantum mechanics. Epistemic self-sufficiency amounts to internalism: belief states are independent of physical nature (intentional states can be the same in other environments). I 383 Holism/tradition: Parmenides, Spinoza and Bradley stand in the tradition of holism. >B. Spinoza, >Parmenides, >F.H. Bradley. Esfeld himself retains a revised Cartesianism. >Cartesianism, >R. Descartes. |
Es I M. Esfeld Holismus Frankfurt/M 2002 |
Identification | Tugendhat | I 395 Identification/TugendhatVsStrawson: Strawson uses identification in the narrow sense. >P.F. Strawson. Tugendhat: my own notion of "specification" (which of all objects is meant) is superior to this concept. >Specification. "Picking out" (to pick out) is Strawson's expression (adopted from Searle) (Quine: "to specify"). I 400ff Identification/Tugendhat: space-time location: this is an object. Specification: reference, standing for (another term) (against the background of all other objects). >Reference, >Background. I 415 Identification/particular/TugendhatVsStrawson: the space-time relation is not only anchored perceptually but is also a system of possible perception standpoints – thus a system of demonstrative specification (against a background). >Space, >Spacetime. I 417 Through the space-time description the perceptible object is specified as perceptible – something essentially perceivable cannot antecedently be the object that it is. Reference is then a matter of specifying a verification situation. >Verification. I 422 Objects are distinguished only via the variable situations in which perception predicates are used. I 426 Particular/Identification/TugendhatVsStrawson: "here", "now" suffice as objects to make space-time locations existent. >Demonstratives, >Index words, >Indexicality. Space-time locations are the most elementary objects – but there must also be something there, at least hypothetically; then the corresponding question of verification settles which object the singular term stands for. >Singular terms, >Objects. Top-down: the use of all singular terms refers back to demonstrative expressions – bottom-up: the verification situation for the applicability of the predicate is described by demonstratives. I 436 Localization/identification/Tugendhat: only by several speakers – not a zero point, but a set of surrounding objects. The subjective zero point is one's own position. >Subjectivity. 
I 462 Identification/Tugendhat: spatial and temporal relations between objects are insufficient – there are infinitely many space-time locations but only finitely many objects – a space-time system is presupposed – reference to space-time points cannot fail. Talk of existence without location is pointless. Identification is possible only by simultaneous reference to all other (possible) objects. Therefore existence sentences are general. >Existence, >Existence statements. |
Tu I E. Tugendhat Vorlesungen zur Einführung in die Sprachanalytische Philosophie Frankfurt 1976 Tu II E. Tugendhat Philosophische Aufsätze Frankfurt 1992 |
Language | Davidson | I (e) 113 Language/Davidson: Conventions and rules do not explain language; language explains them. >Rules, >Conventions, >Explanation. Glüer II 54 Thesis: the concept of a language is superfluous. There is no such thing as a language, at least not in the sense that many philosophers and linguists claim. Rorty II 21 Davidson/Rorty: "How language works" has little to do with the question "how knowledge works". DavidsonVsTradition/Rorty: Language is not an instrumental system of signs, neither for expression nor for representation. Davidson: There is no such thing as a language, nothing you can learn or master. (There are rather only provisional theories.) There are no conventions governing how we communicate! Davidson: we should come to worship nothing at all; everything – our language, consciousness, community – is a product of time and chance. Brandom I 922 Language/Davidson: is merely a practical, hypothetical necessity, convenient for the community to have. Decisive: how someone would like to be understood – not contents made up prior to mutual interpretation. >Content, >Propositional content, >Interpretation, >Radical interpretation. Brandom I 518 Language/Davidson: interprets linguistic expressions as an aspect of the intentional interpretation of actions – pro top-down. Tarski: neutral between top-down and bottom-up. Glüer II 51 Language/Davidson: every language is accessible through causal relationships – it is ultimately irrelevant for the truth theory which language is actually spoken. >Truth theory. Brandom I 454 Language/Davidson/Rorty: is not a conceptual scheme, but causal interaction with the environment – described by radical interpretation. Then one can no longer ask whether the language "fits" the world. >Conceptual scheme. Rorty III 33 Language/DavidsonVsTradition/Rorty: Language is not a medium, neither of expression nor of representation. Wrong questions: e.g., "What place do values have?", "Are colors more consciousness-dependent than weights?" 
Correct: "Does our use of these words stand in the way of our use of other words?" >Use. Rorty VI 133 Language/Davidson/Rorty: There is no such thing as a language (>Davidson, "A Nice Derangement of Epitaphs")(1): there is no set of conventions that one would have to learn when one learns to speak, no abstract structure that must be internalized. Seel III 28 Language/Davidson: Thesis: Language is not a medium – but mind without world and world without mind are empty concepts. Language does not stand between us and the world. Seeing: we do not see through the eyes but with them. VsMentalese/language of thought: there is no such thing. Language is a part of us; it is an organ of ours. It is the way we have the world. >Mentalese. Medium/Davidson/Seel: "medium" is used here in a very narrow sense. Medium/Gadamer: not an instrument, but an indispensable element of thought. 1. Davidson, D. "A Nice Derangement of Epitaphs" in: LePore, E. (ed.) Truth and Interpretation. Perspectives on the Philosophy of Donald Davidson, New York 1986. |
Davidson I D. Davidson Der Mythos des Subjektiven Stuttgart 1993 Davidson I (a) Donald Davidson "The Conditions of Thought", in: Le Cahier du Collège de Philosophie, Paris 1989, pp. 163-171 In Der Mythos des Subjektiven, Stuttgart 1993 Davidson I (b) Donald Davidson "What is Present to the Mind?" in: J. Brandl/W. Gombocz (eds) The Mind of Donald Davidson, Amsterdam 1989, pp. 3-18 In Der Mythos des Subjektiven, Stuttgart 1993 Davidson I (c) Donald Davidson "Meaning, Truth and Evidence", in: R. Barrett/R. Gibson (eds.) Perspectives on Quine, Cambridge/MA 1990, pp. 68-79 In Der Mythos des Subjektiven, Stuttgart 1993 Davidson I (d) Donald Davidson "Epistemology Externalized", Ms 1989 In Der Mythos des Subjektiven, Stuttgart 1993 Davidson I (e) Donald Davidson "The Myth of the Subjective", in: M. Benedikt/R. Burger (eds.) Bewußtsein, Sprache und die Kunst, Wien 1988, pp. 45-54 In Der Mythos des Subjektiven, Stuttgart 1993 Davidson II Donald Davidson "Reply to Foster" In Truth and Meaning, G. Evans/J. McDowell Oxford 1976 Davidson III D. Davidson Essays on Actions and Events, Oxford 1980 German Edition: Handlung und Ereignis Frankfurt 1990 Davidson IV D. Davidson Inquiries into Truth and Interpretation, Oxford 1984 German Edition: Wahrheit und Interpretation Frankfurt 1990 Davidson V Donald Davidson "Rational Animals", in: D. Davidson, Subjective, Intersubjective, Objective, Oxford 2001, pp. 95-105 In Der Geist der Tiere, D. Perler/M. Wild Frankfurt/M. 2005 D II K. Glüer D. Davidson Zur Einführung Hamburg 1993 Rorty I Richard Rorty Philosophy and the Mirror of Nature, Princeton/NJ 1979 German Edition: Der Spiegel der Natur Frankfurt 1997 Rorty II Richard Rorty Philosophie & die Zukunft Frankfurt 2000 Rorty II (b) Richard Rorty "Habermas, Derrida and the Functions of Philosophy", in: R. Rorty, Truth and Progress. Philosophical Papers III, Cambridge/MA 1998 In Philosophie & die Zukunft, Frankfurt/M. 
2000 Rorty II (c) Richard Rorty Analytic and Conversational Philosophy, conference paper "Philosophy and the other humanities", Stanford Humanities Center 1998 In Philosophie & die Zukunft, Frankfurt/M. 2000 Rorty II (d) Richard Rorty Justice as a Larger Loyalty, in: Ronald Bontekoe/Marietta Stepanians (eds.) Justice and Democracy. Cross-cultural Perspectives, University of Hawaii 1997 In Philosophie & die Zukunft, Frankfurt/M. 2000 Rorty II (e) Richard Rorty Spinoza, Pragmatismus und die Liebe zur Weisheit, Revised Spinoza Lecture April 1997, University of Amsterdam In Philosophie & die Zukunft, Frankfurt/M. 2000 Rorty II (f) Richard Rorty "Sein, das verstanden werden kann, ist Sprache", keynote lecture for Gadamer's 100th birthday, University of Heidelberg In Philosophie & die Zukunft, Frankfurt/M. 2000 Rorty II (g) Richard Rorty "Wild Orchids and Trotsky", in: Wild Orchids and Trotsky: Messages from American Universities, ed. Mark Edmundson, New York 1993 In Philosophie & die Zukunft, Frankfurt/M. 2000 Rorty III Richard Rorty Contingency, Irony, and Solidarity, Cambridge/MA 1989 German Edition: Kontingenz, Ironie und Solidarität Frankfurt 1992 Rorty IV (a) Richard Rorty "Is Philosophy a Natural Kind?", in: R. Rorty, Objectivity, Relativism, and Truth. Philosophical Papers Vol. I, Cambridge/MA 1991, pp. 46-62 In Eine Kultur ohne Zentrum, Stuttgart 1993 Rorty IV (b) Richard Rorty "Non-Reductive Physicalism" in: R. Rorty, Objectivity, Relativism, and Truth. Philosophical Papers Vol. I, Cambridge/MA 1991, pp. 113-125 In Eine Kultur ohne Zentrum, Stuttgart 1993 Rorty IV (c) Richard Rorty "Heidegger, Kundera and Dickens" in: R. Rorty, Essays on Heidegger and Others. Philosophical Papers Vol. 2, Cambridge/MA 1991, pp. 66-82 In Eine Kultur ohne Zentrum, Stuttgart 1993 Rorty IV (d) Richard Rorty "Deconstruction and Circumvention" in: R. Rorty, Essays on Heidegger and Others. Philosophical Papers Vol. 2, Cambridge/MA 1991, pp. 
85-106 In Eine Kultur ohne Zentrum, Stuttgart 1993 Rorty V (a) R. Rorty "Solidarity or Objectivity", Howison Lecture, University of California, Berkeley, January 1983 In Solidarität oder Objektivität?, Stuttgart 1988 Rorty V (b) Richard Rorty "Freud and Moral Reflection", Edith Weigert Lecture, Forum on Psychiatry and the Humanities, Washington School of Psychiatry, Oct. 19th 1984 In Solidarität oder Objektivität?, Stuttgart 1988 Rorty V (c) Richard Rorty The Priority of Democracy to Philosophy, in: John P. Reeder & Gene Outka (eds.), Prospects for a Common Morality, Princeton University Press, pp. 254-278 (1992) In Solidarität oder Objektivität?, Stuttgart 1988 Rorty VI Richard Rorty Truth and Progress, Cambridge/MA 1998 German Edition: Wahrheit und Fortschritt Frankfurt 2000 Bra I R. Brandom Making It Explicit. Reasoning, Representing, and Discursive Commitment, Cambridge/MA 1994 German Edition: Expressive Vernunft Frankfurt 2000 Bra II R. Brandom Articulating Reasons. An Introduction to Inferentialism, Cambridge/MA 2001 German Edition: Begründen und Begreifen Frankfurt 2001 Seel I M. Seel Die Kunst der Entzweiung Frankfurt 1997 Seel II M. Seel Ästhetik des Erscheinens München 2000 Seel III M. Seel Vom Handwerk der Philosophie München 2001 |
Logic | Brandom | I 164 Logic/Brandom: logic is not restricted to formally valid inferences. BrandomVsFormalism: one would otherwise have to attribute tacit premises and implicit logical rules to everyone. Dummett: one should not define logical consequence in terms of logical truth. I 167 Achilles and the tortoise/Carroll: some inferential transitions must remain implicit – there must be rules, not only truths. >Rules. --- II 47 Logic tells us something about conceptual contents. Its task is not to prove something: the formal correctnesses are derived from the material correctnesses, which contain much more non-logical vocabulary. --- I 175 Logic/Frege/Brandom: the task is an expressive one: not to prove something, but to say it – even in science, concepts are formed arbitrarily. The goal is not a certain kind of truth but a certain kind of inference. I 176 Conceptual contents are taken to be identified through their inferential role – which requires that one can speak meaningfully about consequences even before a specific logical vocabulary is introduced. >Inferential role. I 542 Logic/Brandom: the use of identity and quantifiers requires the use of singular terms and predicates. >Quantifiers, >Singular terms, >Predicates. Terms (symmetric) must be interchangeable (identity) – predicates (asymmetric) must provide the frame for expressing incompatibilities. BrandomVsFormalism: correctnesses of inference are not always the same as logical correctness. --- II 24 Logic/tradition: bottom-up: from the analysis of the meanings of singular terms to judgments. II 25 Brandom, new: top-down: pragmatism: first the use of terms – ((s) always in complete sentences.) |
Bra I R. Brandom Making it exlicit. Reasoning, Representing, and Discursive Commitment, Cambridge/MA 1994 German Edition: Expressive Vernunft Frankfurt 2000 Bra II R. Brandom Articulating reasons. An Introduction to Inferentialism, Cambridge/MA 2001 German Edition: Begründen und Begreifen Frankfurt 2001 |
Multi-valued Logic | Brandom | I 487 Multi-valued logic/Brandom: Definition designated: the fact that a statement has a truth value at all. >Truth values. Designation indicates what counts as truth. >Truth. Being designated requires a definition in terms of assertion. Definition multi-value: embedded content – ((s) a particular one of several possible truth values). Interpretation: assigns two types of value: a) whether designated, b) which multi-value. Standard situation: it is defined which multi-values are designated. Designation value: everything that plays a role for the pragmatic significance of free-standing sentences. Bottom-up: designation > formal validity. Basic principle: substitution with the same multi-value never changes the designation. I 488 Multi-values = equivalence classes of logically interderivable sentences – designation = logical validity. >Validity. |
Bra I R. Brandom Making it exlicit. Reasoning, Representing, and Discursive Commitment, Cambridge/MA 1994 German Edition: Expressive Vernunft Frankfurt 2000 Bra II R. Brandom Articulating reasons. An Introduction to Inferentialism, Cambridge/MA 2001 German Edition: Begründen und Begreifen Frankfurt 2001 |
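The distinction drawn in the multi-valued logic entry – between which multi-value a sentence receives and whether that value is designated – can be illustrated with a small strong-Kleene three-valued logic. The Kleene truth tables and the choice of {T} as the designated set are illustrative assumptions for the sketch, not Brandom's own example.

```python
# Strong Kleene three-valued logic with values F < U < T
# (U = undetermined); conjunction is minimum, disjunction maximum.
ORDER = {'F': 0, 'U': 1, 'T': 2}
DESIGNATED = {'T'}  # assumption: only T counts as "true" for assertion

def conj(a, b):
    return min(a, b, key=ORDER.get)

def disj(a, b):
    return max(a, b, key=ORDER.get)

def designated(v):
    """An interpretation assigns each sentence a multi-value (which of
    the three values it has) and thereby a designation status
    (whether that value counts as truth)."""
    return v in DESIGNATED

# Two compounds can share a designation status (both undesignated)
# while differing in multi-value (U vs F); substituting a subsentence
# with the same multi-value never changes the designation.
print(conj('T', 'U'), disj('F', 'F'), designated(conj('T', 'U')))
```

Under such a scheme, formal validity can be defined as receiving a designated value on every interpretation, matching the entry's equation of designation with logical validity.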
Particulars | Tugendhat | I 422 Particulars/TugendhatVsDonnellan: localizing identifications are fundamental. Cf. >Individuation/Strawson, >Individuation, >Identification, >Localization. With these, there is no longer a distinction between referential and attributive use. >Attributive/referential. The attributive use is also referential in a broad sense because, although it does not identify the object, it specifies it (distinguishes it against a background). >Specification. I 426 Particular/Identification/TugendhatVsStrawson: "here", "now" suffice to make objects and space-time places existent. >Demonstratives, >Logical proper names. Space-time places are the most elementary objects. >Ontology. But there must be something there – at least hypothetically; then the corresponding question of verification settles which object the singular term stands for. >Singular terms, >Empty space, >Substantivalism, >Relationism. Top-down: the use of all singular terms refers back to demonstrative expressions. Bottom-up: demonstratives describe the verification situation for the predicate to be true. >Predicates, >Satisfaction, >Situation. |
Tu I E. Tugendhat Vorlesungen zur Einführung in die Sprachanalytische Philosophie Frankfurt 1976 Tu II E. Tugendhat Philosophische Aufsätze Frankfurt 1992 |
Personality Traits | Allport | Corr II 29 Trait-names/personality traits/lexicon/study background/ Allport/Odbert/Saucier: The essence of [Allport’s and Odbert’s article ‘Trait-names: A psycho-lexical study’] was a classification of (…) English ‘trait-name’ words (terms distinguishing the behavior of one human being from another) into four categories. (…) from a scientific standpoint, some of the most basic personality attributes might be discovered from studying conceptions implicit in use of the natural language. If a distinction is highly represented in the lexicon – and found in any dictionary – it can be presumed to have practical importance. This is because the degree of representation of an attribute in language has some correspondence with the general importance of the attribute in real-world transactions. Therefore, when a scientist identifies personality attributes that are strongly represented in the natural language, that scientist is simultaneously identifying what may be the most important attributes. >H.S. Odbert, >G. Allport. II 30 Study Design/Allport/Odbert: Allport and Odbert turned to Webster’s New International Dictionary (1925)(1), a compendium of approximately 400,000 separate terms. Combining judgments of three investigators (themselves plus a person designated only as ‘AL’, (…)), they built a list of 17,953 trait-names in the English language that drew on the following criterion for inclusion: ‘the capacity of any term to distinguish the behavior of one human being from that of another’ (p. 24) (1). Allport and Odbert went further and differentiated terms into four categories or columns. The (…) terms in Column I were ‘neutral terms designating possible II 31 personal traits’ (p. 38)(1), more specifically defined as ‘generalized and personalized determining tendencies – consistent and stable modes of an individual’s adjustment’ to his/her environment (p. 26)(1). 
The (…) terms in Column II were ‘terms primarily descriptive of temporary moods or activities’ (…). The (…) terms in Column III were ‘weighted terms conveying social and characterial judgments of personal conduct, or designated influence on others’ (p. 27)(1) (…).The other (…) terms fell into the miscellaneous category in Column IV, labeled as ‘metaphorical and doubtful terms’ (p. 38)(1). This last grab-bag category included terms describing physical characteristics and various abilities (…). II 33 Findings/Allport/Odbert: 1. Allport and Odbert cogently argue that, basically, normal human life cannot proceed without some reference to personality dispositions. There is no better argument than their trenchant words from the monograph: “Even the psychologist who inveighs against traits, and denies that their symbolic existence conforms to ‘real existence’ will nevertheless write a convincing letter of recommendation to prove that one of his favorite students is ‘trustworthy, self-reliant, and keenly critical’” (pp. 4–5)(1). 2. Allport and Odbert indicate that the dispositions to which trait-names refer are more than conversational artifact, a form of everyday error (though in part they may be that). They are to some degree useful for understanding and prediction, as confirmed by later research (Roberts et al., 2007)(3). [The follow-on assertion constitutes that] the degree of representation of an attribute in language has some correspondence with the general importance of the attribute in real-world transactions. II 34 3. (…) science can lean on and build on the body of commonsense concepts in language. Rather than relying exclusively on the top-down gambits of theorists, there is opportunity for a generative bottom-up approach. II 35 4. (…) Allport and Odbert recognized a difficulty inherent in personality language: trait-names mean different things to different people. To a degree, these meanings are contingent on one’s ‘habits of thought’ (p. 4)(1). 
One reason builds on the polysemy (multiple distinct meanings) that many words have. 5. Within science, the difficulty might be even further resolved by explicit communication and consensus. For Allport and Odbert, this meant naming traits in a careful and logical way, and not merely codifying but also ‘purifying’ natural-language terminology (p. vi)(1). II 36 6. Allport and Odbert’s prime interest was in tendencies that are ‘consistent and stable modes of an individual’s adjustment to his environment’ rather than ‘merely temporary and specific behavior’ (p. 26)(1). 7. (…) trait-names reflect a combination of the biophysical influences and something more cultural (perhaps historically varying). (…) characterizations of human qualities are determined partly by ‘standards and interests peculiar to the times’ (p. 2)(1) in a particular social epoch. [In this way] trait-names are partly ‘invented in accordance with cultural demands’ (p. 3)(1). II 37 VsAllport/VsOdbert: 1. (…) they ignore and give short shrift to culture, both with regard to issues of cross-cultural generalizability and of how traits themselves may reflect culture-relevant contents. 2. According to their distinctive ‘trait hypothesis’ (p. 12), no two persons ‘possess precisely the same trait’ (p. 14)(1) and each ‘individual differs in every one of his traits from every other individual’ (p. 18)(1). The problem is not that individualism is wrong; rather, it may be ethnocentric to impose an individualistic filter throughout personality psychology, and in fact such idiothetic approaches are outside the mainstream of current and recent personality psychology. II 38 3. Another aspect of the thinking (…) that might appear odd, in retrospect, is the notion of a single, cardinal trait that provides determining tendencies in an individual life. (…) a particular attribute becomes so pervasive in a person that it becomes a distinct focus of organization.
Seventy years later, there seems still to be a lack of evidence for cardinal traits that perform a more or less hostile take-over, coming to determine and structure the remainder of the personality system. II 39 4. Allport and Odbert argue for the desirability of neutral terminology in science. Unfortunately, it appears that they extend the desire for unweighted emotion-free vocabulary into the very attribute-contents evident in the trait-names in language, with confusing consequences. On this view, the trait-names in language that are judgmental and ‘emotionally toned’ (p. v)(1), having affective polarity, are suspect and less worthy of study than the neutral ones. But affectively toned concepts like evil and virtue are particularly worthy of study precisely because of their extreme affective tone (…). II 40 5. (…) the numerically largest category of trait-names was social evaluation. However, they offer no account of why the third column – reflecting social judgments likely unconnected with biophysical traits – would be the biggest component in person perception. 6. (…) the notion that censorial and moral terms – and virtues, II 41 vices, whatever is associated with blame or praise, not to mention social effects – have no use for a psychologist seems now obsolete. 7. To accept at face value the particular Allport and Odbert classification of trait-names into four categories is to take on the assumptions of a specialized theory of traits, whose main propositions can be construed based on the classification itself. (…) attention to emotions and morality would distract us from the central aspects of personality which reflect enduring consistencies operating intrinsically in the person, and outside the influence of society (…).
Roberts, B. W., Kuncel, N. R., Shiner, R., Caspi, A., & Goldberg, L. R. (2007). The power of personality: The comparative validity of personality traits, socioeconomic status, and cognitive ability for predicting important life outcomes. Perspectives on Psychological Science, 2, 313–345. Saucier, Gerard: “Classification of Trait-Names Revisiting Allport and Odbert (1936)”, In: Philip Corr (Ed.), 2018. Personality and Individual Differences. Revisiting the classical studies. Singapore, Washington DC, Melbourne: Sage, pp. 29-45. |
Corr I Philip J. Corr Gerald Matthews The Cambridge Handbook of Personality Psychology New York 2009 Corr II Philip J. Corr (Ed.) Personality and Individual Differences - Revisiting the classical studies Singapore, Washington DC, Melbourne 2018 |
Personality Traits | Odbert | Corr II 29 Trait-names/personality traits/lexicon/study background/ Allport/Odbert/Saucier: The essence of [Allport’s and Odbert’s article ‘Trait-names: A psycho-lexical study’] was a classification of (…) English ‘trait-name’ words (terms distinguishing the behavior of one human being from another) into four categories. >Lexical hypothesis, >Lexical studies. (…) from a scientific standpoint, some of the most basic personality attributes might be discovered from studying conceptions implicit in use of the natural language. >Everyday language, >Concepts, >Language use, >Language community, >Personality, >Character traits. If a distinction is highly represented in the lexicon – and found in any dictionary – it can be presumed to have practical importance. This is because the degree of representation of an attribute in language has some correspondence with the general importance of the attribute in real-world transactions. Therefore, when a scientist identifies personality attributes that are strongly represented in the natural language, that scientist is simultaneously identifying what may be the most important attributes. >Relevance. II 30 Study Design/Allport/Odbert: Allport and Odbert turned to Webster’s New International Dictionary (1925)(1), a compendium of approximately 400,000 separate terms. Combining judgments of three investigators (themselves plus a person designated only as ‘AL’, (…)), they built a list of 17,953 trait-names in the English language that drew on the following criterion for inclusion: ‘the capacity of any term to distinguish the behavior of one human being from that of another’ (p. 24) (1). Allport and Odbert went further and differentiated terms into four categories or columns. The (…) terms in Column I were ‘neutral terms designating possible II 31 personal traits’ (p. 
38)(1), more specifically defined as ‘generalized and personalized determining tendencies – consistent and stable modes of an individual’s adjustment’ to his/her environment (p. 26)(1). The (…) terms in Column II were ‘terms primarily descriptive of temporary moods or activities’ (…). The (…) terms in Column III were ‘weighted terms conveying social and characterial judgments of personal conduct, or designated influence on others’ (p. 27)(1) (…). The other (…) terms fell into the miscellaneous category in Column IV, labeled as ‘metaphorical and doubtful terms’ (p. 38)(1). This last grab-bag category included terms describing physical characteristics and various abilities (…). II 33 Findings/Allport/Odbert: 1. Allport and Odbert cogently argue that, basically, normal human life cannot proceed without some reference to personality dispositions. There is no better argument than their trenchant words from the monograph: “Even the psychologist who inveighs against traits, and denies that their symbolic existence conforms to ‘real existence’ will nevertheless write a convincing letter of recommendation to prove that one of his favorite students is ‘trustworthy, self-reliant, and keenly critical’” (pp. 4–5)(1). 2. Allport and Odbert indicate that the dispositions to which trait-names refer are more than a conversational artifact, a form of everyday error (though in part they may be that). They are to some degree useful for understanding and prediction, as confirmed by later research (Roberts et al., 2007)(3). [The follow-on assertion is that] the degree of representation of an attribute in language has some correspondence with the general importance of the attribute in real-world transactions. >Dispositions, >Representation. II 34 3. (…) science can lean on and build on the body of commonsense concepts in language. Rather than relying exclusively on the top-down gambits of theorists, there is opportunity for a generative bottom-up approach. II 35 4.
(…) Allport and Odbert recognized a difficulty inherent in personality language: trait-names mean different things to different people. To a degree, these meanings are contingent on one’s ‘habits of thought’ (p. 4)(1). One reason builds on the polysemy (multiple distinct meanings) that many words have. >Conventions, >Language use, >Language community, >Meaning, >Reference. 5. Within science, the difficulty might be even further resolved by explicit communication and consensus. For Allport and Odbert, this meant naming traits in a careful and logical way, and not merely codifying but also ‘purifying’ natural-language terminology (p. vi)(1). II 36 6. Allport and Odbert’s prime interest was in tendencies that are ‘consistent and stable modes of an individual’s adjustment to his environment’ rather than ‘merely temporary and specific behavior’ (p. 26)(1). 7. (…) trait-names reflect a combination of the biophysical influences and something more cultural (perhaps historically varying). (…) characterizations of human qualities are determined partly by ‘standards and interests peculiar to the times’ (p. 2)(1) in a particular social epoch. [In this way] trait-names are partly ‘invented in accordance with cultural demands’ (p. 3)(1). II 37 VsAllport/VsOdbert: 1. (…) they ignore and give short shrift to culture, both with regard to issues of cross-cultural generalizability and of how traits themselves may reflect culture-relevant contents. 2. According to their distinctive ‘trait hypothesis’ (p. 12), no two persons ‘possess precisely the same trait’ (p. 14)(1) and each ‘individual differs in every one of his traits from every other individual’ (p. 18)(1). The problem is not that individualism is wrong; rather, it may be ethnocentric to impose an individualistic filter throughout personality psychology, and in fact such idiothetic approaches are outside the mainstream of current and recent personality psychology. II 38 3.
Another aspect of the thinking (…) that might appear odd, in retrospect, is the notion of a single, cardinal trait that provides determining tendencies in an individual life. (…) a particular attribute becomes so pervasive in a person that it becomes a distinct focus of organization. Seventy years later, there seems still to be a lack of evidence for cardinal traits that perform a more or less hostile take-over, coming to determine and structure the remainder of the personality system. II 39 4. Allport and Odbert argue for the desirability of neutral terminology in science. Unfortunately, it appears that they extend the desire for unweighted emotion-free vocabulary into the very attribute-contents evident in the trait-names in language, with confusing consequences. On this view, the trait-names in language that are judgmental and ‘emotionally toned’ (p. v)(1), having affective polarity, are suspect and less worthy of study than the neutral ones. But affectively toned concepts like evil and virtue are particularly worthy of study precisely because of their extreme affective tone (…). II 40 5. (…) the numerically largest category of trait-names was social evaluation. However, they offer no account of why the third column – reflecting social judgments likely unconnected with biophysical traits – would be the biggest component in person perception. 6. (…) the notion that censorial and moral terms – and virtues, II 41 vices, whatever is associated with blame or praise, not to mention social effects – have no use for a psychologist seems now obsolete. 7. To accept at face value the particular Allport and Odbert classification of trait-names into four categories is to take on the assumptions of a specialized theory of traits, whose main propositions can be construed based on the classification itself.
(…) attention to emotions and morality would distract us from the central aspects of personality which reflect enduring consistencies operating intrinsically in the person, and outside the influence of society (…). 1. Webster’s new international dictionary of the English language (1925). Springfield, MA: Merriam. 2. Allport, G. W., & Odbert, H. S. (1936). Trait-names: A psycho-lexical study. Psychological Monographs, 47 (1, Whole No. 211). 3. Roberts, B. W., Kuncel, N. R., Shiner, R., Caspi, A., & Goldberg, L. R. (2007). The power of personality: The comparative validity of personality traits, socioeconomic status, and cognitive ability for predicting important life outcomes. Perspectives on Psychological Science, 2, 313–345. Saucier, Gerard: “Classification of Trait-Names Revisiting Allport and Odbert (1936)”, In: Philip Corr (Ed.), 2018. Personality and Individual Differences. Revisiting the classical studies. Singapore, Washington DC, Melbourne: Sage, pp. 29-45. |
Corr I Philip J. Corr Gerald Matthews The Cambridge Handbook of Personality Psychology New York 2009 Corr II Philip J. Corr (Ed.) Personality and Individual Differences - Revisiting the classical studies Singapore, Washington DC, Melbourne 2018 |
Prior Knowledge | Norvig | Norvig I 777 Prior knowledge/AI Research/Norvig/Russell: To understand the role of prior knowledge, we need to talk about the logical relationships among hypotheses, example descriptions, and classifications. Let Descriptions denote the conjunction of all the example descriptions in the training set, and let Classifications denote the conjunction of all the example classifications. Then a Hypothesis that “explains the observations” must satisfy the following property (recall that |= means “logically entails”): Hypothesis ∧ Descriptions |= Classifications. Entailment constraint: We call this kind of relationship an entailment constraint, in which Hypothesis is the “unknown.” Pure inductive learning means solving this constraint, where Hypothesis is drawn from some predefined hypothesis space. >Hypotheses/AI Research. Software agents/knowledge/learning/Norvig: The modern approach is to design agents that already know something and are trying to learn some more. An autonomous learning agent that uses background knowledge must somehow obtain the background knowledge in the first place (…). This method must itself be a learning process. The agent’s life history will therefore be characterized by cumulative, or incremental, development. Norvig I 778 Learning with background knowledge: allows much faster learning than one might expect from a pure induction program. Explanation-based learning/EBL: the entailment constraints satisfied by EBL are the following: Hypothesis ∧ Descriptions |= Classifications Background |= Hypothesis. Norvig I 779 (…) it was initially thought to be a way to learn from examples. But because it requires that the background knowledge be sufficient to explain the hypothesis, which in turn explains the observations, the agent does not actually learn anything factually new from the example.
The agent could have derived the example from what it already knew, although that might have required an unreasonable amount of computation. EBL is now viewed as a method for converting first-principles theories into useful, special purpose knowledge. Relevance/observations/RBL: the prior knowledge background concerns the relevance of a set of features to the goal predicate. This knowledge, together with the observations, allows the agent to infer a new, general rule that explains the observations: Hypothesis ∧ Descriptions |= Classifications , Background ∧ Descriptions ∧ Classifications |= Hypothesis. We call this kind of generalization relevance-based learning, or RBL. (…) whereas RBL does make use of the content of the observations, it does not produce hypotheses that go beyond the logical content of the background knowledge and the observations. It is a deductive form of learning and cannot by itself account for the creation of new knowledge starting from scratch. Entailment constraint: Background ∧ Hypothesis ∧ Descriptions |= Classifications. That is, the background knowledge and the new hypothesis combine to explain the examples. Knowledge-based inductive learning/KBIL algorithms: Algorithms that satisfy [the entailment] constraint are called knowledge-based inductive learning, or KBIL, algorithms. KBIL algorithms, (…) have been studied mainly in the field of inductive logic programming, or ILP. Norvig I 780 Explanation-based learning: The basic idea of memo functions is to accumulate a database of input–output pairs; when the function is called, it first checks the database to see whether it can avoid solving the problem from scratch. Explanation-based learning takes this a good deal further, by creating general rules that cover an entire class of cases. 
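The memo-function idea described above — accumulate a database of input–output pairs and consult it before solving from scratch — can be sketched in a few lines. This is an illustrative sketch, not code from the source; the names `memoize` and `fib` are chosen for the example:

```python
# Minimal sketch of a memo function: cache input–output pairs and
# check the cache before solving a problem from scratch.
def memoize(solve):
    cache = {}                      # database of input–output pairs
    def memo_solve(problem):
        if problem not in cache:    # only solve problems not seen before
            cache[problem] = solve(problem)
        return cache[problem]
    return memo_solve

@memoize
def fib(n):
    # An expensive recursive computation standing in for "solving a problem";
    # repeated subproblems are answered from the cache instead of recomputed.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # fast, because shared subproblems are cached
```

EBL, as the text notes, goes a step further than such a cache: instead of storing single input–output pairs, it stores general rules that cover a whole class of cases.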
Norvig I 781 General rules: The basic idea behind EBL is first to construct an explanation of the observation using prior knowledge, and then to establish a definition of the class of cases for which the same explanation structure can be used. This definition provides the basis for a rule covering all of the cases in the class. Explanation: The “explanation” can be a logical proof, but more generally it can be any reasoning or problem-solving process whose steps are well defined. The key is to be able to identify the necessary conditions for those same steps to apply to another case. Norvig I 782 EBL: 1. Given an example, construct a proof that the goal predicate applies to the example using the available background knowledge. Norvig I 783 2. In parallel, construct a generalized proof tree for the variabilized goal using the same inference steps as in the original proof. 3. Construct a new rule whose left-hand side consists of the leaves of the proof tree and whose right-hand side is the variabilized goal (after applying the necessary bindings from the generalized proof). 4. Drop any conditions from the left-hand side that are true regardless of the values of the variables in the goal. Norvig I 794 Inverse resolution: Inverse resolution is based on the observation that if the example Classifications follow from Background ∧ Hypothesis ∧ Descriptions, then one must be able to prove this fact by resolution (because resolution is complete). If we can “run the proof backward,” then we can find a Hypothesis such that the proof goes through. Norvig I 795 Inverse entailment: The idea is to change the entailment constraint Background ∧ Hypothesis ∧ Descriptions |= Classifications to the logically equivalent form Background ∧ Descriptions ∧ ¬Classifications |= ¬Hypothesis. Norvig I 796 An inverse resolution procedure that inverts a complete resolution strategy is, in principle, a complete algorithm for learning first-order theories. 
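The rearrangement behind inverse entailment — from Background ∧ Hypothesis ∧ Descriptions |= Classifications to Background ∧ Descriptions ∧ ¬Classifications |= ¬Hypothesis — can be sanity-checked on a toy propositional version, treating Background, Hypothesis, Descriptions and Classifications as atomic propositions (a deliberate simplification of the first-order case; the code and names are illustrative, not from the source):

```python
from itertools import product

def forward(b, h, d, c):
    # Per-model reading of  B ∧ H ∧ D |= C:  (B ∧ H ∧ D) → C
    return not (b and h and d) or c

def inverse(b, h, d, c):
    # Per-model reading of  B ∧ D ∧ ¬C |= ¬H:  (B ∧ D ∧ ¬C) → ¬H
    return not (b and d and not c) or not h

# The two forms agree in every one of the 16 truth assignments,
# i.e. they impose the same constraint on models.
assert all(forward(*m) == inverse(*m)
           for m in product([False, True], repeat=4))
print("equivalent in all models")
```

Because the two forms coincide model by model, solving for ¬Hypothesis under the second form is logically the same task as solving the original entailment constraint, which is what licenses "running the proof backward".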
That is, if some unknown Hypothesis generates a set of examples, then an inverse resolution procedure can generate Hypothesis from the examples. This observation suggests an interesting possibility: Suppose that the available examples include a variety of trajectories of falling bodies. Would an inverse resolution program be theoretically capable of inferring the law of gravity? The answer is clearly yes, because the law of gravity allows one to explain the examples, given suitable background mathematics. Norvig I 798 Literature: The current-best-hypothesis approach is an old idea in philosophy (Mill, 1843)(1). Early work in cognitive psychology also suggested that it is a natural form of concept learning in humans (Bruner et al., 1957)(2). In AI, the approach is most closely associated with the work of Patrick Winston, whose Ph.D. thesis (Winston, 1970)(3) addressed the problem of learning descriptions of complex objects. Version space: The version space method (Mitchell, 1977(4), 1982(5)) takes a different approach, maintaining the set of all consistent hypotheses and eliminating those found to be inconsistent with new examples. The approach was used in the Meta-DENDRAL Norvig I 799 expert system for chemistry (Buchanan and Mitchell, 1978)(6), and later in Mitchell’s (1983)(7) LEX system, which learns to solve calculus problems. A third influential thread was formed by the work of Michalski and colleagues on the AQ series of algorithms, which learned sets of logical rules (Michalski, 1969(8); Michalski et al., 1986(9)). EBL: EBL had its roots in the techniques used by the STRIPS planner (Fikes et al., 1972)(10). When a plan was constructed, a generalized version of it was saved in a plan library and used in later planning as a macro-operator. Similar ideas appeared in Anderson’s ACT* architecture, under the heading of knowledge compilation (Anderson, 1983)(11), and in the SOAR architecture, as chunking (Laird et al., 1986)(12).
Schema acquisition (DeJong, 1981)(13), analytical generalization (Mitchell, 1982)(5), and constraint-based generalization (Minton, 1984)(14) were immediate precursors of the rapid growth of interest in EBL stimulated by the papers of Mitchell et al. (1986)(15) and DeJong and Mooney (1986)(16). Hirsh (1987)(17) introduced the EBL algorithm described in the text, showing how it could be incorporated directly into a logic programming system. Van Harmelen and Bundy (1988)(18) explain EBL as a variant of the partial evaluation method used in program analysis systems (Jones et al., 1993)(19). VsEBL: Initial enthusiasm for EBL was tempered by Minton’s finding (1988)(20) that, without extensive extra work, EBL could easily slow down a program significantly. Formal probabilistic analysis of the expected payoff of EBL can be found in Greiner (1989)(21) and Subramanian and Feldman (1990)(22). An excellent survey of early work on EBL appears in Dietterich (1990)(23). Relevance: Relevance information in the form of functional dependencies was first developed in the database community, where it is used to structure large sets of attributes into manageable subsets. Functional dependencies were used for analogical reasoning by Carbonell and Collins (1973)(24) and rediscovered and given a full logical analysis by Davies and Russell (Davies, 1985(25); Davies and Russell, 1987(26)). Prior knowledge: Their role as prior knowledge in inductive learning was explored by Russell and Grosof (1987)(27). The equivalence of determinations to a restricted-vocabulary hypothesis space was proved in Russell (1988)(28). Learning: Learning algorithms for determinations and the improved performance obtained by RBDTL were first shown in the FOCUS algorithm, due to Almuallim and Dietterich (1991)(29). Tadepalli (1993)(30) describes a very ingenious algorithm for learning with determinations that shows large improvements in learning speed.
Inverse deduction: The idea that inductive learning can be performed by inverse deduction can be traced to W. S. Jevons (1874)(31) (…). Computational investigations began with the remarkable Ph.D. thesis by Norvig I 800 Gordon Plotkin (1971)(32) at Edinburgh. Although Plotkin developed many of the theorems and methods that are in current use in ILP, he was discouraged by some undecidability results for certain subproblems in induction. MIS (Shapiro, 1981)(33) reintroduced the problem of learning logic programs, but was seen mainly as a contribution to the theory of automated debugging. Induction/rules: Work on rule induction, such as the ID3 (Quinlan, 1986)(34) and CN2 (Clark and Niblett, 1989)(35) systems, led to FOIL (Quinlan, 1990)(36), which for the first time allowed practical induction of relational rules. Relational Learning: The field of relational learning was reinvigorated by Muggleton and Buntine (1988)(37), whose CIGOL program incorporated a slightly incomplete version of inverse resolution and was capable of generating new predicates. The inverse resolution method also appears in (Russell, 1986)(38), with a simple algorithm given in a footnote. The next major system was GOLEM (Muggleton and Feng, 1990)(39), which uses a covering algorithm based on Plotkin’s concept of relative least general generalization. ITOU (Rouveirol and Puget, 1989)(40) and CLINT (De Raedt, 1992)(41) were other systems of that era. Natural language: More recently, PROGOL (Muggleton, 1995)(42) has taken a hybrid (top-down and bottom-up) approach to inverse entailment and has been applied to a number of practical problems, particularly in biology and natural language processing. Uncertainty: Muggleton (2000)(43) describes an extension of PROGOL to handle uncertainty in the form of stochastic logic programs. 
Inductive logic programming/ILP: A formal analysis of ILP methods appears in Muggleton (1991)(44), a large collection of papers in Muggleton (1992)(45), and a collection of techniques and applications in the book by Lavrauc and Duzeroski (1994)(46). Page and Srinivasan (2002)(47) give a more recent overview of the field’s history and challenges for the future. Early complexity results by Haussler (1989) suggested that learning first-order sentences was intractable. However, with better understanding of the importance of syntactic restrictions on clauses, positive results have been obtained even for clauses with recursion (Duzeroski et al., 1992)(48). Learnability results for ILP are surveyed by Kietz and Duzeroski (1994)(49) and Cohen and Page (1995)(50). Discovery systems/VsILP: Although ILP now seems to be the dominant approach to constructive induction, it has not been the only approach taken. So-called discovery systems aim to model the process of scientific discovery of new concepts, usually by a direct search in the space of concept definitions. Doug Lenat’s Automated Mathematician, or AM (Davis and Lenat, 1982)(51), used discovery heuristics expressed as expert system rules to guide its search for concepts and conjectures in elementary number theory. Unlike most systems designed for mathematical reasoning, AM lacked a concept of proof and could only make conjectures. It rediscovered Goldbach’s conjecture and the Unique Prime Factorization theorem. AM’s architecture was generalized in the EURISKO system (Lenat, 1983)(52) by adding a mechanism capable of rewriting the system’s own discovery heuristics. EURISKO was applied in a number of areas other than mathematical discovery, although with less success than AM. The methodology of AM and EURISKO has been controversial (Ritchie and Hanna, 1984(53); Lenat and Brown, 1984(54)). 1. Mill, J. S. (1843).
A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence, and Methods of Scientific Investigation. J.W. Parker, London. 2. Bruner, J. S., Goodnow, J. J., and Austin, G. A. (1957). A Study of Thinking. Wiley. 3. Winston, P. H. (1970). Learning structural descriptions from examples. Technical report MAC-TR-76, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology. 4. Mitchell, T.M. (1977). Version spaces: A candidate elimination approach to rule learning. In IJCAI-77, pp. 305–310. 5. Mitchell, T. M. (1982). Generalization as search. AIJ, 18(2), 203–226. 6. Buchanan, B. G.,Mitchell, T.M., Smith, R. G., and Johnson, C. R. (1978). Models of learning systems. In Encyclopedia of Computer Science and Technology, Vol. 11. Dekker. 7. Mitchell, T. M., Utgoff, P. E., and Banerji, R. (1983). Learning by experimentation: Acquiring and refining problem-solving heuristics. In Michalski, R. S., Carbonell, J. G., and Mitchell, T. M. (Eds.), Machine Learning: An Artificial Intelligence Approach, pp. 163–190. Morgan Kaufmann. 8. Michalski, R. S. (1969). On the quasi-minimal solution of the general covering problem. In Proc. First International Symposium on Information Processing, pp. 125–128. 9. Michalski, R. S.,Mozetic, I., Hong, J., and Lavrauc, N. (1986). The multi-purpose incremental learning system AQ15 and its testing application to three medical domains. In AAAI-86, pp. 1041–1045. 10. Fikes, R. E., Hart, P. E., and Nilsson, N. J. (1972). Learning and executing generalized robot plans. AIJ, 3(4), 251–288. 11. Anderson, J. R. (1983). The Architecture of Cognition. Harvard University Press. 12. Laird, J., Rosenbloom, P. S., and Newell, A. (1986). Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1, 11–46. 13. DeJong, G. (1981). Generalizations based on explanations. In IJCAI-81, pp. 67–69. 14. Minton, S. (1984). 
Constraint-based generalization: Learning game-playing plans from single examples. In AAAI-84, pp. 251–254. 15. Mitchell, T. M., Keller, R., and Kedar-Cabelli, S. (1986). Explanation-based generalization: A unifying view. Machine Learning, 1, 47–80. 16. DeJong, G. and Mooney, R. (1986). Explanation-based learning: An alternative view. Machine Learning, 1, 145–176. 17. Hirsh, H. (1987). Explanation-based generalization in a logic programming environment. In IJCAI-87. 18. van Harmelen, F. and Bundy, A. (1988). Explanation-based generalisation = partial evaluation. AIJ, 36(3), 401–412. 19. Jones, N. D., Gomard, C. K., and Sestoft, P. (1993). Partial Evaluation and Automatic Program Generation. Prentice-Hall. 20. Minton, S. (1988). Quantitative results concerning the utility of explanation-based learning. In AAAI-88, pp. 564–569. 21. Greiner, R. (1989). Towards a formal analysis of EBL. In ICML-89, pp. 450–453. 22. Subramanian, D. and Feldman, R. (1990). The utility of EBL in recursive domain theories. In AAAI-90, Vol. 2, pp. 942–949. 23. Dietterich, T. (1990). Machine learning. Annual Review of Computer Science, 4, 255–306. 24. Carbonell, J. R. and Collins, A. M. (1973). Natural semantics in artificial intelligence. In IJCAI-73, pp. 344–351. 25. Davies, T. R. (1985). Analogy. Informal note INCSLI- 85-4, Center for the Study of Language and Information (CSLI). 26. Davies, T. R. and Russell, S. J. (1987). A logical approach to reasoning by analogy. In IJCAI-87, Vol. 1, pp. 264–270. 27. Russell, S. J. and Grosof, B. (1987). A declarative approach to bias in concept learning. In AAAI-87. 28. Russell, S. J. (1988). Tree-structured bias. In AAAI-88, Vol. 2, pp. 641–645. 29. Almuallim, H. and Dietterich, T. (1991). Learning with many irrelevant features. In AAAI-91, Vol. 2, pp. 547–552. 30. Tadepalli, P. (1993). Learning from queries and examples with tree-structured bias. In ICML-93, pp. 322–329. 31. Jevons, W. S. (1874). The Principles of Science. 
Routledge/Thoemmes Press, London. 32. Plotkin, G. (1971). Automatic Methods of Inductive Inference. Ph.D. thesis, Edinburgh University. 33. Shapiro, E. (1981). An algorithm that infers theories from facts. In IJCAI-81, p. 1064. 34. Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1, 81–106. 35. Clark, P. and Niblett, T. (1989). The CN2 induction algorithm. Machine Learning, 3, 261–283. 36. Quinlan, J. R. (1990). Learning logical definitions from relations. Machine Learning, 5(3), 239–266. 37. Muggleton, S. H. and Buntine, W. (1988). Machine invention of first-order predicates by inverting resolution. In ICML-88, pp. 339–352. 38. Russell, S. J. (1986). A quantitative analysis of analogy by similarity. In AAAI-86, pp. 284–288. 39. Muggleton, S. H. and Feng, C. (1990). Efficient induction of logic programs. In Proc. Workshop on Algorithmic Learning Theory, pp. 368–381. 40. Rouveirol, C. and Puget, J.-F. (1989). A simple and general solution for inverting resolution. In Proc. European Working Session on Learning, pp. 201–210. 41. De Raedt, L. (1992). Interactive Theory Revision: An Inductive Logic Programming Approach. Academic Press. 42. Muggleton, S. H. (1995). Inverse entailment and Progol. New Generation Computing, 13(3–4), 245–286. 43. Muggleton, S. H. (2000). Learning stochastic logic programs. Proc. AAAI 2000 Workshop on Learning Statistical Models from Relational Data. 44. Muggleton, S. H. (1991). Inductive logic programming. New Generation Computing, 8, 295–318. 45. Muggleton, S. H. (1992). Inductive Logic Programming. Academic Press. 46. Lavrač, N. and Džeroski, S. (1994). Inductive Logic Programming: Techniques and Applications. Ellis Horwood. 47. Page, C. D. and Srinivasan, A. (2002). ILP: A short look back and a longer look forward. Submitted to Journal of Machine Learning Research. 48. Džeroski, S., Muggleton, S. H., and Russell, S. J. (1992). PAC-learnability of determinate logic programs. In COLT-92, pp. 128–135. 49. Kietz, J.-U. and Džeroski, S. (1994). Inductive logic programming and learnability. SIGART Bulletin, 5(1), 22–32. 50. Cohen, W. W. and Page, C. D. (1995). Learnability in inductive logic programming: Methods and results. New Generation Computing, 13(3–4), 369–409. 51. Davis, R. and Lenat, D. B. (1982). Knowledge-Based Systems in Artificial Intelligence. McGraw-Hill. 52. Lenat, D. B. (1983). EURISKO: A program that learns new heuristics and domain concepts: The nature of heuristics, III: Program design and results. AIJ, 21(1–2), 61–98. 53. Ritchie, G. D. and Hanna, F. K. (1984). AM: A case study in AI methodology. AIJ, 23(3), 249–268. 54. Lenat, D. B. and Brown, J. S. (1984). Why AM and EURISKO appear to work. AIJ, 23(3), 269–294. |
Norvig I Peter Norvig Stuart J. Russell Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010 |
Rationalism | Locke | Arndt II 188 VsRationalism/Arndt: confuses the simple with the general! This obscured the debate about the criterion of analyticity and made the derivation of properties from essential concepts falsely seem possible. LockeVsRationalism: he avoids this by distinguishing: ascent (bottom-up): the formation of ideas by abstraction from the particular to the general (from the individual to species and genus) - descent (top-down): reducing the composite (complex ideas) to the simple. >Idea/Locke, >Mind/Locke, >Order/Locke. |
Loc III J. Locke An Essay Concerning Human Understanding Loc II H.W. Arndt "Locke" In Grundprobleme der großen Philosophen - Neuzeit I, J. Speck (Hg) Göttingen 1997 |
Reading Acquisition | Psychological Theories | Upton I 99 Reading acquisition/psychological theories/Upton: One of the other major advantages of the written word is the way it enhances our cognitive functioning. Writing things down can be a great memory aid (…). In this way, writing is able to enhance our cognitive processes (Menary, 2007)(1). By learning to read and write, the child is also able to become an active participant in the socio-cultural world of which he or she is a member (Nelson, 1996)(2). Reading is not automatic ((s) unlike listening). A. Phonics approach: In order to learn to read, the child must develop a conscious awareness that the letters on the page represent the sounds of the spoken word. This happens through either a bottom-up or a top-down process. In a bottom-up process we learn to spell out each phoneme and build up the word. To read the word ‘cat’, the word must first be split into its basic phonological elements. Once the word is in its phonological form, it can be identified and understood. B. Whole language approach: In a top-down process the whole word is recognised by its overall visual appearance. There is much debate Upton I 100 about which approach is best, but the evidence suggests that children use and benefit from both strategies (Siegler, 1986(3); Vacca et al., 2006(4)). Once the word is identified, higher-level cognitive functions such as intelligence and vocabulary are applied to understand the word’s meaning (…). Many children may also know the letters of the alphabet when they first start school. These children tend to be more successful in learning to read than those who have not learned the alphabet (Adams, 1990)(5). Children with a greater knowledge of nursery rhymes show a much better phonemic awareness (Maclean et al., 1987)(6). It seems that rhymes allow children to discover phonemes. Upton I 101 Writing: Writing and reading are closely related and, some would say, inseparable.
However, in addition to the cognitive and linguistic skills that children need for reading, in order to write, children also need to have developed fine motor skills. Studies of children with specific learning difficulties have highlighted the joint occurrence of motor and language difficulties (Viholainen et al., 2002)(7). Indeed, the observed prevalence of motor problems in children with developmental language problems has been estimated to be somewhere between 60 and 90 per cent (Viholainen et al., 2002)(7). One possible explanation for this co-morbidity is that motor and language problems share a common underlying neuro-cognitive system. >Brain/Deacon, >Learning/Deacon, >Language/Deacon, >Reading acquisition/Neuroimaging. 1. Menary, R (2007) Cognitive Integration: Mind and cognition unbounded. Basingstoke: Palgrave Macmillan. 2. Nelson, K (1996) Language in Cognitive Development: The emergence of the mediated mind. New York: Cambridge University Press. 3. Siegler, RS (1986) Children’s Thinking. Englewood Cliffs, NJ: Prentice-Hall. 4. Vacca, JL, Vacca, RT, Gove, MK, Burkey, RC and Lenhart, LA (2006) Reading and Learning to Read (6th edn). Boston, MA: Allyn and Bacon. 5. Adams, MJ (1990) Beginning to Read: Thinking and learning about print. Cambridge, MA: MIT Press. 6. Maclean, M, Bryant, P, and Bradley, L (1987) Rhymes, nursery rhymes, and reading in early childhood. Merrill-Palmer Quarterly, 33: 255–81. 7. Viholainen, H, Ahonen, T, Cantell, M, Lyytinen, P and Lyytinen, H (2002) Development of early motor skills and language in children at risk for familial dyslexia. Developmental Medicine and Child Neurology, 44: 761–9. |
Upton I Penney Upton Developmental Psychology 2011 |
Sortals | Tugendhat | I 453 Sortal/Aristotle/Tugendhat: E.g. "chair" is distinguished by its function -> "bottom-up": we ask how a singular term must function - sortal: allows us to decide what belongs to it and what does not - no temporal, only spatial limits - (>continuant). Life phases of an object are not regarded as parts. >Parts, >Part-of-relation, >Temporal identity. I 457f Sortal/Tugendhat: allows a new type of spatio-temporal identification - we should not presuppose the perceptual object - then identification proceeds by distinguishing space-time locations. >Specification. I 460 Sortal: not just imagination. Sortal predicates: presuppose a specific configuration of the spatially or temporally extended - e.g. "the same cat". Conversely: sortal predicates are only explainable through spatial locations together with equal signs. >Equal sign. |
Tu I E. Tugendhat Vorlesungen zur Einführung in die Sprachanalytische Philosophie Frankfurt 1976 Tu II E. Tugendhat Philosophische Aufsätze Frankfurt 1992 |
Supervenience | Searle | I 146 Supervenience/Searle: the concept of supervenience stems from ethics: moral properties are supposed to supervene on natural properties (Moore). There must be a feature in virtue of which something is better - yet the relation is not causation but constitution by this feature. >Causation, >Constitution. Supervenience: a) the mind is completely dependent on the physical - b) physical equality guarantees mental equality, but not vice versa. Mind-Body Problem/Searle: only causality is important: the micro level (the physical) causes the macro level (the mental) (from bottom to top, bottom-up). >Mind/Body-problem. SearleVsSupervenience: supervenience is thereby superfluous. Strength is causally supervenient relative to the underlying molecular structure, but is thereby not epiphenomenal. >Epiphenomenalism. Graeser I 160 Supervenience/Searle/Graeser: supervenience corresponds to sufficient but not necessary conditions. >Sufficiency. Davidson stipulates: a predicate P is supervenient on a set of predicates S iff P does not distinguish any entities that cannot also be distinguished by S. |
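Davidson's criterion, as quoted in the entry above, can be written out formally (a standard rendering of the quoted definition; the notation is not from the source):

```latex
\forall x\,\forall y\;\Bigl[\bigl(\forall Q \in S :\; Q(x) \leftrightarrow Q(y)\bigr)\;\rightarrow\;\bigl(P(x) \leftrightarrow P(y)\bigr)\Bigr]
```

That is: if no predicate in S distinguishes x from y, then P does not distinguish them either.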
Searle I John R. Searle The Rediscovery of the Mind, Massachusetts Institute of Technology 1992 German Edition: Die Wiederentdeckung des Geistes Frankfurt 1996 Searle II John R. Searle Intentionality. An essay in the philosophy of mind, Cambridge/MA 1983 German Edition: Intentionalität Frankfurt 1991 Searle III John R. Searle The Construction of Social Reality, New York 1995 German Edition: Die Konstruktion der gesellschaftlichen Wirklichkeit Hamburg 1997 Searle IV John R. Searle Expression and Meaning. Studies in the Theory of Speech Acts, Cambridge/MA 1979 German Edition: Ausdruck und Bedeutung Frankfurt 1982 Searle V John R. Searle Speech Acts, Cambridge/MA 1969 German Edition: Sprechakte Frankfurt 1983 Searle VII John R. Searle Behauptungen und Abweichungen In Linguistik und Philosophie, G. Grewendorf/G. Meggle Frankfurt/M. 1974/1995 Searle VIII John R. Searle Chomskys Revolution in der Linguistik In Linguistik und Philosophie, G. Grewendorf/G. Meggle Frankfurt/M. 1974/1995 Searle IX John R. Searle "Animal Minds", in: Midwest Studies in Philosophy 19 (1994) pp. 206-219 In Der Geist der Tiere, D Perler/M. Wild Frankfurt/M. 2005 Grae I A. Graeser Positionen der Gegenwartsphilosophie. München 2002 |
Terminology | Brandom | I 327 RDRD/Brandom: reliable differential responsive disposition: basis for the non-inferential (direct) authority of observations. I 486f Designated/Brandom: that a statement has a truth value at all. I 509 Free-standing content/multi-value I 530 Definition SMSIC/Brandom: simple material substitution-inferential commitment - connects the expression "the inventor" with another one - additional information that makes it possible to attribute the true identity in "Franklin was an inventor, but also Postmaster General, and a printer, and spoke French ..." to a single object - but not within propositional attitudes. I 531 Content of an expression: is determined by the set of SMSICs (simple material substitution-inferential commitments) that link it with other expressions. I 532 SMSICs are symmetrical for singular terms. I 487 Multi-valued logic/Brandom: Definition designated: the fact that a statement has any truth value at all. Designation indicates which truth value is designated: this requires a stipulation regarding assertion. Definition Multi-value: embedded content - ((s) a particular one of several possible truth values). Interpretation: assigns two types of value: a) whether designated, b) which multi-value. Standard situation: it is stipulated which multi-values are designated. Designation value: everything that plays a role for the pragmatic significance of free-standing sentences. bottom-up: designation > formal validity. Basic principle: substitution of expressions with the same multi-value never changes designation. I 488 Multi-values = equivalence classes of logically derivable sentences - designation = logical validity. --- II 178 Status/Brandom: its transmission means: a particular status of the premise ensures that it is also attributed to the conclusion - this holds for Definition commitment-preserving inferences: deduction - but not for Definition entitlement-preserving inferences: induction. |
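The distinction between a sentence's multi-value and the yes/no question of its designation can be illustrated with a small sketch. This is a hypothetical three-valued example (strong Kleene connectives with 1.0 as the only designated value), not Brandom's own apparatus; it shows the basic principle quoted above, that substituting a component with the same multi-value never changes the designation of the compound:

```python
# Hypothetical 3-valued logic (not from Brandom): multi-values are
# 0.0, 0.5, 1.0; only 1.0 is "designated", i.e. counts as true for
# the pragmatic significance of a free-standing sentence.

VALUES = (0.0, 0.5, 1.0)     # the multi-values
DESIGNATED = {1.0}           # which multi-values are designated

def neg(a):     return 1.0 - a          # strong Kleene negation
def conj(a, b): return min(a, b)        # strong Kleene conjunction
def disj(a, b): return max(a, b)        # strong Kleene disjunction

def is_designated(v):
    """Whether a sentence with multi-value v counts as true."""
    return v in DESIGNATED

# Basic principle: compounds are functions of the multi-values of
# their parts, so substituting a part with any expression of the
# SAME multi-value leaves both the compound's multi-value and its
# designation unchanged.
for a in VALUES:
    for b in VALUES:
        same_as_a = neg(neg(a))  # a different expression, same multi-value
        assert disj(a, b) == disj(same_as_a, b)
        assert is_designated(conj(a, b)) == is_designated(conj(same_as_a, b))
```

The interpretation assigns each sentence both kinds of value at once: the multi-value itself (the embedded content) and, derivatively, whether that value is designated.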
Bra I R. Brandom Making it exlicit. Reasoning, Representing, and Discursive Commitment, Cambridge/MA 1994 German Edition: Expressive Vernunft Frankfurt 2000 Bra II R. Brandom Articulating reasons. An Introduction to Inferentialism, Cambridge/MA 2001 German Edition: Begründen und Begreifen Frankfurt 2001 |
Intention | Bennett, J. | I 162 Intentions/Bennett: Thesis: intentions can be described non-verbally ("from below", bottom-up). I 186 Bennett's thesis: there is no significant difference between the intention that mechanism M play a role and the reliance on that mechanism. |
|
Reduction | Churchland, P. | Metzinger II 464 Reductionism/Pat Churchland: Thesis: I am a reductionist. This does not mean, however, that a pure "bottom-up" strategy should be pursued. Pauen I / V 92 Eliminative Materialism/Pauen: Everyday psychology is responsible for our beliefs about the existence of mental states. Pauen I / V 93 Camp: this position was developed by Feyerabend (1963) and Rorty (1965, 1970), as well as by Paul and Patricia Churchland, following Quine and Sellars. Thesis: mental states are merely postulates of everyday psychology. We will give them up once there is a better theory. (>Reductionism). Pauen I / V 100 Churchland/Pauen: their reductionism is an ontological, not merely semantic, thesis. |
Pauen I M. Pauen Grundprobleme der Philosophie des Geistes Frankfurt 2001 |