Disputed term/author/ism | Author | Entry | Reference |
---|---|---|---|
Artificial Intelligence | Chalmers | I 185 Artificial Intelligence/Chalmers: Suppose we had an artificial system that rationally reflects on what it perceives. Would this system have a concept of consciousness? It would certainly have a concept of the self; it could distinguish itself from the rest of the world and would have more direct access to its own cognitive contents than to those of others. So it would have a certain kind of self-awareness. Such a system will not say of itself that it has no idea what it is like to see a red triangle. Nor does it need access to its elements on a deeper level (Hofstadter 1979 1, Winograd 1972 2). N.B.: such a system would have a similar attitude to its inner life as we have to ours. Cf. >Artificial consciousness, >Self-consciousness, >Self-knowledge, >Self-identification, >Knowing how. I 186 Behavioral explanation/Chalmers: to explain the behavior of such systems, we never need to attribute consciousness. Perhaps such systems have consciousness, perhaps not, but the explanation of their behavior is independent of this. >Behavior, >Explanation. I 313 Artificial Intelligence/VsArtificial Intelligence/Chalmers: DreyfusVsArtificial Intelligence (Dreyfus 1972 7): Machines cannot achieve the flexible and creative behavior of humans. LucasVsArtificial Intelligence/PenroseVsArtificial Intelligence/Chalmers (Lucas 1961 3, Penrose 1989 4): Computers can never reach the mathematical understanding of humans because they are limited by Gödel's theorem in a way in which humans are not. Chalmers: these are external objections. The internal objections are more interesting: VsArtificial Intelligence: internal argument: machines cannot develop a conscious mind. >Mind/Chalmers. SearleVsArtificial Intelligence: >Chinese Room argument (Searle 1980 5). According to that, a computer is at best a simulation of consciousness, a zombie. >Chinese Room, >Zombies, >Intentionality/Searle. Artificial Intelligence/ChalmersVsSearle/ChalmersVsPenrose/ChalmersVsDreyfus: it is not obvious that certain physical structures in the computer lead to consciousness, but the same applies to the structures in the brain. >Consciousness/Chalmers. I 314 Definition Strong Artificial Intelligence/Searle/Chalmers: Thesis: There is a non-empty class of computations such that the implementation of any computation from this class is sufficient for a mind and especially for conscious experiences. This is true only with natural necessity, because it is logically possible that any computation could do without consciousness, but this also applies to brains. >Strong Artificial Intelligence. I 315 Implementation/Chalmers: this term is needed as a bridge for the connection between abstract computations and concrete physical systems in the world. We also sometimes say that our brain implements computations. Cf. >Thinking/World, >World, >Reality, >Computation, >Computer Model. Implementation/Searle (Searle 1990b 6): Thesis: implementation is an observer-relative term. If you want, you can consider any system, e.g. a wall, as implementing any computation. ChalmersVsSearle: one has to specify the implementation conditions; then this problem is avoided (a toy sketch of such conditions follows this entry). I 318 For example, a combinatorial state machine has quite different implementation conditions than a finite state machine. The causal interaction between the elements is differently fine-grained. >Fine-grained/coarse-grained. In addition, combinatorial automata can reflect various other automata, like... I 319 ...Turing machines and cellular automata, as opposed to finite or infinite state automata.
>Turing-machine, >Vending machine/Dennett. ChalmersVsSearle: every system implements one computation or another; it is just that not every type (e.g., a combinatorial state machine) is implemented by every system. Observer relativity remains, but it does not threaten the possibility of artificial intelligence. I 320 This does not say much about the nature of the causal relations. >Observation, >Observer relativity. 1. D. R. Hofstadter, Gödel, Escher, Bach, New York 1979. 2. T. Winograd, Understanding Natural Language, New York 1972. 3. J. R. Lucas, Minds, machines and Gödel, Philosophy 36, 1961, pp. 112-127. 4. R. Penrose, The Emperor's New Mind, Oxford 1989. 5. J. R. Searle, Minds, brains and programs. Behavioral and Brain Sciences 3, 1980, pp. 417-424. 6. J. R. Searle, Is the brain a digital computer? Proceedings and Addresses of the American Philosophical Association 64, 1990, pp. 21-37. 7. H. Dreyfus, What Computers Can't Do, New York 1972. |
Cha I D. Chalmers The Conscious Mind Oxford New York 1996 Cha II D. Chalmers Constructing the World Oxford 2014 |
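The notion of implementation invoked in the Chalmers entry above can be illustrated with a minimal sketch. This is a toy under assumed definitions, not Chalmers's own formal account (which also requires counterfactual, causal support rather than a merely observed trace): a physical system implements an (input-free) finite state machine if there is a mapping from its physical states to the machine's formal states under which the observed physical transitions mirror the formal transition function. The names `implements`, `flip_flop` and the trace states are invented for the example.

```python
from itertools import product

def implements(trace, transition, formal_states):
    """Return a state mapping if the physical trace implements the FSM, else None.

    trace:         observed sequence of physical states, e.g. ['p0', 'p1', 'p0']
    transition:    dict formal_state -> next formal_state (an input-free FSM)
    formal_states: list of the FSM's formal states
    """
    physical_states = sorted(set(trace))
    # Try every mapping from physical states to formal states and check whether
    # each observed physical transition mirrors the formal transition function.
    for assignment in product(formal_states, repeat=len(physical_states)):
        mapping = dict(zip(physical_states, assignment))
        if all(mapping[b] == transition[mapping[a]]
               for a, b in zip(trace, trace[1:])):
            return mapping
    return None

# A two-state flip-flop automaton: A -> B -> A -> ...
flip_flop = {'A': 'B', 'B': 'A'}

# A physical system that alternates between two states implements it:
print(implements(['p0', 'p1', 'p0', 'p1'], flip_flop, ['A', 'B']))  # a mapping

# ... while a system that stays in one state does not (no mapping works):
print(implements(['p0', 'p0', 'p0'], flip_flop, ['A', 'B']))        # None
```

On this toy reading, a static system implements no non-trivial automaton, which is the gist of Chalmers's reply to Searle's wall: once the implementation conditions are specified, not every system implements every computation.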
Artificial Intelligence | Pentland | Brockman I 200 Artificial intelligence/Pentland: On the horizon is a vision of how we can make humanity more intelligent by building a human AI. It’s a vision composed of two threads. One is data that we can all trust - data that have been vetted by a broad community, data where the algorithms are known and monitored, much like the census data we all automatically rely on as at least approximately correct. The other is a fair, data-driven assessment of public norms, policy, and government, based on trusted data about current conditions. >Cybernetics/Pentland, >Ecosystems/Pentland, >Decision-making Processes/Pentland, >Data/Pentland. Brockman I 204 One thing people often fail to mention is that all the worries about AI are the same as the worries about today’s government. For most parts of the government - the justice system, etc. - there’s no reliable data about what they’re doing and in what situation. VsArtificial intelligence/Pentland: Current AI is doing descriptive statistics in a way that’s not science and would be almost impossible to make into science. To build robust systems, we need to know the science behind data. Solution/Pentland: The systems I view as next-generation AIs result from this science-based approach: If you’re going to create an AI to deal with something physical, then you should build the laws of physics into it as your descriptive functions, in place of those stupid little neurons. >Ecosystem/Pentland. When you replace the stupid neurons with ones that capture the basics of human behavior, then you can identify trends with very little data, and you can deal with huge levels of noise. The fact that humans have a “commonsense” understanding that they bring to most Brockman I 205 problems suggests what I call the human strategy: Human society is a network just like the neural nets trained for deep learning, but the “neurons” in human society are a lot smarter. Pentland, A. “The Human Strategy” in: Brockman, John (ed.) 2019. Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
Artificial Intelligence | Searle | I 60 Artificial Intelligence/AI/thesis: the mind is to the brain as the program is to the hardware. Different material structures can be mentally equivalent if they are different hardware implementations of the same computer program. The brain is then not important for the mind. This was one of the most exciting developments in the two-thousand-year history of materialism: the science of artificial intelligence offered an answer to the question of how different material structures can be mentally equivalent. One could be a materialist through and through and at the same time - like Descartes - be of the opinion that the brain is not really important for the mind. In this way one can specify and understand the typically mental aspects of the mind without knowing how the brain functions. Even as a materialist one does not need to explore the brain to explore the mind. >Materialism as a concept, >positions of materialism. I 61 SearleVs: see >Chinese Room. VsArtificial Intelligence: objection of common sense: the computer model of the mind ignores decisive factors, such as consciousness and intentionality. I 227 Def Strong artificial intelligence/Searle: the mind is like a program. >Strong Artificial Intelligence. Def Weak artificial intelligence: brain processes can be simulated with computers. >Artificial Intelligence. Def Cognitivism: the brain is like a computer. >Computation, >Information processing/Psychology. I 228 Artificial Intelligence: semantics is completely mirrored in syntax (proof theory). SearleVs: it cannot be decided empirically what counts as a program, an algorithm, or a computer - that is description-dependent. Perler I 145 "Strong artificial intelligence"/Searle: an expression of traditional dualism: the view that the specific neurobiology of the brain is not important. >Cognition/Searle, >SearleVsAI. |
Searle I John R. Searle The Rediscovery of the Mind, Massachusetts Institute of Technology 1992 German Edition: Die Wiederentdeckung des Geistes Frankfurt 1996 Searle II John R. Searle Intentionality. An essay in the philosophy of mind, Cambridge/MA 1983 German Edition: Intentionalität Frankfurt 1991 Searle III John R. Searle The Construction of Social Reality, New York 1995 German Edition: Die Konstruktion der gesellschaftlichen Wirklichkeit Hamburg 1997 Searle IV John R. Searle Expression and Meaning. Studies in the Theory of Speech Acts, Cambridge/MA 1979 German Edition: Ausdruck und Bedeutung Frankfurt 1982 Searle V John R. Searle Speech Acts, Cambridge/MA 1969 German Edition: Sprechakte Frankfurt 1983 Searle VII John R. Searle Behauptungen und Abweichungen In Linguistik und Philosophie, G. Grewendorf/G. Meggle Frankfurt/M. 1974/1995 Searle VIII John R. Searle Chomskys Revolution in der Linguistik In Linguistik und Philosophie, G. Grewendorf/G. Meggle Frankfurt/M. 1974/1995 Searle IX John R. Searle "Animal Minds", in: Midwest Studies in Philosophy 19 (1994) pp. 206-219 In Der Geist der Tiere, D Perler/M. Wild Frankfurt/M. 2005 Perler I Dominik Perler Markus Wild Der Geist der Tiere Frankfurt 2005 |
Artificial Intelligence | Wittgenstein | Metzinger II 720 AI/WittgensteinVsAI/WittgensteinVsArtificial Intelligence/Birnbacher: for Wittgenstein, artificial intelligence is logically impossible, because we ascribe the term only to humans (Philosophical Investigations/PI § 360). - Birnbacher: the truth conditions could nevertheless be fulfilled - not only the assertibility conditions. >Truth values, >Assertibility conditions. |
W II L. Wittgenstein Wittgenstein’s Lectures 1930-32, from the notes of John King and Desmond Lee, Oxford 1980 German Edition: Vorlesungen 1930-35 Frankfurt 1989 W III L. Wittgenstein The Blue and Brown Books (BB), Oxford 1958 German Edition: Das Blaue Buch - Eine Philosophische Betrachtung Frankfurt 1984 W IV L. Wittgenstein Tractatus Logico-Philosophicus (TLP), 1922, C.K. Ogden (trans.), London: Routledge & Kegan Paul. Originally published as “Logisch-Philosophische Abhandlung”, in Annalen der Naturphilosophische, XIV (3/4), 1921. German Edition: Tractatus logico-philosophicus Frankfurt/M 1960 Metz I Th. Metzinger (Hrsg.) Bewusstsein Paderborn 1996 |
Simulation | Chalmers | I 327 Simulation/Artificial Intelligence/Consciousness/Searle/Chalmers: SearleVsArtificial Intelligence (Searle 1980)(1), HarnadVsArtificial Intelligence (Harnad 1989)(2): Thesis: the simulation of a phenomenon is not the same as a replica of the phenomenon. E.g. the digital simulation of the digestion process does not digest any food. >Artificial Intelligence, >Strong Artificial Intelligence, >Artificial Consciousness, >Simulation. I 328 Simulation/Chalmers: while some simulations are not real duplications, e.g. the simulation of heat, others are real duplications: e.g. the simulation of a system with a causal loop is a system with a causal loop. Definition Simulation/Chalmers: a simulation of X is an X if the property of being X is an organizational invariant, that is, if the property depends only on the functional organization of the underlying system and on nothing else. The remaining properties are not retained. For example, the property of being a hurricane is not organizationally invariant, because it depends partly on non-organizational properties such as speed, shape, etc. Likewise, heat or digestion depend on aspects of physical constitution and are not entirely organizational. Consciousness/Simulation/Chalmers: phenomenal properties are different: they are organizationally invariant; i.e., given an identical functional organization, two systems will have the same phenomenal experiences. Consciousness thus differs from these other properties. >Consciousness, >Consciousness/Chalmers. 1. J. R. Searle, Minds, brains and programs. Behavioral and Brain Sciences 3, 1980, pp. 417-424. 2. S. Harnad, Minds, machines, and Searle. Journal of Experimental and Theoretical Artificial Intelligence 1, 1989, pp. 5-25. |
Cha I D. Chalmers The Conscious Mind Oxford New York 1996 Cha II D. Chalmers Constructing the World Oxford 2014 |
Strong Artificial Intelligence | Chalmers | I 314 Definition Strong Artificial Intelligence/Searle/Chalmers: Thesis: There is a non-empty class of computations such that the implementation of any computation from this class is sufficient for a mind and especially for conscious experiences. This is true only with natural necessity, because it is logically possible that any computation could do without consciousness, but this also applies to brains. >Consciousness/Chalmers, >Consciousness, >Mind, >Experience, >Computation, >Information Processing, >Brain. I 320 A computational description of a system provides a formal description of the causal organization of this system. >Artificial intelligence, >Computer model, cf. >Neural networks. I 321 Invariance principle: every system with conscious experiences that has the same functional organization as another system with conscious experiences will have qualitatively identical conscious experiences. There may be corresponding causal relations between electronic components as there are between neurons in the brain. Fading qualia/dancing qualia: we can use these kinds of qualia in arguments for strong artificial intelligence. >Qualia/Chalmers. I 322 If there were two organizationally identical systems, one of which had conscious experiences and the other not, one could construct a system with fading or dancing qualia lying between these two systems. That would be implausible. If fading and dancing qualia are excluded, the thesis of Strong Artificial Intelligence applies. (> Qualia/Chalmers). I 329 VsArtificial Intelligence/Gödel/Chalmers: in a consistent formal system which is expressive enough for a certain kind of arithmetic, one can construct a sentence which is not provable in this system. Unlike the machine, the human being can see that the sentence is true. >Provability. I 330 Therefore the human has an ability which the formal system does not have. ChalmersVsVs: there is no reason to believe that the human is aware of the truth of the sentence. At best, we can say that if the system is consistent, then the sentence is true (a formal sketch of this point follows this entry). We cannot always determine the consistency of complex systems. >Consistency. PenroseVsArtificial Intelligence/Chalmers: (Penrose 1994)(1) offers an argument on a lower level: it may be that not all physical processes are computable. >Calculability. ChalmersVsVs: but this is based on the above-mentioned Gödel argument. Nothing in physical theory itself supports it. VsArtificial Intelligence/VsSimulation/Chalmers: what if conscious processes are essentially continuous, but our simulations are discrete? >Simulation. I 331 ChalmersVsVs: there are reasons to assume that absolute continuity is not essential to our cognitive competence. However, it might be that a system with unlimited precision (achieved by continuity) has cognitive abilities that a discrete system does not achieve. Cf. >Analog/digital. 1. R. Penrose, Shadows of the Mind, Oxford 1994. |
Cha I D. Chalmers The Conscious Mind Oxford New York 1996 Cha II D. Chalmers Constructing the World Oxford 2014 |
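The Gödel point in the entry above admits a compact standard formulation; the following is the textbook statement, not Chalmers's own wording, with T, G_T and Con(T) the usual symbols for the theory, its Gödel sentence, and its consistency statement.

```latex
% For a consistent, recursively axiomatizable theory T extending Peano arithmetic,
% the Gödel sentence G_T is unprovable in T, yet the conditional Con(T) -> G_T is
% itself provable in T. So what a human mathematician actually establishes is only
% this conditional; "seeing that G_T is true" presupposes seeing that T is
% consistent, which, as Chalmers notes, we often cannot do for complex systems.
\[
  T \nvdash G_T \qquad \text{and} \qquad T \vdash \mathrm{Con}(T) \rightarrow G_T .
\]
```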
Disputed term/author/ism | Author Vs Author | Entry | Reference |
---|---|---|---|
Artificial Intelligence | Chomsky Vs Artificial Intelligence | Dennett I 540 Language/ChomskyVsArtificial Intelligence: the child later merely sets a switch for whether it is learning Chinese or English; it is not a "general problem solver". Even "slow" children "learn" to speak well! They do not "learn" it, just as birds do not learn their feathers. I 541 Dennett per Chomsky. But if he is right, the phenomena of language are much more difficult to explore. |
Chomsky I Noam Chomsky "Linguistics and Philosophy", in: Language and Philosophy, (Ed) Sidney Hook New York 1969 pp. 51-94 In Linguistik und Philosophie, G. Grewendorf/G. Meggle Frankfurt/M. 1974/1995 Chomsky II Noam Chomsky "Some empirical assumptions in modern philosophy of language" in: Philosophy, Science, and Method, Essays in Honor of E. Nagel (Eds. S. Morgenbesser, P. Suppes and M- White) New York 1969, pp. 260-285 In Linguistik und Philosophie, G. Grewendorf/G. Meggle Frankfurt/M. 1974/1995 Chomsky IV N. Chomsky Aspects of the Theory of Syntax, Cambridge/MA 1965 German Edition: Aspekte der Syntaxtheorie Frankfurt 1978 Chomsky V N. Chomsky Language and Mind Cambridge 2006 Dennett I D. Dennett Darwin’s Dangerous Idea, New York 1995 German Edition: Darwins gefährliches Erbe Hamburg 1997 Dennett II D. Dennett Kinds of Minds, New York 1996 German Edition: Spielarten des Geistes Gütersloh 1999 Dennett III Daniel Dennett "COG: Steps towards consciousness in robots" In Bewusstein, Thomas Metzinger Paderborn/München/Wien/Zürich 1996 Dennett IV Daniel Dennett "Animal Consciousness. What Matters and Why?", in: D. C. Dennett, Brainchildren. Essays on Designing Minds, Cambridge/MA 1998, pp. 337-350 In Der Geist der Tiere, D Perler/M. Wild Frankfurt/M. 2005 |
Artificial Intelligence | Hofstadter Vs Artificial Intelligence | II 701 HofstadterVsBarr: confusion of levels: "cognition as an arithmetic process": even if the neurons deal with sums in an analogous way, this does not mean that the epiphenomena themselves are also arithmetic. Example: if taxis stop at red, this does not mean that traffic jams stop at red. (>Distribution, >Properties): You should not confuse the properties of objects with the properties of collections of these objects. On one level something can be a calculation, on another it is not! (>Consciousness, State of Mind, State of Brain). II 704 HofstadterVsArtificial Intelligence: many representatives of "information processing" neglect the lower level. HofstadterVsArtificial Intelligence: its main premise is that thoughts are, on their own level, themselves computational entities (the school of "information processing"). Active symbols/Hofstadter: but there is no force at a higher level (a "program") that looks downwards and pushes the symbols back and forth. Active symbols must incorporate what is necessary into their own structures. In such a (hitherto hypothetical) program, the symbols themselves are active. |
Hofstadter I Douglas Hofstadter Gödel, Escher, Bach: An Eternal Golden Braid German Edition: Gödel, Escher, Bach - ein Endloses Geflochtenes Band Stuttgart 2017 Hofstadter II Douglas Hofstadter Metamagical Themas: Questing for the Essence of Mind and Pattern German Edition: Metamagicum München 1994 |
Artificial Intelligence | McGinn Vs Artificial Intelligence | I 97 Person/McGinn: humans belong to a particular type of emergent entity, because only when organisms meet certain conditions does biological nature evidently manage the step into the category of the person. (McGinnVsAI, McGinnVsArtificial Intelligence.) This applies in phylogenetic as well as in ontogenetic terms. But then there must be something that triggers this ontological transition. So instructions for the production of a self from living cell tissue must be encoded in the genes. It may be that our concept of a person is an indefinable analytical basic concept, but the things themselves need something like an inner natural structure and construction method. (E.g. house, physics, see above.) |
McGinn I Colin McGinn Problems in Philosophy. The Limits of Inquiry, Cambridge/MA 1993 German Edition: Die Grenzen vernünftigen Fragens Stuttgart 1996 McGinn II C. McGinn The Mysteriouy Flame. Conscious Minds in a Material World, New York 1999 German Edition: Wie kommt der Geist in die Materie? München 2001 |
Artificial Intelligence | Penrose Vs Artificial Intelligence | Dennett I 617 PenroseVsAI/PenroseVsArtificial Intelligence: x can perfectly achieve a checkmate. There is no algorithm for chess. Therefore, the good performance of x cannot be explained by the fact that x can run an algorithm. Dennett I 619 Penrose: if you take any single algorithm, it cannot be the method by which human mathematicians ensure mathematical truths. Therefore they use no algorithms at all. I 621 DennettVsPenrose: this does not show that the human brain does not operate algorithmically. On the contrary, it makes clear how the community of mathematicians can exploit the cranes of culture, with no recognizable limits, in distributed algorithmic processes. |
Penr I R. Penrose The Road to Reality: A Complete Guide to the Laws of the Universe 2005 Dennett I D. Dennett Darwin’s Dangerous Idea, New York 1995 German Edition: Darwins gefährliches Erbe Hamburg 1997 Dennett IV Daniel Dennett "Animal Consciousness. What Matters and Why?", in: D. C. Dennett, Brainchildren. Essays on Designing Minds, Cambridge/MA 1998, pp. 337-350 In Der Geist der Tiere, D Perler/M. Wild Frankfurt/M. 2005 |
Artificial Intelligence | Searle Vs Artificial Intelligence | Dennett I 555 SearleVsAI/SearleVsArtificial Intelligence: Computers have only "as-if intentionality". Searle I 60 ff Here one of the most exciting developments in the two-thousand-year history of materialism took place. The science of artificial intelligence offered an answer to the question: different material structures can be mentally equivalent if they are different hardware versions of the same computer program. Artificial Intelligence thesis: the mind is to the brain as the program is to the hardware. You could be a materialist through and through and yet - like Descartes - take the view that the brain is actually not important for the mind. So you can specify and understand the typically mental aspects of the mind without having to know how the brain works. Even as a materialist one does not need to explore the brain to explore the mind. I 61 So the new discipline of "cognitive science" was born. (SearleVs). VsArtificial Intelligence: objection of common sense: the computer model of the mind ignores crucial things, such as consciousness and intentionality. Searle: the Chinese Room argument (>Chinese Room). This shows that a system could implement a program and thereby deliver a perfect simulation of some human ability (such as the ability to understand Chinese) without this system possessing the least understanding of Chinese. Simply imagine that someone who does not understand Chinese is locked in a room containing a large number of Chinese symbols and a computer program for answering questions in Chinese. I 62 The answers would be no different from those that a Chinese speaker would give to these questions. The programmed computer has nothing that this system does not have; it, too, does not understand Chinese. |
Searle I John R. Searle The Rediscovery of the Mind, Massachusetts Institute of Technology 1992 German Edition: Die Wiederentdeckung des Geistes Frankfurt 1996 Searle II John R. Searle Intentionality. An essay in the philosophy of mind, Cambridge/MA 1983 German Edition: Intentionalität Frankfurt 1991 Searle III John R. Searle The Construction of Social Reality, New York 1995 German Edition: Die Konstruktion der gesellschaftlichen Wirklichkeit Hamburg 1997 Searle IV John R. Searle Expression and Meaning. Studies in the Theory of Speech Acts, Cambridge/MA 1979 German Edition: Ausdruck und Bedeutung Frankfurt 1982 Searle V John R. Searle Speech Acts, Cambridge/MA 1969 German Edition: Sprechakte Frankfurt 1983 Searle VII John R. Searle Behauptungen und Abweichungen In Linguistik und Philosophie, G. Grewendorf/G. Meggle Frankfurt/M. 1974/1995 Searle VIII John R. Searle Chomskys Revolution in der Linguistik In Linguistik und Philosophie, G. Grewendorf/G. Meggle Frankfurt/M. 1974/1995 Searle IX John R. Searle "Animal Minds", in: Midwest Studies in Philosophy 19 (1994) pp. 206-219 In Der Geist der Tiere, D Perler/M. Wild Frankfurt/M. 2005 Dennett I D. Dennett Darwin’s Dangerous Idea, New York 1995 German Edition: Darwins gefährliches Erbe Hamburg 1997 Dennett IV Daniel Dennett "Animal Consciousness. What Matters and Why?", in: D. C. Dennett, Brainchildren. Essays on Designing Minds, Cambridge/MA 1998, pp. 337-350 In Der Geist der Tiere, D Perler/M. Wild Frankfurt/M. 2005 |
Artificial Intelligence | Verschiedene Vs Artificial Intelligence | Dennett I 600 Some authors (e.g. Penrose) claim: Gödel has proved that no AI (artificial intelligence) is possible. DennettVs. Dennett I 604 J. R. Lucas, 1961: the decisive property should be "to represent a sentence as true". DennettVsLucas: but this encounters insurmountable interpretation problems. AI/"strong AI"/VsAI/Artificial Intelligence/Reference/Dennett: newer version of the critique VsStrong AI: the so-called problem of "symbol anchoring" (Harnad's "symbol grounding" problem): for large AI programs it is fine to have data structures that pretend to refer to Chicago, milk or the "person I'm talking to", but such an imaginary reference is, according to this way of thinking, not the same as real reference (Harnad, 1990). The internal "symbols" are not adequately "anchored" in the world. ((s) Instead of Chicago the program actually refers to e.g. "1oo1o111oo1...".) Problem: name/object, mention/use, anchoring: we too are constantly talking about things we do not even know. DennettVsVs: for our robot the problem is solved by encountering things in its "childhood". This leads at most to the parallel question of how far the reference of the word "Chicago" is anchored in the idiolect of a small child. Metz II 710 HaugelandVsArtificial Intelligence/VsArtificial Intelligence: artificial intelligence cannot know what real pain is. DennettVsHaugeland: this leads to the problem that we would have to assume that primitive organisms like flies or clams do not "know what real pain is". Meg I 270 KambartelVsNeurocybernetics/MalcolmVsNeurocybernetics/Tetens: many authors oriented towards Wittgenstein deny that human behaviour can be described, without gaps, as physiologically caused. Tetens IV 157 HungerlandVs"inductive view" of artificial intelligence: false generalization: that people generally believe what they say. |
Dennett I D. Dennett Darwin’s Dangerous Idea, New York 1995 German Edition: Darwins gefährliches Erbe Hamburg 1997 Dennett IV Daniel Dennett "Animal Consciousness. What Matters and Why?", in: D. C. Dennett, Brainchildren. Essays on Designing Minds, Cambridge/MA 1998, pp. 337-350 In Der Geist der Tiere, D Perler/M. Wild Frankfurt/M. 2005 Tetens I H. Tetens Geist, Gehirn, Maschine Stuttgart 1994 W VII H. Tetens Tractatus - Ein Kommentar Stuttgart 2009 |
Penrose, R. | Dennett Vs Penrose, R. | I 614 Gödel/Toshiba Library/Dennett: "there is no single algorithm that can prove all truths of arithmetic." Dennett: But Gödel says nothing about all the other algorithms in the library! I 617/618 In particular, he says nothing about whether or not there are algorithms in the library for the very impressive performance of "calling sentences true"! "Mathematical intuition", risky, heuristic algorithms, etc. DennettVsPenrose: he makes the mistake of ignoring this group of possible algorithms and of focusing solely on those whose impossibility Gödel demonstrated - or about which Gödel says anything at all. Dennett: an algorithm can bring forth "mathematical insight", although it is not an "algorithm for mathematical insight"! I 615 PenroseVsArtificial Intelligence: x can perfectly achieve a checkmate - there is no algorithm for chess. Therefore, the good performance of x cannot be explained by the fact that x can run an algorithm. I 617 DennettVsPenrose: that's wrong. The level of the algorithm is obviously the correct level of explanation. X wins because he has the better algorithm! I 619 Fallacy: if the mind is an algorithm, then it certainly cannot be seen or accessed by those whose mind it generates. E.g. there is no specific algorithm for distinguishing italics from bold print, but that does not mean that they cannot be distinguished. E.g. Suppose in the Library of Babel there is a single book which contains, in alphabetical order, all New York telephone subscribers whose net worth is over $1 million ("Megaphone Book"). Now we can prove several statements about this book: 1) The first letter on the first page is an A. 2) The first letter on the last page is not an A. E.g. The fact that we cannot find any remains of the "mitochondrial Eve" does not mean that we cannot derive any statements about her. I 619 Penrose: if you take any single algorithm, it cannot be the method by which human mathematicians ensure mathematical truths. Accordingly, they do not use an algorithm at all. I 621 DennettVsPenrose: this does not show that a human brain does not operate algorithmically. On the contrary, it makes clear how the community of mathematicians can exploit the cranes of culture, with no apparent limits, in decentralized algorithmic processes. I 623 DennettVsPenrose: he says that the brain is not a Turing machine, but he does not say that the brain is not well represented by a Turing machine. I 625/626 Penrose: even a quantum computer would be a Turing machine which can only compute functions that are provably computable. But Penrose also wishes to advance further than that: with "quantum gravity". I 628 DennettVsPenrose: why does he think such a theory should not be computable? Because otherwise AI would be possible! That's all. (Fallacy). DennettVsPenrose: the idea with the microtubules is unconvincing: suppose he were right, then even cockroaches would have such a wayward mind, because they have microtubules just as we do. |
Dennett I D. Dennett Darwin’s Dangerous Idea, New York 1995 German Edition: Darwins gefährliches Erbe Hamburg 1997 Dennett IV Daniel Dennett "Animal Consciousness. What Matters and Why?", in: D. C. Dennett, Brainchildren. Essays on Designing Minds, Cambridge/MA 1998, pp. 337-350 In Der Geist der Tiere, D Perler/M. Wild Frankfurt/M. 2005 |
Searle, J.R. | Dennett Vs Searle, J.R. | I 282 Intentionality/Darwin/Dennett: Darwin turns it all around: intentionality is secured from the bottom up. The first meaning was not a fully developed meaning; it certainly does not show all 'essential' properties (whatever they may be). "Quasi-meaning", half semantics. I 555 SearleVsDennett: "as-if intentionality". Intentionality/DennettVsSearle: But you have to start somewhere (if you want to avoid metaphysics). The first step in the right direction is hardly recognizable as a step towards meaning. SearleVsArtificial Intelligence: Computers only possess "as-if intentionality". DennettVsSearle: then he has a problem. While AI says we are composed of machines, Darwinism says we are descended from machines! I 557 You can hardly reject the first if you agree with the second statement. How can something that has emerged from machines be anything other than a much, much more sophisticated machine? Function/Searle: (according to Dennett): Only products that have been produced by a real human consciousness have a function ((s) > objet ambigu, Valéry). DennettVsSearle: i.e. the wings of the aircraft, but not the wings of the eagle, would serve for flying! I 558 Intentionality/SearleVsDennett: cannot be achieved by the composition of machines or by ever better structured algorithms. I 569 DennettVsSearle: this is the belief in skyhooks: the mind is not supposed to emerge, it is not created, but is only the (inexplicable) source of creation. Intention/DennettVsSearle: (e.g. vending machine): Those who select its new function perhaps do not even formulate any new intention. They only fall into the habit of relying on the new useful function. They do not notice that they are carrying out an act of unconscious exaptation. Parallel: >Darwin: There is an unconscious selection of properties in domesticated animals. II 73 Searle: In the case of an artifact, the creator must always be asked. Intrinsic (original) intentionality/DennettVsSearle: is metaphysical, an illusion. As if the "author would need to have a more original intention". Dennett: but there is no job for it to do. The hypothetical robot would be equally capable of transferring derived intentionality to other artifacts. Intentionality/DennettVsSearle: there certainly used to be coarser forms of intentionality (Searle, contemptuously: "mere as-if intentionality"). Dennett: they serve both as temporal precursors and as current components. We are descended from robots and consist of robots (DNA, macromolecules). All intentionality we enjoy is derived from the more fundamental intentionality of these billions of systems. |
Dennett I D. Dennett Darwin’s Dangerous Idea, New York 1995 German Edition: Darwins gefährliches Erbe Hamburg 1997 |
Various Authors | Locke Vs Various Authors | Danto I 112 LockeVsInnate Ideas: God created us so that we can acquire the basic ideas with our senses; it would therefore be superfluous to equip us with innate ideas. Locke I 78 Second Treatise Law/LockeVsFilmer: Adam did not obtain an absolute right of dominion over his children or over the world either by right of fatherhood or by God's positive gift. Had he possessed it, his heirs would not have possessed it. And had these attained it, there is no determination, either by natural law or by positive law, from which it could be known who is entitled to the right of inheritance. I 79 Legitimacy/Locke: claims to derive political power from its "true origin": the state of nature, in which there is no political power. Locke I 159 Law of Nature/LockeVsGrotius: unthinkable without God's existence (Grotius: thinkable, even if the assumption would be a great crime!). Locke II 195/196 Language/LockeVsArtificial Language: (a fashion of the time, according to Leibniz, on the model of algebra): instead, analysis of the use of language and critical discussion of its function. An individual cannot reform his or her mother tongue. |
Loc III J. Locke An Essay Concerning Human Understanding Danto I A. C. Danto Connections to the World - The Basic Concepts of Philosophy, New York 1989 German Edition: Wege zur Welt München 1999 Danto III Arthur C. Danto Nietzsche as Philosopher: An Original Study, New York 1965 German Edition: Nietzsche als Philosoph München 1998 Danto VII A. C. Danto The Philosophical Disenfranchisement of Art (Columbia Classics in Philosophy) New York 2005 |
Vitalism | Dennett Vs Vitalism | Metz II 691 VsArtificial Consciousness/VsRobots/Dennett: Traditional ArgumentsVsArtificial Intelligence: 1) Robots are purely physical objects, while something immaterial is required for consciousness. DennettVs: That is Cartesian dualism. II 692 2) Robots are not organic; consciousness can only exist in organic brains. (Vitalism.) DennettVsVitalism: is deservedly dead, since biochemistry has shown that the properties of all organic compounds can be reduced mechanistically and are therefore also reproducible, at any scale, in another physical medium. 3) Robots are artifacts, and only something natural and born can have consciousness. (Chauvinism of origin.) DennettVsChauvinism of Origin/Forgery/Dennett: II 694 E.g. a fake cheap wine can also be a good wine if experts consider it good. E.g. a fake Cézanne is also a good picture if "experts" consider it good. Dennett: but these distinctions represent dangerous nonsense if they refer to alleged "intrinsic properties". (That would mean that the molecules still needed the consecration of a befitting birth; that would be racism.) (By the way, the robot COG passes through a childhood period of learning.) Forgery/Dennett: whether a fake is produced artificially, atom by atom (but with the same molecular compounds), may have legal consequences with respect to a clone that does not deserve the same punishment. II 695 Dennett: e.g. the movie "Schindler's List" could in principle be produced artificially through computer animation, because it consists only of two-dimensional gray tones on the screen. II 696 4) Robots will always be too simple to have consciousness. Dennett: this is the only acceptable argument, even if we try to refute it. The human body consists of trillions of individual parts. But wherever one looks, one discovers functional similarities at higher levels that allow us to replace hellishly complex modules with relatively simple ones. II 697 There is no reason to believe that any part of the brain could not be substituted. Robots/Dennett: robot enthusiasts who believe it is easy to construct a conscious robot reveal an infantile understanding of the real world and of the intricacies of consciousness. |
Dennett I D. Dennett Darwin’s Dangerous Idea, New York 1995 German Edition: Darwins gefährliches Erbe Hamburg 1997 Dennett IV Daniel Dennett "Animal Consciousness. What Matters and Why?", in: D. C. Dennett, Brainchildren. Essays on Designing Minds, Cambridge/MA 1998, pp. 337-350 In Der Geist der Tiere, D Perler/M. Wild Frankfurt/M. 2005 |
Disputed term/author/ism | Pro/Versus | Entry | Reference |
---|---|---|---|
Darwinism | Versus | Dennett I 543 ChomskyVsSkinner, ChomskyVsArtificial Intelligence, ChomskyVsDarwin |
Dennett I D. Dennett Darwin’s Dangerous Idea, New York 1995 German Edition: Darwins gefährliches Erbe Hamburg 1997 |
Disputed term/author/ism | Author | Entry | Reference |
---|---|---|---|
Computation | Dennett, D. | Metz II 709 Symbol/anchoring of symbols/AI/VsAI/VsArtificial Intelligence/Harnad/Dennett: Thesis: the internal symbols of the computer are not adequately anchored (grounded) in the world. - DennettVsVs: this is solved by learning. - HaugelandVsAI: an AI cannot know what real pain is. - DennettVsVs: do clams know what "real pain" is? |
|