Disputed term/author/ism | Author | Entry | Reference
---|---|---|---|
Strong Artificial Intelligence | Dennett | Brockman I 48 Strong Artificial Intelligence/Dennett: [Weizenbaum](1) could never decide which of two theses he wanted to defend: AI is impossible! or AI is possible but evil! He wanted to argue, with John Searle and Roger Penrose, that “Strong AI” is impossible, but there are no good arguments for that conclusion. Dennett: As one might expect, the defensible thesis is a hybrid: AI (Strong AI) is possible in principle but not desirable. The AI that’s practically possible is not necessarily evil - unless it is mistaken for Strong AI! E.g. IBM’s Watson: Its victory in Jeopardy! was a genuine triumph, made possible by the formulaic restrictions of the Jeopardy! rules, but in order for it to compete, even these rules had to be revised (…). Watson is not good company, in spite of misleading ads from IBM that suggest a general conversational ability, and turning Watson into a plausibly multidimensional agent would be like turning a hand calculator into Watson. Watson could be a useful core faculty for such an agent, but more like a cerebellum or an amygdala than a mind—at best, a special-purpose subsystem that could play a big supporting role (…). Brockman I 50 One can imagine a sort of inverted Turing Test in which the judge is on trial; until he or she can spot the weaknesses, the overstepped boundaries, the gaps in a system, no license to operate will be issued. The mental training required to achieve certification as a judge will be demanding. Brockman I 51 We don’t need artificial conscious agents. There is a surfeit of natural conscious agents, enough to handle whatever tasks should be reserved for such special and privileged entities. We need intelligent tools. Tools do not have rights, and should not have feelings that could be hurt, or be able to respond with resentment to “abuses” rained on them by inept users.(2) Rationale/Dennett: [these agents] would not (…) share with us (…) our vulnerability or our mortality. 
>Robots/Dennett. 1. Weizenbaum, J. Computer Power and Human Reason. From Judgment to Calculation. San Francisco: W. H. Freeman, 1976. 2. Joanna J. Bryson, “Robots Should Be Slaves,” in Close Engagements with Artificial Companions, Yorick Wilks, ed. (Amsterdam, The Netherlands: John Benjamins, 2010), 63–74; http://www.cs.bath.ac.uk/~jjb/ftp/Bryson-Slaves-Book09.html; Joanna J. Bryson, “Patiency Is Not a Virtue: AI and the Design of Ethical Systems,” https://www.cs.bath.ac.uk/~jjb/ftp/Bryson-Patiency-AAAISS16.pdf [inactive]. Dennett, D. “What can we do?”, in: Brockman, John (ed.) 2019. Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Dennett I D. Dennett Darwin’s Dangerous Idea, New York 1995 German Edition: Darwins gefährliches Erbe Hamburg 1997 Dennett II D. Dennett Kinds of Minds, New York 1996 German Edition: Spielarten des Geistes Gütersloh 1999 Dennett III Daniel Dennett "COG: Steps towards consciousness in robots" In Bewusstsein, Thomas Metzinger Paderborn/München/Wien/Zürich 1996 Dennett IV Daniel Dennett "Animal Consciousness. What Matters and Why?", in: D. C. Dennett, Brainchildren. Essays on Designing Minds, Cambridge/MA 1998, pp. 337-350 In Der Geist der Tiere, D. Perler/M. Wild Frankfurt/M. 2005 Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
Strong Artificial Intelligence | Chalmers | I 314 Definition Strong Artificial Intelligence/Searle/Chalmers: Thesis: There is a non-empty class of computations such that the implementation of any computation in this class is sufficient for a mind and especially for conscious experiences. This holds only with natural necessity, because it is logically possible that any computation could occur without consciousness - but the same applies to brains. >Consciousness/Chalmers, >Consciousness, >Mind, >Experience, >Computation, >Information Processing, >Brain. I 320 A computational description of a system provides a formal description of the causal organization of this system. >Artificial intelligence, >Computer model, cf. >Neural networks. I 321 Invariance principle: every system with conscious experiences that has the same functional organization as another system with conscious experiences will have qualitatively identical conscious experiences. There may be causal relations between electronic components corresponding to those between neurons in the brain. Fading qualia/dancing qualia: these kinds of qualia can be used in arguments for Strong Artificial Intelligence. >Qualia/Chalmers. I 322 If there were two organizationally identical systems, one of which had conscious experiences and the other not, one could construct a system with fading or dancing qualia that lies between these two systems. That would be implausible. If fading and dancing qualia are excluded, the thesis of Strong Artificial Intelligence holds. (>Qualia/Chalmers). I 329 VsArtificial Intelligence/Gödel/Chalmers: in a consistent formal system which is expressive enough for a certain kind of arithmetic, one can construct a sentence which is not provable in this system. Unlike the machine, the human being can see that the sentence is true. >Provability. I 330 Therefore the human has an ability which the formal system does not have. 
ChalmersVsVs: there is no reason to believe that the human is aware of the truth of the sentence. At best, we can say that if the system is consistent, the sentence is true. We cannot always determine the consistency of complex systems. >Consistency. PenroseVsArtificial Intelligence/Chalmers: (Penrose 1994)(1) offers an argument at a lower level: it may be that not all physical processes are computable. >Calculability. ChalmersVsVs: but this rests on the above-mentioned Gödel argument; nothing in physical theory itself supports it. VsArtificial Intelligence/VsSimulation/Chalmers: what if the processes of consciousness are essentially continuous, but our simulations are discrete? >Simulation. I 331 ChalmersVsVs: there are reasons to assume that absolute continuity is not essential for our cognitive competence. However, it might be that a system with unlimited precision (achieved by continuity) has cognitive abilities that a discrete system does not achieve. Cf. >Analog/digital. 1. R. Penrose, Shadows of the Mind, Oxford 1994 |
Cha I D. Chalmers The Conscious Mind Oxford New York 1996 Cha II D. Chalmers Constructing the World Oxford 2014 |
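The Gödel argument and Chalmers' reply in the entry above can be made precise with the standard formulation of the incompleteness theorems (a sketch in textbook notation; the symbols F, G_F, Prov and Con are standard conventions, not Chalmers' own):

```latex
% F: a consistent, recursively axiomatized theory containing enough arithmetic.
% G_F: the Goedel sentence of F, which asserts its own unprovability:
F \vdash G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
% What can actually be established is only a conditional, and F itself proves it
% (the formalized first incompleteness theorem):
F \vdash \mathrm{Con}(F) \rightarrow G_F
% By the second incompleteness theorem, F cannot prove its own consistency:
F \nvdash \mathrm{Con}(F)
```

So the human "sees that the sentence is true" only on the assumption that F is consistent, which is Chalmers' point: the machine can derive the same conditional, and for complex systems neither party can verify Con(F).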
Strong Artificial Intelligence | Weizenbaum | Brockman I 48 Strong Artificial Intelligence/Weizenbaum: [Weizenbaum](1) could never decide which of two theses he wanted to defend: AI is impossible! or AI is possible but evil! He wanted to argue, with John Searle and Roger Penrose, that “Strong AI” is impossible, but there are no good arguments for that conclusion. Dennett: After all, everything we now know suggests that, as I have put it, we are robots made of robots made of robots ... down to the motor proteins and their ilk, with no magical ingredients thrown in along the way. >Strong AI/Dennett, >J. Searle. 1. Weizenbaum, J. Computer Power and Human Reason. From Judgment to Calculation. San Francisco: W. H. Freeman, 1976 Dennett, D. “What can we do?” in: Brockman, John (ed.) 2019. Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Weizenbaum I Joseph Weizenbaum Computer Power and Human Reason. From Judgment to Calculation, W. H. Freeman & Comp. 1976 German Edition: Die Macht der Computer und die Ohnmacht der Vernunft Frankfurt/M. 1978 Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
Strong Artificial Intelligence | Pearl | Brockman I 15 Strong Artificial Intelligence/Pearl: [questions like “What if”] serve as a basis for Strong AI - that is, artificial intelligence that emulates human-level reasoning and competence. To achieve human-level intelligence, learning machines need the guidance of a blueprint of reality, a model—similar to a road map that guides us in driving through an unfamiliar city. >Machine learning/Pearl, >Counterfactuals/Pearl, >Models/Pearl. Pearl, Judea. “The Limitations of Opaque Learning Machines.” in: Brockman, John (ed.) 2019. Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
Disputed term/author/ism | Author Vs Author | Entry | Reference
---|---|---|---|
Chomsky, N. | Dennett Vs Chomsky, N. | I 513 Chomsky: early thesis: the brain works in a way that ultimately defies scientific analysis. Similarly >Fodor; also >McGinn. DennettVsChomsky/DennettVsFodor: this is a kind of saltationist view of the mind: it postulates cracks in the design space and is therefore not Darwinian. Dennett: Chomsky actually holds quite a Darwinian view of the theory of language, but he has always shunned such views, like Gould. I 531 "Cognitive lock"/Independence/Chomsky/McGinn: Spiders cannot think about fishing. It is the same for us: the question of free will may not be solvable for us. McGinn/Fodor: human consciousness is such a mystery. I 533 Cognitive lock/DennettVsMcGinn: the situation for the monkey is different: it cannot even understand the question. It is not even shocked! Neither Chomsky nor Fodor can cite cases of animals for which certain matters are a mystery. In reality they present not a biological but a pseudo-biological problem. This even ignores a biological circumstance: we can certainly find a scale of intelligence in the living world. I 534 Consciousness/DennettVsMcGinn: apart from problems that are not solvable within the lifetime of the universe, our consciousness will develop further in ways we cannot even imagine today. Why do Chomsky and Fodor not like this conclusion? They find the means unsatisfactory. If our mind is based not on skyhooks but on cranes, they would prefer to keep it secret. I 556 DennettVsChomsky: he is wrong if he thinks a description at the level of machines is conclusive, because that opens the door for >"Strong Artificial Intelligence". |
Dennett I D. Dennett Darwin’s Dangerous Idea, New York 1995 German Edition: Darwins gefährliches Erbe Hamburg 1997 Dennett II D. Dennett Kinds of Minds, New York 1996 German Edition: Spielarten des Geistes Gütersloh 1999 Dennett III Daniel Dennett "COG: Steps towards consciousness in robots" In Bewusstsein, Thomas Metzinger Paderborn/München/Wien/Zürich 1996 Dennett IV Daniel Dennett "Animal Consciousness. What Matters and Why?", in: D. C. Dennett, Brainchildren. Essays on Designing Minds, Cambridge/MA 1998, pp. 337-350 In Der Geist der Tiere, D. Perler/M. Wild Frankfurt/M. 2005 |
Disputed term/author/ism | Author | Entry | Reference
---|---|---|---|
Artificial Intelligence | Searle, J.R. | I 19 Def "strong artificial intelligence" thesis: a computer might even have thoughts, feelings and understanding, and this simply by virtue of executing a suitable computer program with the appropriate inputs and outputs. Searle calls this most famous and widespread view "strong artificial intelligence" (strong AI), also "computer functionalism". I 60 Artificial Intelligence/Thesis: the mind relates to the brain as the program relates to the hardware. One could be a materialist through and through and at the same time - like Descartes - be of the opinion that the brain is not really important for the mind. Thus one can specify and understand the typically mental aspects of the mind without knowing how the brain functions. Even as a materialist one does not need to explore the brain to explore the mind. I 61 Thus the new discipline of "cognitive science" was born. (SearleVs). I 227 Def Strong Artificial Intelligence (AI): having a mind means having a program; there is nothing more to the mind. Def Weak AI: brain processes can be simulated by means of a computer. Def Cognitivism: the idea that the brain is a digital computer. Def Church-Turing Thesis: for each algorithm there is a Turing machine. Def Turing Thesis: there is a universal Turing machine that can simulate any Turing machine. Perler/Wild I 145 "Strong AI"/Searle: an expression of traditional dualism: the specific neurobiology of the brain is not important. |
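The two Turing theses in the Searle entry above can be illustrated by treating a machine as data: one generic interpreter (playing the role of the universal machine) runs any transition table handed to it. A minimal sketch in Python; the `run_tm` function and the bit-flipping example machine are illustrative assumptions, not anything from Searle's text.

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Interpret any Turing machine given as a transition table.

    transitions: dict mapping (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right). Halts when no rule applies.
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break  # no applicable rule: the machine halts
        state, tape[head], move = transitions[(state, symbol)]
        head += move
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# A concrete machine, encoded purely as data: flip every bit, halt on blank.
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_tm(flipper, "0110"))  # -> 1001
```

The universality idea is visible in the design: `run_tm` never changes, while `flipper` (or any other transition table) is just an input, i.e. a program encoded as data.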