Dictionary of Arguments


Philosophical and Scientific Issues in Dispute
 
Find counter arguments by entering NameVs… or …VsName.



The author or concept searched is found in the following 11 entries.
Disputed term/author/ism Author
Entry
Reference
Artificial Intelligence Searle I 60
Artificial Intelligence/AI/thesis: the mind relates to the brain as the program relates to the hardware. Different material structures can be mentally equivalent if they are different hardware realizations of the same computer program. The brain is then not important for the mind. This was one of the most exciting developments in the two-thousand-year history of materialism.
AI thesis: one could be a materialist through and through and at the same time - like Descartes - be of the opinion that the brain is not really important for the mind.
In this way one can specify and understand the typical mental aspects of the mind without knowing how the brain functions. Even as a materialist one does not need to explore the brain to explore the mind. >Materialism as a concept, >positions of materialism.
I 61
SearleVs: see >Chinese Room. VsArtificial Intelligence: objection from common sense: the computer model of the mind ignores decisive factors, such as consciousness and intentionality.
I 227
Definition strong artificial intelligence/Searle: the mind is like a program. Definition weak artificial intelligence: brain processes can be simulated with computers. Definition Cognitivism: the brain is like a computer.
I 228
Artificial Intelligence: semantics is completely mirrored in syntax (proof theory). SearleVs: whether something is a program, an algorithm, or a computer cannot be decided empirically - it is description-dependent.
Perler I 145
"Strong artificial intelligence"/Searle: expression of traditional dualism: that the specific neurobiology of the brain is not important.

Searle I
John R. Searle
The Rediscovery of the Mind, Massachusetts Institute of Technology 1992
German Edition:
Die Wiederentdeckung des Geistes Frankfurt 1996

Searle II
John R. Searle
Intentionality. An essay in the philosophy of mind, Cambridge/MA 1983
German Edition:
Intentionalität Frankfurt 1991

Searle III
John R. Searle
The Construction of Social Reality, New York 1995
German Edition:
Die Konstruktion der gesellschaftlichen Wirklichkeit Hamburg 1997

Searle IV
John R. Searle
Expression and Meaning. Studies in the Theory of Speech Acts, Cambridge/MA 1979
German Edition:
Ausdruck und Bedeutung Frankfurt 1982

Searle V
John R. Searle
Speech Acts, Cambridge/MA 1969
German Edition:
Sprechakte Frankfurt 1983

Searle VII
John R. Searle
Behauptungen und Abweichungen
In
Linguistik und Philosophie, G. Grewendorf/G. Meggle Frankfurt/M. 1974/1995

Searle VIII
John R. Searle
Chomskys Revolution in der Linguistik
In
Linguistik und Philosophie, G. Grewendorf/G. Meggle Frankfurt/M. 1974/1995

Searle IX
John R. Searle
"Animal Minds", in: Midwest Studies in Philosophy 19 (1994) pp. 206-219
In
Der Geist der Tiere, D Perler/M. Wild Frankfurt/M. 2005


Perler I
Dominik Perler
Markus Wild
Der Geist der Tiere Frankfurt 2005
Artificial Intelligence Chalmers I 185
Artificial Intelligence/Chalmers: Suppose we had an artificial system that rationally reflects on what it perceives. Would this system have a concept of consciousness? It would certainly have a concept of the self; it could distinguish itself from the rest of the world and would have more direct access to its own cognitive contents than to those of others. So it would have a certain kind of self-awareness. Such a system will not say of itself that it has no idea what it is like to see a red triangle. Nor does it need access to its elements on a deeper level (Hofstadter 1979(1), Winograd 1972(2)). N.B.: such a system would have a similar attitude to its inner life as we do to ours.
---
I 186
Behavioral explanation/Chalmers: to explain the behavior of such systems, we never need to attribute consciousness. Perhaps such systems have consciousness, perhaps not; the explanation of their behavior is independent of this. ---
I 313
Artificial Intelligence/VsArtificial Intelligence/Chalmers: DreyfusVsArtificial Intelligence (Dreyfus 1972(7)): Machines cannot achieve the flexible and creative behavior of humans. LucasVsArtificial Intelligence/PenroseVsArtificial Intelligence/Chalmers (Lucas 1961(3), Penrose 1989(4)): Computers can never reach the mathematical understanding of humans because they are limited by Gödel's theorem in a way in which humans are not. Chalmers: these are external objections. The internal objections are more interesting:
VsArtificial Intelligence: internal argument: machines cannot develop a conscious mind. SearleVsArtificial Intelligence: >Chinese Room argument (Searle 1980(5)). According to this, a computer is at best a simulation of consciousness, a zombie.
Artificial Intelligence/ChalmersVsSearle/ChalmersVsPenrose/ChalmersVsDreyfus: it is not obvious how certain physical structures in a computer could give rise to consciousness, but the same applies to the structures in the brain.
---
I 314
Definition Strong Artificial Intelligence/Searle/Chalmers: Thesis: There is a non-empty class of computations so that the implementation of each operation from this class is sufficient for a mind and especially for conscious experiences. This is only true with natural necessity, because it is logically possible that any computation can do without consciousness, but this also applies to brains. ---
I 315
Implementation/Chalmers: this term is needed as a bridge between abstract computations and concrete physical systems in the world. We also sometimes say that our brain implements computations. Implementation/Searle (Searle 1990b(6)): Thesis: implementation is an observer-relative term. If you want, you can regard every system as implementing something, for example: a wall.
ChalmersVsSearle: one has to specify the implementation, then this problem is avoided.
---
I 318
For example, a combinatorial state machine has quite different implementation conditions than a finite state machine. The causal interaction between the elements is more fine-grained. In addition, combinatorial automata can mirror various other automata, like... ---
I 319
...Turing machines and cellular automata, as opposed to finite or infinite state automata. ChalmersVsSearle: each system implements one computation or another. It is just that not every type (e.g. a combinatorial state machine) is implemented by every system. Observer-relativity remains, but it does not threaten the possibility of artificial intelligence.
---
I 320
This does not say much about the nature of the causal relations.
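Chalmers' point that implementation must be specified can be made concrete with a toy sketch (the example and all names are hypothetical, not from Chalmers' text): a physical system implements a given finite state machine only if some mapping from physical states to machine states commutes with the transition structure, so not every system implements every automaton.

```python
# Toy sketch of an implementation condition (hypothetical example):
# a physical system implements a finite state machine (FSM) if some mapping
# from physical states to FSM states commutes with the transition structure.

def implements(mapping, phys_step, fsm_step):
    """Check that mapping(phys_step(s)) == fsm_step(mapping(s)) for all physical states."""
    return all(mapping[phys_step[s]] == fsm_step[mapping[s]] for s in phys_step)

# Physical system: four voltage states with a fixed transition pattern.
phys_step = {"v0": "v1", "v1": "v0", "v2": "v3", "v3": "v2"}
# Abstract FSM: a one-bit toggle.
fsm_step = {0: 1, 1: 0}
# Candidate mapping grouping physical states into FSM states.
mapping = {"v0": 0, "v1": 1, "v2": 0, "v3": 1}

print(implements(mapping, phys_step, fsm_step))  # True: this system implements the toggle
```

The constraint must hold for every state, which is why a richer automaton (such as a combinatorial state machine) is not implemented by arbitrary systems such as a wall.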

1. D. R. Hofstadter Gödel, Escher Bach, New York 1979
2. T. Winograd, Understanding Natural Language, New York 1972
3. J. R. Lucas, Minds, machines and Gödel, Philosophy 36, 1961, pp. 112-127.
4. R. Penrose, The Emperor's New Mind, Oxford 1989
5. J. R. Searle, Minds, brains and programs. Behavioral and Brain Sciences 3, 1980: pp. 417-424
6. J. R. Searle, Is the brain a digital computer? Proceedings and Addresses of the American Philosophical Association, 1990, 64: pp. 21-37
7. H. Dreyfus, What Computers Can't Do. New York 1972.

Cha I
D. Chalmers
The Conscious Mind Oxford New York 1996

Cha II
D. Chalmers
Constructing the World Oxford 2014

Chinese Room Chalmers I 323
Chinese Room/Searle/Chalmers: Searle's argument is directed against the possibility of understanding or intentionality. ChalmersVsSearle: we separate intentionality and understanding from the possibility of having conscious experiences. We split Searle's argument into two parts:
(1) No program achieves consciousness.
(2) No program achieves intentionality (understanding).
Searle believes that (1) implies (2), others doubt that.
Strong artificial intelligence: if (1) is true, the strong artificial intelligence thesis fails; but if (1) can be refuted, even Searle would accept that the Chinese Room argument has failed. The connection of consciousness and understanding can be set aside; it is not a decisive argument against artificial intelligence.
FodorVsChinese Room (Fodor 1980)(1): Fodor points to the system's connection to its environment.
ReyVsChinese Room (Rey 1986)(2): ditto.
BodenVsChinese Room (Boden 1988)(3): Boden points to functional or procedural approaches to intentionality.
ThagardVsChinese Room (Thagard 1986)(4): ditto.
Chalmers: it is about intentionality (understanding) and does not refute the possibility of consciousness (conscious experiences).
Chinese Room/Chalmers: the argument states that a program is not sufficient, e.g. for the experience of a red object when implemented in a black-and-white environment. Consciousness then needs more than the relevant program.
Strong Artificial IntelligenceVsChinese Room/Strong Artificial IntelligenceVsSearle: it is the whole system to which you have to attribute consciousness, not the individual elements.
SearleVsVs: that is implausible. Chalmers: it is indeed implausible that the inhabitant of the room should have no consciousness, but the inhabitant together with the paper should.
---
I 324
Fading Qualia: the argument can also be applied to the Chinese Room (... + ...) ---
I 325
Dancing Qualia: ditto (... + ...) Conclusion/Chalmers: a system of demons and paper snippets, in which the number of demons and snippets can be reduced, has the same conscious experiences, e.g. understanding Chinese or seeing something red.
Chinese Room/Chalmers: 1. As described by Searle, the stack of paper is not a simple stack, but a dynamic system of symbol manipulation.
2. The role of the inhabitant (in our variant: the demon, which can be multiplied) is quite secondary.
When we look at the causal dynamics between the symbols, it is no longer so implausible to ascribe consciousness to the system.
---
I 326
The inhabitant is only a kind of causal mediator.

1. J. Fodor, Searle on what only brains can do. Behavioral and Brain sciences 3, 1980, pp. 431-32
2. G. Rey, What's really going on in Searle's "Chinese Room", Philosophical Studies 50, 1986: pp. 169-185.
3. M. Boden, Escaping from the Chinese Room, in: Computer Models of Mind, Cambridge 1988.
4. P. Thagard, The emergence of meaning: An escape from Searle's Chinese Room. Behaviorism 14, 1986: pp. 139-46.

Cha I
D. Chalmers
The Conscious Mind Oxford New York 1996

Cha II
D. Chalmers
Constructing the World Oxford 2014

Cognition Searle I 225f
SearleVsCognition: that the brain is like a computer is not the question; the question is: is the mind like a program? Answer: no! Simulation: yes! The mind has intrinsic mental content; therefore it is not a program. A program is syntactically or formally defined; the mind has intrinsically mental content. It follows immediately from this that the program itself cannot constitute the mind. The formal syntax of the program does not by itself guarantee the existence of mental content. (>Chinese Room).
I 226
Church's thesis: everything that can be characterized with sufficient precision as a sequence of steps can be simulated on a digital computer. Searle: brain activities can be simulated on a digital computer in the same sense in which weather patterns, the stock exchange, or air traffic can be.
So the question is not: is the mind a program?, but: is the brain a digital computer?
It could be that mental states are, at least among other things, computational states. That seems to be the view of quite a few people.
I 227
Def Strong Artificial Intelligence (AI): having a mind means having a program, and more is not on the mind. Def Weak AI: brain processes can be simulated using a computer.
Def Cognitivism: cognitivism is the view that the brain is a digital computer.
I 228
What about semantics? After all, programs are purely syntactic. The AI's answer: the development of proof theory has shown that semantic relations can be completely mirrored by the syntactic relations that exist between the propositions. And this is exactly what a computer does: it implements proof theory!
The content of syntactic objects, if any, is irrelevant to how they are processed.
I 229
Note in particular Turing's comparison of conscious program implementation by the human computer and unconscious program implementation by the brain or by a mechanical computer. Furthermore, note the idea that we might discover programs that we have put into our mechanical computers.
(1) It is often suggested that some dualism is the only alternative to the view that the brain is a digital computer.
(2) It is also assumed that the question of whether brain processes are computational is simply an empirical question.
It is as much to be decided by investigation as the question of whether the heart is a pump or not.
I 230
The question of whether the brain is actually a computer is, on this view, no more a philosophical question than a question about chemical processes. Searle: for me, this is a mystery: what kind of fact about the brain could make it a computer?
It is assumed that somehow somebody must have done the basic philosophical work of linking mathematics with electrical engineering. But as far as I can see, this is not the case.
There is little theoretical agreement on absolutely fundamental questions: what exactly is a digital computer? >Computer model, >computation.

Searle I
John R. Searle
The Rediscovery of the Mind, Massachusetts Institute of Technology 1992
German Edition:
Die Wiederentdeckung des Geistes Frankfurt 1996

Searle II
John R. Searle
Intentionality. An essay in the philosophy of mind, Cambridge/MA 1983
German Edition:
Intentionalität Frankfurt 1991

Searle III
John R. Searle
The Construction of Social Reality, New York 1995
German Edition:
Die Konstruktion der gesellschaftlichen Wirklichkeit Hamburg 1997

Searle IV
John R. Searle
Expression and Meaning. Studies in the Theory of Speech Acts, Cambridge/MA 1979
German Edition:
Ausdruck und Bedeutung Frankfurt 1982

Searle V
John R. Searle
Speech Acts, Cambridge/MA 1969
German Edition:
Sprechakte Frankfurt 1983

Searle VII
John R. Searle
Behauptungen und Abweichungen
In
Linguistik und Philosophie, G. Grewendorf/G. Meggle Frankfurt/M. 1974/1995

Searle VIII
John R. Searle
Chomskys Revolution in der Linguistik
In
Linguistik und Philosophie, G. Grewendorf/G. Meggle Frankfurt/M. 1974/1995

Searle IX
John R. Searle
"Animal Minds", in: Midwest Studies in Philosophy 19 (1994) pp. 206-219
In
Der Geist der Tiere, D Perler/M. Wild Frankfurt/M. 2005

Functionalism Chalmers I 15
Functionalism/Lewis/Armstrong/Chalmers: Lewis and Armstrong tried to explain all mental concepts, not only some. ChalmersVsLewis/ChalmersVsArmstrong: both authors made the same mistake as Descartes in assimilating the psychological to the phenomenal (see ChalmersVsDescartes).
E.g. When we wonder whether somebody is having a colour experience, we are not wondering whether they are receiving environmental stimulation and processing it in a certain way. It is a conceptually coherent possibility that something could be playing the causal role without there being an associated experience.
---
I 15
Functionalism/Consciousness/ChalmersVsFunctionalism/ChalmersVsArmstrong/ChalmersVsLewis/Chalmers: There is no mystery about whether a state plays a causal role; at most there are a few technical explanatory problems. Why a phenomenological quality of consciousness is involved is a completely different question. Functionalism/Chalmers: the functionalist denies that these are two different questions. ((s) Also: ChalmersVsDennett).
---
I 231
Functionalism/Consciousness/Chalmers: two variants:
2nd-level functionalism: among these are Rosenthal's approach via 2nd-level thoughts about conscious experiences and Lycan's (1995)(1) approach via 2nd-level perceptions. These theories give good explanations of introspection.
1st-level functionalism: thesis: only 1st-level cognitive states are used. Such theories are better at explaining conscious experiences.
Since, however, not all cognitive states correspond to conscious experiences, one still needs a distinguishing feature for them.
Solution/Chalmers: my criterion for this is the accessibility to global control.
---
I 232
Kirk: (1994) (2): Thesis: "directly active" information is what is needed. Dretske: (1995) (3): Thesis: Experience is information that is represented for a system.
Tye: (1995) (4): Thesis: Information must be "balanced" for purposes of cognitive processing.
---
I 250
Functionalism/VsFunctionalism/Chalmers: the authors who argue from inverted or absent qualia present counterexamples as logical possibilities. This is sufficient in the case of a strong functionalism. The invariance principle (from which it follows that conscious experiences are possible in a system with identical biochemical organization) is a weaker functionalism. Here the merely logical possibility of counterexamples is not sufficient for a refutation. Instead, we need a natural possibility of absent or inverted qualia.
Solution: to consider natural possibility, we will accept fading or "dancing" Qualia.
---
I 275
Functionalism/Chalmers: the arguments concerning absent, inverted and dancing qualia do not support a strong functionalism, but the non-reductive functionalism I advocate. Thesis: functional organization is, with natural necessity, sufficient for conscious experiences. This is a strong conclusion that strengthens the chances for >artificial intelligence. See also Strong Artificial Intelligence/Chalmers.


1. W. G. Lycan, A limited defense of phenomenal information. In: T. Metzinger (ed), Conscious Experience, Paderborn 1995.
2. R. Kirk, Raw Feeling: A Philosophical Account of the Essence of Consciousness. Oxford 1994.
3. F. Dretske, Naturalizing the Mind, Cambridge 1995
4. M. Tye, Ten Problems of Consciousness, Cambridge 1995.

Cha I
D. Chalmers
The Conscious Mind Oxford New York 1996

Cha II
D. Chalmers
Constructing the World Oxford 2014

Machine Learning Pearl Brockman I 15
Machine learning/Pearl: Once you unleash it on large data, deep learning has its own dynamics, it does its own repair and its own optimization, and it gives you the right results most of the time. But when it doesn’t, you don’t have a clue about what went wrong and what should be fixed. In particular, you do not know if the fault is in the program, in the method, or because things have changed in the environment. We should be aiming at a different kind of transparency. VsPearl: Some argue that transparency is not really needed. We don’t understand the neural architecture of the human brain, yet it runs well, so we forgive our meager understanding and use human helpers to great advantage.
PearlVsVs: I know that nontransparent systems can do marvelous jobs, and our brain is proof of that marvel. But this argument has its limitations. The reason we can forgive our meager understanding of how human brains work is because our brains work the same way, and that enables us to communicate with other humans, learn from them, instruct them, and motivate them in our own native language.
Problem: If our robots will all be as opaque as AlphaGo, we won’t be able to hold a meaningful conversation with them, and that would be unfortunate. We will need to retrain them whenever we make a slight change in the task or in the operating environment.
Current machine-learning systems operate almost exclusively in a statistical, or model-blind, mode, which is analogous in many ways to fitting a function to a cloud of data points. Such systems cannot reason about “What if?” questions and, therefore, cannot serve as the basis for Strong AI—that is, artificial intelligence that emulates human-level reasoning and competence. >Strong Artificial Intelligence.
Brockman I 16
(…) current learning machines improve their performance by optimizing parameters for a stream of sensory inputs received from the environment. It is a slow process, analogous to the natural-selection process that drives Darwinian evolution. It explains how species like eagles and snakes have developed superb vision systems over millions of years. It cannot explain, however, the super-evolutionary process that enabled humans to build eyeglasses and telescopes over barely a thousand years.
Brockman I 17
First level: statistical reasoning, which can tell you only how seeing one event would change your belief about another.
Second level: deals with actions. (…) [it] requires information about interventions that is not available in the first [level]. This information can be encoded in a graphical model, which merely tells us which variable responds to another.
Third level: (…) the counterfactual. This is the language used by scientists. “What if the object were twice as heavy?” “What if I were to do things differently?”
Counterfactuals/Pearl: they cannot be derived even if we could predict the effects of all actions. They need an extra ingredient, in the form of equations, to tell us how variables respond to changes in other variables. >Models/Pearl.
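Pearl's claim that counterfactuals need equations, not just predicted effects of actions, can be illustrated with a minimal sketch (a hypothetical toy model, not from Pearl's text), following the usual abduction-action-prediction recipe: recover the unobserved background from the observation, change the variable, and replay the structural equation.

```python
# Toy structural causal model (hypothetical example) for
# "What if the object were twice as heavy?"
# Structural equation: acceleration = force / mass, with force as
# exogenous background that stays fixed across the counterfactual.

def counterfactual_acceleration(observed_accel, mass, new_mass):
    force = observed_accel * mass  # abduction: recover the exogenous force from the observation
    return force / new_mass        # action + prediction: replay the equation with the new mass

a = counterfactual_acceleration(observed_accel=10.0, mass=2.0, new_mass=4.0)
print(a)  # 5.0: doubling the mass halves the acceleration, given the same force
```

The equation is the "extra ingredient": a statistical fit of observed accelerations alone would not say what happens when the mass is changed while the force is held fixed.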


Pearl, Judea.”The Limitations of Opaque Learning Machines.” in: Brockman, John (ed.) 2019. Twenty-Five Ways of Looking at AI. New York: Penguin Press.


Brockman I
John Brockman
Possible Minds: Twenty-Five Ways of Looking at AI New York 2019
Strong Artificial Intelligence Dennett Brockman I 48
Strong Artificial Intelligence/Dennett: [Weizenbaum](1) could never decide which of two theses he wanted to defend: AI is impossible! or AI is possible but evil! He wanted to argue, with John Searle and Roger Penrose, that “Strong AI” is impossible, but there are no good arguments for that conclusion. Dennett: As one might expect, the defensible thesis is a hybrid: AI (Strong AI) is possible in principle but not desirable. The AI that's practically possible is not necessarily evil - unless it is mistaken for Strong AI!
E.g. IBM’s Watson: Its victory in Jeopardy! was a genuine triumph, made possible by the formulaic restrictions of the Jeopardy! rules, but in order for it to compete, even these rules had to be revised (…). Watson is not good company, in spite of misleading ads from IBM that suggest a general conversational ability, and turning Watson into a plausibly multidimensional agent would be like turning a hand calculator into Watson. Watson could be a useful core faculty for such an agent, but more like a cerebellum or an amygdala than a mind - at best, a special-purpose subsystem that could play a big supporting role (…).
Brockman I 50
One can imagine a sort of inverted Turing Test in which the judge is on trial; until he or she can spot the weaknesses, the overstepped boundaries, the gaps in a system, no license to operate will be issued. The mental training required to achieve certification as a judge will be demanding.
Brockman I 51
We don’t need artificial conscious agents. There is a surfeit of natural conscious agents, enough to handle whatever tasks should be reserved for such special and privileged entities. We need intelligent tools. Tools do not have rights, and should not have feelings that could be hurt, or be able to respond with resentment to “abuses” rained on them by inept users.(2) Rationale/Dennett: [these agents] would not (…) share with us (…) our vulnerability or our mortality. >Robots/Dennett.


1. Weizenbaum, J. Computer Power and Human Reason. From Judgment to Calculation. San Francisco: W. H. Freeman, 1976
2. Joanna J. Bryson, “Robots Should Be Slaves,” in Close Engagements with Artificial Companions, Yorick Wilks, ed. (Amsterdam, The Netherlands: John Benjamins, 2010), 63-74; http://www.cs.bath.ac.uk/~jjb/ftp/Bryson-Slaves-Book09.html; Joanna J. Bryson, “Patiency Is Not a Virtue: AI and the Design of Ethical Systems,” https://www.cs.bath.ac.uk/~jjb/ftp/Bryson-Patiency-AAAISS16.pdf [inactive].


Dennett, D. “What can we do?”, in: Brockman, John (ed.) 2019. Twenty-Five Ways of Looking at AI. New York: Penguin Press.

Dennett I
D. Dennett
Darwin’s Dangerous Idea, New York 1995
German Edition:
Darwins gefährliches Erbe Hamburg 1997

Dennett II
D. Dennett
Kinds of Minds, New York 1996
German Edition:
Spielarten des Geistes Gütersloh 1999

Dennett III
Daniel Dennett
"COG: Steps towards consciousness in robots"
In
Bewusstein, Thomas Metzinger Paderborn/München/Wien/Zürich 1996

Dennett IV
Daniel Dennett
"Animal Consciousness. What Matters and Why?", in: D. C. Dennett, Brainchildren. Essays on Designing Minds, Cambridge/MA 1998, pp. 337-350
In
Der Geist der Tiere, D Perler/M. Wild Frankfurt/M. 2005


Brockman I
John Brockman
Possible Minds: Twenty-Five Ways of Looking at AI New York 2019
Strong Artificial Intelligence Chalmers I 314
Definition Strong Artificial Intelligence/Searle/Chalmers: Thesis: There is a non-empty class of computations so that the implementation of each operation from this class is sufficient for a mind and especially for conscious experiences. This is only true with natural necessity, because it is logically possible that any computation can do without consciousness, but this also applies to brains. ---
I 320
A computational description of a system provides a formal description of the causal organization of this system. ---
I 321
Invariance principle: every system with conscious experiences, which has the same functional organization as another system with conscious experiences, will have qualitatively identical conscious experiences. There may be corresponding causal relations between electronic components like there is between neurons in the brain. Fading Qualia/dancing Qualia: we can use these kinds of qualia for arguments for the strong artificial intelligence.
---
I 322
If there were two organizationally identical systems, one of which had conscious experiences, and the other not, one could construct a system with fading or dancing qualia that lay between these two systems. That would be implausible. If fading and dancing qualia are excluded, the thesis of the Strong Artificial Intelligence applies. (> Qualia/Chalmers). ---
I 329
VsArtificial Intelligence/Gödel/Chalmers: in a consistent formal system which is expressive enough for a certain kind of arithmetic, one can construct a sentence which is not provable in this system. Unlike the machine, the human being can see that the sentence is true. ---
I 330
Therefore the human has an ability which the formal system does not have. ChalmersVsVs: there is no reason to believe that the human is aware of the truth of the sentence. At best, we can say that if the system is consistent, the sentence is true. We cannot always determine the consistency of complex systems.
PenroseVsArtificial Intelligence/Chalmers: (Penrose 1994)(1) brings an argument on a lower level: it may be that not all physical processes are computable. ChalmersVsVs: But this is based on the above mentioned Goedel argument. Nothing in physical theory itself supports it.
VsArtificial Intelligence/VsSimulation/Chalmers: what if conscious processes are essentially continuous, but our simulations are discrete?
---
I 331
ChalmersVsVs: there are reasons to assume that absolute continuity is not essential for our cognitive competence. However, it might be that a system with unlimited precision (achieved by continuity) has cognitive abilities that a discrete system does not achieve.


1. R. Penrose, Shadows of the Mind, Oxford 1994

Cha I
D. Chalmers
The Conscious Mind Oxford New York 1996

Cha II
D. Chalmers
Constructing the World Oxford 2014

Strong Artificial Intelligence Weizenbaum Brockman I 48
Strong Artificial Intelligence/Weizenbaum: [Weizenbaum](1) could never decide which of two theses he wanted to defend: AI is impossible! or AI is possible but evil! He wanted to argue, with John Searle and Roger Penrose, that “Strong AI” is impossible, but there are no good arguments for that conclusion. Dennett: After all, everything we now know suggests that, as I have put it, we are robots made of robots made of robots ... down to the motor proteins and their ilk, with no magical ingredients thrown in along the way. >Strong AI/Dennett.


1. Weizenbaum, J. Computer Power and Human Reason. From Judgment to Calculation. San Francisco: W. H. Freeman, 1976

Dennett, D. “What can we do?” in: Brockman, John (ed.) 2019. Twenty-Five Ways of Looking at AI. New York: Penguin Press.

Weizenbaum I
Joseph Weizenbaum
Computer Power and Human Reason. From Judgment to Calculation, W. H. Freeman & Comp. 1976
German Edition:
Die Macht der Computer und die Ohnmacht der Vernunft Frankfurt/M. 1978


Brockman I
John Brockman
Possible Minds: Twenty-Five Ways of Looking at AI New York 2019
Strong Artificial Intelligence Pearl Brockman I 15
Strong Artificial Intelligence/Pearl: [questions like “What if”] serve as a basis for Strong AI - that is, artificial intelligence that emulates human-level reasoning and competence. To achieve human-level intelligence, learning machines need the guidance of a blueprint of reality, a model—similar to a road map that guides us in driving through an unfamiliar city. >Machine learning/Pearl, >Counterfactuals/Pearl, >Models/Pearl.

Pearl, Judea.”The Limitations of Opaque Learning Machines.” in: Brockman, John (ed.) 2019. Twenty-Five Ways of Looking at AI. New York: Penguin Press.


Brockman I
John Brockman
Possible Minds: Twenty-Five Ways of Looking at AI New York 2019

The author or concept searched is found in the following controversies.
Disputed term/author/ism Author Vs Author
Entry
Reference
Chomsky, N. Dennett Vs Chomsky, N. I 513
Chomsky: early thesis: the brain works in a way that ultimately defies scientific analysis. Likewise >Fodor and >McGinn. DennettVsChomsky/DennettVsFodor: this is a kind of saltationist view of the mind: it postulates cracks in the design space and is therefore not Darwinian.
Dennett: Chomsky actually holds quite a Darwinian view of the theory of language, but he has always shunned such views, like Gould.
I 531
"Cognitive closure"/Independence/Chomsky/McGinn: Spiders cannot think about fishing. It is the same for us: the question of free will may not be solvable for us. McGinn/Fodor: human consciousness is such a mystery.
I 533
Cognitive closure/DennettVsMcGinn: the situation of the monkey is different: it cannot even understand the question. It is not even puzzled! Neither Chomsky nor Fodor can cite cases of animals to which certain matters are a mystery. In reality this is not, as they present it, a biological problem but a pseudo-biological one. It also overlooks a biological circumstance: we can certainly find a scale of intelligence in the living world.
I 534
Consciousness/DennettVsMcGinn: apart from problems that are not solvable within the lifetime of the universe, our consciousness is still developing in ways we cannot even imagine today. Why do Chomsky and Fodor dislike this conclusion? They find the means unsatisfactory. If our mind is based not on skyhooks but on cranes, they would rather keep it secret.
I 556
DennettVsChomsky: he is wrong if he thinks a description at the level of machines is conclusive, because that opens the door for >"Strong Artificial Intelligence".

Dennett I
D. Dennett
Darwin’s Dangerous Idea, New York 1995
German Edition:
Darwins gefährliches Erbe Hamburg 1997

Dennett II
D. Dennett
Kinds of Minds, New York 1996
German Edition:
Spielarten des Geistes Gütersloh 1999

Dennett III
Daniel Dennett
"COG: Steps towards consciousness in robots"
In
Bewusstein, Thomas Metzinger Paderborn/München/Wien/Zürich 1996

Dennett IV
Daniel Dennett
"Animal Consciousness. What Matters and Why?", in: D. C. Dennett, Brainchildren. Essays on Designing Minds, Cambridge/MA 1998, pp. 337-350
In
Der Geist der Tiere, D Perler/M. Wild Frankfurt/M. 2005

The author or concept searched is found in the following theses of the more related field of specialization.
Disputed term/author/ism Author
Entry
Reference
Artificial Intelligence Searle, J.R. I 19
Def "strong artificial intelligence"/thesis: a computer might literally have thoughts, feelings and understanding, and this simply because it executes a suitable computer program with the appropriate inputs and outputs. Searle calls this most famous and widespread view "strong artificial intelligence" (strong AI), also "computer functionalism".
I 60
Artificial Intelligence/Thesis: the mind relates to the brain as the program relates to the hardware. One could be a materialist through and through and at the same time - like Descartes - be of the opinion that the brain is not really important for the mind. Thus one can specify and understand the typical mental aspects of the mind without knowing how the brain functions. Even as a materialist one does not need to explore the brain to explore the mind.
I 61
Thus the new discipline of "cognitive science" was born. (SearleVs).
I 227
Def Strong Artificial Intelligence (AI): having a mind means having a program; there is nothing more to the mind. Def Weak AI: brain processes can be simulated by means of a computer.
Def Cognitivism: The idea that the brain is a digital computer.
Def Church-Turing-Thesis: for each algorithm there is a Turing machine.
Def Turing Thesis: there is a universal Turing machine that can simulate any Turing machine.
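The simulation claims in these definitions can be illustrated with a minimal Turing-machine interpreter (an illustrative sketch; the encoding and all names are assumptions, not from the text): one fixed interpreter runs any machine supplied as data, which is the sense in which a universal machine simulates other machines.

```python
# Minimal Turing-machine interpreter (illustrative sketch): a single fixed
# interpreter that runs any machine given as a transition table, in the
# spirit of a universal machine simulating other machines.

def run(rules, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]  # look up (state, symbol) rule
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Example machine: invert a binary string (0 -> 1, 1 -> 0), halting at the blank.
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run(invert, "1011"))  # 0100
```

Any other machine is run by swapping in a different rule table; the interpreter itself never changes, which is the point of the universal-machine thesis.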
Perler/Wild I 145
"Strong AI"/Searle: expression of traditional dualism: that the specific neurobiology of the brain is not important.