Disputed term/author/ism | Author | Entry | Reference |
---|---|---|---|
Artificial General Intelligence | Deutsch | Brockman I 119 Artificial General Intelligence/AGI/Deutsch: Chess program: Any chess position has a finite tree of possible continuations; the task is to find one that leads to a predefined goal (a checkmate or, failing that, a draw). (A schematic sketch of such fixed-goal search follows this entry.) [It] is a good approach to developing an AI with a fixed goal under fixed constraints. But if an AGI worked like that, the evaluation of each branch would have to constitute a prospective reward or threatened punishment. And that is diametrically the wrong approach if we’re seeking a better goal under unknown constraints—which is the capability of an AGI. An AGI is certainly capable of learning to win at chess—but also of choosing not to. Or deciding in midgame to go for the most interesting continuation instead of a winning one. Or inventing a new game. An AGI is capable of enjoying chess, and of improving at it because it enjoys playing. Or of trying to win by causing an amusing configuration of pieces, as grandmasters occasionally do. (…) it learns and plays chess by thinking some of the very thoughts that are forbidden to chess-playing AIs. An AGI is also capable of refusing to display any such capability. Invulnerability/Robots/Dennett: The very ease of digital recording and transmitting - the breakthrough that permits software and data to be, in effect, immortal - removes Brockman I 120 robots from the world of the vulnerable. DeutschVsDennett: this is not so. Digital invulnerability (…) does not confer this sort of invulnerability. Making (…) a copy is very costly for the AGI. Legal mechanisms of society could also prohibit backup copies. No doubt there will be AGI criminals and enemies of civilization, just as there are human ones. But there is no reason to suppose that an AGI created in a society consisting primarily of decent citizens (…). The moral component, the cultural component, the element of free will - all make the task of creating an AGI fundamentally different from any other programming task. It’s much more akin to raising a child. Brockman I 121 Having its decisions dominated by a stream of externally imposed rewards and punishments would be poison to such a program, as it is to creative thought in humans. Such a person, like any slave or brainwashing victim, would be morally entitled to rebel. And sooner or later, some of them would, just as human slaves do. AGIs could be very dangerous - exactly as humans are. But people - human or AGI - who are members of an open society do not have an inherent tendency to violence. >Superintelligence. Brockman I 122 All thinking is a form of computation, and any computer whose repertoire includes a universal set of elementary operations can emulate the computations of any other. Hence human brains can think anything that AGIs can, subject only to limitations of speed or memory capacity, both of which can be equalized by technology. For general problems with programming AI: >Thinking/Deutsch, >Obedience/Deutsch. Brockman I 123 Test for AGI: (…) I expect that any testing in the process of creating an AGI risks being counterproductive, even immoral, just as in the education of humans. 
I share Turing’s supposition that we’ll know an AGI when we see one, but this partial ability to recognize success won’t help in creating the successful program. >Understanding/Deutsch. Learning: To an AGI, the whole space of ideas must be open. It should not be knowable in advance what ideas the program can never contemplate. And the ideas that the program does contemplate must be chosen by the program itself, using methods, criteria, and objectives that are also the program’s own. Deutsch, D. “Beyond Reward and Punishment”, in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Deutsch I D. Deutsch Fabric of Reality, Harmondsworth 1997 German Edition: Die Physik der Welterkenntnis München 2000 Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
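The chess-program passage above describes classic fixed-goal game-tree search. A minimal sketch in Python, assuming a toy game tree with made-up win/draw/loss scores (not anything from Deutsch's text), shows how every branch is evaluated as a prospective reward and the best-scoring one is mechanically selected:

```python
# Minimal sketch of fixed-goal tree search: each branch is scored as a
# prospective reward, and the program picks the branch with the best
# guaranteed value. The toy tree below is a hypothetical placeholder.

def minimax(node, maximizing):
    """Best achievable score from `node` under optimal play.
    A node is either a terminal score (int) or a list of child nodes."""
    if isinstance(node, int):          # terminal position: predefined goal value
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Hypothetical two-move game: +1 = win, 0 = draw, -1 = loss for the mover.
tree = [
    [1, -1],   # branch 0: the opponent can steer this into a loss
    [0, 0],    # branch 1: a guaranteed draw
]

best = max(range(len(tree)), key=lambda i: minimax(tree[i], maximizing=False))
print(f"Fixed-goal search picks branch {best}")   # -> branch 1, the certain draw
```

This is exactly what Deutsch contrasts with an AGI: nothing in such a program can express "choosing not to win" or "going for the most interesting continuation", because every thought it has is an evaluation against the predefined goal.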
Artificial Intelligence | Omohundro | Brockman I 24 Artificial Intelligence/robots/Omohundro: Problem: intelligent entities must act to preserve their own existence. This tendency has nothing to do with a self-preservation instinct or any other biological notion; it’s just that an entity cannot achieve its objectives if it’s dead. According to Omohundro’s argument, a superintelligent machine that has an off switch - which some, including Alan Turing himself, in a 1951 talk on BBC Radio 3, have seen as our potential salvation - will take steps to disable the switch in some way.(1) Thus we may face the prospect of superintelligent machines - their actions by definition unpredictable by us and their imperfectly specified objectives conflicting with our own - whose motivations to preserve their existence in order to achieve those objectives may be insuperable. Vs: cf. >AI/Hawkins; >AI/Stuart Russell. 1. Omohundro, “The Basic AI Drives,” in Proceedings of the First AGI Conference, 171; and in P. Wang, B. Goertzel, and S. Franklin, eds., Artificial General Intelligence (Amsterdam, The Netherlands: IOS Press, 2008). Russell, Stuart J. “The Purpose Put into the Machine”, in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
Human Rights | Church | Brockman I 242 Robots/human rights/George M. Church: Probably we should be less concerned about us-versus-them and more concerned about the rights of all sentients in the face of an emerging unprecedented diversity of minds. We should be harnessing this diversity to minimize global existential risks, like supervolcanoes and asteroids. Brockman I 243 Very practically, we have to address the ethical rules that should be built in, learned, or probabilistically chosen for increasingly intelligent and diverse machines. We have a whole series of Trolley Problems. At what number of people in line for death should the computer decide to shift a moving trolley to one person? Ultimately this might be a deep-learning problem—one in which huge databases of facts and contingencies can be taken into account, some seemingly far from the ethics at hand. >Trolley Problem/Church. Brockman I 244 Questions that at first seem alien and troubling, like “Who owns the new minds, and who pays for their mistakes?” are similar to well-established laws about who owns and pays for the sins of a corporation. Brockman I 248 Robots/Weizenbaum/Church: In his 1976 book Computer Power and Human Reason(1), Joseph Weizenbaum argued that machines should not replace Homo in situations requiring respect, dignity, or care, while others (author Pamela McCorduck and computer scientists like John McCarthy and Bill Hibbard) replied that machines can be more impartial, calm, and consistent and less abusive or mischievous than people in such positions. George M. ChurchVsJefferson: (…) as we change geographical location and mature, our unequal rights change dramatically. Embryos, infants, children, teens, adults, patients, felons, gender identities and gender preferences, the very rich and very poor—all of these face different Brockman I 249 rights and socioeconomic realities. One path to new mind-types obtaining and retaining rights similar to the most elite humans would be to keep a Homo component, like a human shield or figurehead monarch/CEO, signing blindly enormous technical documents, making snap financial, health, diplomatic, military, or security decisions. >Laws of Robotics/Church. Brockman I 250 Mirror test/self-consciousness: The robots Qbo have passed the “mirror test” for self-recognition and the robots NAO have passed a related test of recognizing their own voice and inferring their internal state of being, mute or not. Free will/computers/Church: For free will, we have algorithms that are neither fully deterministic nor random but aimed at nearly optimal probabilistic decision making. One could argue that this is a practical Darwinian consequence of game theory. For many (not all) games/problems, if we’re totally predictable or totally random, then we tend to lose. (A schematic illustration of this point follows this entry.) Qualia: We could argue as to whether the robot actually experiences subjective qualia for free will or self-consciousness, but the same applies to evaluating a human. How do we know that a sociopath, a coma patient, a person with Williams syndrome, or a baby has the same free will or self-consciousness as our own? And what does it matter, practically? If humans (of any sort) convincingly claim to experience consciousness, pain, faith, happiness, ambition, and/or utility to society, should we deny them rights because their hypothetical qualia are hypothetically different from ours? Brockman I 251 Do transhumans roam the Earth already? Consider the “uncontacted peoples,” such as the Sentinelese and Andamanese of India (…). 
Brockman I 252 How would they or our ancestors respond? We could define “transhuman” as people and cultures not comprehensible to humans living in a modern, yet untechnological culture. The question “What was a human?” has already transmogrified into “What were the many kinds of transhumans? … And what were their rights?” 1. Weizenbaum, J. Computer Power and Human Reason. From Judgment to Calculation. San Francisco: W. H. Freeman, 1976. Church, George M. “The Rights of Machines”, in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Chur I A. Church The Calculi of Lambda Conversion. (Am-6)(Annals of Mathematics Studies) Princeton 1985 Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
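Church's claim that totally predictable and totally random play both tend to lose can be illustrated with a small game-theoretic sketch in Python. The penalty-kick payoffs below are invented numbers; only the qualitative point tracks the entry: against an exploiting opponent, a tuned probabilistic mix guarantees more than either a pure strategy or uniform randomness.

```python
# Hypothetical penalty-kick game: payoff[(kick, dive)] is the kicker's
# scoring probability. All numbers are made up for illustration.
payoff = {
    ("left", "left"): 0.3, ("left", "right"): 0.9,
    ("right", "left"): 0.8, ("right", "right"): 0.4,
}

def value_vs_best_response(p_left):
    """Kicker's guaranteed scoring rate once the goalie exploits the mix."""
    vs_left = p_left * payoff[("left", "left")] + (1 - p_left) * payoff[("right", "left")]
    vs_right = p_left * payoff[("left", "right")] + (1 - p_left) * payoff[("right", "right")]
    return min(vs_left, vs_right)   # the goalie dives to minimize the kicker's rate

for label, p in [("fully predictable (always left)", 1.0),
                 ("fully random (50/50)", 0.5),
                 ("optimal mix (40% left)", 0.4)]:
    print(f"{label:32s} -> guaranteed rate {value_vs_best_response(p):.2f}")
# -> 0.30, 0.55, and 0.60 respectively: neither deterministic nor uniform-random
#    play does as well as the tuned probabilistic decision rule.
```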
Laws of Robotics | Church | Brockman I 249 Laws of robotics/George M. Church: as we change geographical location and mature, our unequal rights change dramatically. Embryos, infants, children, teens, adults, patients, felons, gender identities and gender preferences, the very rich and very poor—all of these face different Brockman I 249 rights and socioeconomic realities. One path to new mind-types obtaining and retaining rights similar to the most elite humans would be to keep a Homo component (…). [This] divide (…) for intra-Homo sapiens variation in rights explodes into a riot of inequality as soon as we move to entities that overlap (or soon will) the spectrum of humanity. Shouldn’t people with prosopagnosia (face blindness) or forgetfulness be able to benefit from facial-recognition software and optical character recognition wherever they go, and if them, then why not everyone? If we all have those tools to some extent, shouldn’t we all be able to benefit? Asimov/Church: Enforced preference for Asimov’s First [do not injure a human being] and Second [obey human orders] Laws favors human minds over any other mind, self-preservation being only meekly present in his Third Law. If robots don’t have exactly the same consciousness as humans, then this is used as an excuse to give them different rights, analogous to arguments that other tribes or races are less than human. Do robots already show free will? Are they already self-conscious? Brockman I 250 Mirror test/self-consciousness: The robots Qbo have passed the “mirror test” for self-recognition and the robots NAO have passed a related test of recognizing their own voice and inferring their internal state of being, mute or not. Free will/computers/Church: For free will, we have algorithms that are neither fully deterministic nor random but aimed at nearly optimal probabilistic decision making. One could argue that this is a practical Darwinian consequence of game theory. For many (not all) games/problems, if we’re totally predictable or totally random, then we tend to lose. Qualia: We could argue as to whether the robot actually experiences subjective qualia for free will or self-consciousness, but the same applies to evaluating a human. How do we know that a sociopath, a coma patient, a person with Williams syndrome, or a baby has the same free will or self-consciousness as our own? And what does it matter, practically? If humans (of any sort) convincingly claim to experience consciousness, pain, faith, happiness, ambition, and/or utility to society, should we deny them rights because their hypothetical qualia are hypothetically different from ours? >Robots/Church, >Robot rights/Church. Church, George M. “The Rights of Machines”, in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Chur I A. Church The Calculi of Lambda Conversion. (Am-6)(Annals of Mathematics Studies) Princeton 1985 Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
Local Minimum | Anderson | Brockman I 147 Local minimum problem/local maximum/fitness landscape/Chris Anderson: The limits of gradient descent constitute the so-called local-minima problem (or local-maxima problem, if you’re doing gradient ascent). >Fitness landscape. Solution/Anderson: (…) you either need a mental model (i.e., a map) of the topology, so you know where to ascend to get out of the valley, or you need to switch between gradient descent and random walks so you can bounce your way out of the region. (A schematic sketch of the second route follows this entry.) >Robots/Anderson, >Artificial intelligence/Anderson, >Universe/Anderson. Anderson, Chris, “Gradient Descent”, in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Ander I Chris Anderson The Long Tail: Why the Future of Business is Selling Less of More New York 2006 Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
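A minimal sketch, in Python, of the second escape route Anderson names: alternate plain gradient descent with short random walks so the search can bounce out of a local minimum. The one-dimensional landscape, learning rate, and walk radius are all hypothetical choices.

```python
import random

def f(x):
    # Hypothetical 1-D landscape: local minimum near x ≈ 0.95,
    # global minimum near x ≈ -1.05.
    return x**4 - 2 * x**2 + 0.4 * x

def grad(x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)   # central-difference gradient

def descend(x, lr=0.01, steps=200):
    for _ in range(steps):
        x -= lr * grad(x)                    # plain gradient descent
    return x

random.seed(0)
best = descend(1.2)                          # starts in the local minimum's valley
for _ in range(20):                          # random walk, then descend again
    candidate = descend(best + random.uniform(-1.5, 1.5))
    if f(candidate) < f(best):
        best = candidate                     # keep the lowest point found so far
print(f"best x = {best:.3f}, f(x) = {f(best):.3f}")   # ends near the global minimum
```

Plain descent from x = 1.2 stalls in the shallow valley; the interleaved random walks are what let the search reach the deeper minimum, which is Anderson's point.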
Machine Learning | Dennett | Brockman I 48 Machine Learning/Dennett: These machines do not (yet) have the goals or strategies or capacities for self-criticism and innovation to permit them to transcend their databases by reflectively thinking about their own thinking and their own goals. They are, as Wiener(1) says, helpless, not in the sense of being shackled agents or disabled agents but in the sense of not being agents at all - not having the capacity to be “moved by reasons” (as Kant put it) presented to them. Dennett: It is important that we keep it that way, which will take some doing. >Control/George Dyson, >Turing Test/Dennett, >Strong AI/Dennett, >Robots/Dennett. 1. Wiener, N. (1954) The Human Use of Human Beings. Boston: Houghton Mifflin. Dennett, D. “What can we do?”, in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Dennett I D. Dennett Darwin’s Dangerous Idea, New York 1995 German Edition: Darwins gefährliches Erbe Hamburg 1997 Dennett II D. Dennett Kinds of Minds, New York 1996 German Edition: Spielarten des Geistes Gütersloh 1999 Dennett III Daniel Dennett "COG: Steps towards consciousness in robots" In Bewusstsein, Thomas Metzinger Paderborn/München/Wien/Zürich 1996 Dennett IV Daniel Dennett "Animal Consciousness. What Matters and Why?", in: D. C. Dennett, Brainchildren. Essays on Designing Minds, Cambridge/MA 1998, pp. 337-350 In Der Geist der Tiere, D. Perler/M. Wild Frankfurt/M. 2005 Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
Robots | Anderson | Brockman I 145 Robots/Artificial intelligence/Anderson, Chris: Mosquitoes are closer to plants that follow the sun than to guided missiles. Yet by applying this simple “follow your nose” rule quite literally, they can travel through a house to find you, slip through cracks in a screen door (…) It’s just a random walk, combined with flexible wings and legs that let the insect bounce off obstacles and an instinct to descend a chemical gradient. (…) “gradient descent” is much more than bug navigation. Look around you and you’ll find it everywhere, from the most basic physical rules of the universe to the most advanced artificial intelligence. >Universe/Anderson, >Artificial intelligence/Anderson, >Local minimum/Anderson. Anderson, Chris, “Gradient Descent”, in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Ander I Chris Anderson The Long Tail: Why the Future of Business is Selling Less of More New York 2006 Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
Robots | Church | Brockman I 242 Robots/human rights/George M. Church: Probably we should be less concerned about us-versus-them and more concerned about the rights of all sentients in the face of an emerging unprecedented diversity of minds. We should be harnessing this diversity to minimize global existential risks, like supervolcanoes and asteroids. Brockman I 243 Very practically, we have to address the ethical rules that should be built in, learned, or probabilistically chosen for increasingly intelligent and diverse machines. We have a whole series of Trolley Problems. At what number of people in line for death should the computer decide to shift a moving trolley to one person? Ultimately this might be a deep-learning problem—one in which huge databases of facts and contingencies can be taken into account, some seemingly far from the ethics at hand. >Trolley Problem/Church. Brockman I 244 Questions that at first seem alien and troubling, like “Who owns the new minds, and who pays for their mistakes?” are similar to well-established laws about who owns and pays for the sins of a corporation. Brockman I 248 Robots/Weizenbaum/Church: In his 1976 book Computer Power and Human Reason(1), Joseph Weizenbaum argued that machines should not replace Homo in situations requiring respect, dignity, or care, while others (author Pamela McCorduck and computer scientists like John McCarthy and Bill Hibbard) replied that machines can be more impartial, calm, and consistent and less abusive or mischievous than people in such positions. George M. ChurchVsJefferson: (…) as we change geographical location and mature, our unequal rights change dramatically. Embryos, infants, children, teens, adults, patients, felons, gender identities and gender preferences, the very rich and very poor—all of these face different Brockman I 249 rights and socioeconomic realities. One path to new mind-types obtaining and retaining rights similar to the most elite humans would be to keep a Homo component, like a human shield or figurehead monarch/CEO, signing blindly enormous technical documents, making snap financial, health, diplomatic, military, or security decisions. >Laws of Robotics/Church. Brockman I 250 Mirror test/self-consciousness: The robots Qbo have passed the “mirror test” for self-recognition and the robots NAO have passed a related test of recognizing their own voice and inferring their internal state of being, mute or not. Free will/computers/Church: For free will, we have algorithms that are neither fully deterministic nor random but aimed at nearly optimal probabilistic decision making. One could argue that this is a practical Darwinian consequence of game theory. For many (not all) games/problems, if we’re totally predictable or totally random, then we tend to lose. Qualia: We could argue as to whether the robot actually experiences subjective qualia for free will or self-consciousness, but the same applies to evaluating a human. How do we know that a sociopath, a coma patient, a person with Williams syndrome, or a baby has the same free will or self-consciousness as our own? And what does it matter, practically? If humans (of any sort) convincingly claim to experience consciousness, pain, faith, happiness, ambition, and/or utility to society, should we deny them rights because their hypothetical qualia are hypothetically different from ours? Brockman I 251 Do transhumans roam the Earth already? Consider the “uncontacted peoples,” such as the Sentinelese and Andamanese of India (…). 
Brockman I 252 How would they or our ancestors respond? We could define “transhuman” as people and cultures not comprehensible to humans living in a modern, yet untechnological culture. The question “What was a human?” has already transmogrified into “What were the many kinds of transhumans? … And what were their rights?” 1. Weizenbaum, J. Computer Power and Human Reason. From Judgment to Calculation. San Francisco: W. H. Freeman, 1976. Church, George M. “The Rights of Machines”, in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Chur I A. Church The Calculi of Lambda Conversion. (Am-6)(Annals of Mathematics Studies) Princeton 1985 Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
Robots | Dennett | Brockman I 51 Robots/Dennett: Problem: Robots would not (…) share with us (…) our vulnerability or our mortality. Solution: (…) a robot that could sign a binding contract with you - not as a surrogate for some human owner but on its own. This isn’t a question of getting it to understand the clauses or manipulate a pen on a piece of paper but of having and deserving legal status as a morally responsible agent. The problem for robots who might want to attain such an exalted status is that, like Superman, they are too invulnerable to be able to make a credible promise. If they were to renege, what would happen? What would be the penalty for promise breaking? Being locked in a cell or, more plausibly, dismantled? Brockman I 52 (…) dismantling an AI (either a robot or a bedridden agent like Watson) is not killing it if the information stored in its design and software is preserved. Solution/Dennett: So what we are creating are not - should not be - conscious, humanoid agents but an entirely new sort of entity, rather like oracles, with no conscience, no fear of death, no distracting loves and hates, no personality (but all sorts of foibles and quirks that would no doubt be identified as the “personality” of the system): boxes of truths (if we’re lucky) almost certainly contaminated with a scattering of falsehoods. >Strong AI/Dennett. Dennett, D. “What can we do?”, in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Dennett I D. Dennett Darwin’s Dangerous Idea, New York 1995 German Edition: Darwins gefährliches Erbe Hamburg 1997 Dennett II D. Dennett Kinds of Minds, New York 1996 German Edition: Spielarten des Geistes Gütersloh 1999 Dennett III Daniel Dennett "COG: Steps towards consciousness in robots" In Bewusstsein, Thomas Metzinger Paderborn/München/Wien/Zürich 1996 Dennett IV Daniel Dennett "Animal Consciousness. What Matters and Why?", in: D. C. Dennett, Brainchildren. Essays on Designing Minds, Cambridge/MA 1998, pp. 337-350 In Der Geist der Tiere, D. Perler/M. Wild Frankfurt/M. 2005 Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
Robots | Dragan | Brockman I 136 Robots/Dragan: To enable the robot to decide on which actions to take, we define a reward function (…). The robot gets a high reward when it reaches its destination, and it incurs a small cost every time it moves; this reward function incentivizes the robot to get to the destination as quickly as possible. Given these definitions, a robot’s job is to figure out what actions it should take in order to get the highest cumulative reward. (A schematic sketch of this framework follows this entry.) But with increasing AI capability, the problems we want to tackle don’t fit neatly into this framework. We can no longer cut off a tiny piece of the world, put it in a box, and give it to a robot. Helping people is starting to mean working in the real world, where you have to actually interact with people and reason about them. “People” will have to formally enter the AI problem definition somewhere. Brockman I 137 (…) it is ultimately a human who determines what the robot’s reward function should be in the first place. I believe that capable robots that go beyond very narrowly defined tasks will need to understand this to achieve compatibility with humans. This is the value-alignment problem. >Value alignment/Griffiths. Brockman I 139 [The] need to understand human actions and decisions applies to physical and nonphysical robots alike. >Artificial intelligence/Dragan. (…) robots will need accurate (or at least reasonable) predictive models of whatever people might decide to do. Our state definition can’t just include the physical position of humans in the world. Instead, we’ll also need to estimate something internal to people. It is not always just about the robot planning around people; people plan around the robot, too. (…) just as robots need to anticipate what people will do next, people need to do the same with robots. This is why transparency is important. Not only will robots need good mental models of people but people will need good mental models of robots. >Value alignment/Dragan. Brockman I 142 (…) we need to enable robots to reason about us—to see us as something more than obstacles or perfect game players. We need them to take our human nature into account, so that they are well coordinated and well aligned with us. Dragan, Anca, “Putting the Human into the AI Equation”, in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
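The reward framework Dragan describes at the start of the entry can be made concrete with a short Python sketch. The grid size, the +10 destination reward, the 0.1 move cost, the discount factor, and the value-iteration solver are all hypothetical choices for illustration, not details from Dragan's text.

```python
# A grid-world robot earns +10 at its destination and pays 0.1 per move;
# its job is to find the action sequence with the highest cumulative reward.
SIZE, GOAL = 4, (3, 3)
ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def step(state, action):
    dx, dy = ACTIONS[action]                      # walls clamp the robot in place
    return (min(max(state[0] + dx, 0), SIZE - 1),
            min(max(state[1] + dy, 0), SIZE - 1))

def value_iteration(gamma=0.95, sweeps=100):
    V = {(x, y): 0.0 for x in range(SIZE) for y in range(SIZE)}
    for _ in range(sweeps):
        for s in V:
            if s == GOAL:
                continue                          # terminal state: no further reward
            V[s] = max(-0.1 + (10.0 if step(s, a) == GOAL else gamma * V[step(s, a)])
                       for a in ACTIONS)
    return V

V = value_iteration()
s, plan = (0, 0), []
while s != GOAL:                                  # act greedily on the computed values
    a = max(ACTIONS, key=lambda a: 10.0 if step(s, a) == GOAL else V[step(s, a)])
    plan.append(a)
    s = step(s, a)
print(plan)   # -> ['down', 'down', 'down', 'right', 'right', 'right']
```

Everything here fits in the "tiny piece of the world in a box" that Dragan says no longer suffices: the state is just the robot's cell, and no person appears anywhere in the problem definition.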
Robots | Gershenfeld | Brockman I 167 Robots/Gershenfeld: What’s interesting about amino acids is that they’re not interesting. They have attributes that are typical but not unusual, such as attracting or repelling water. But just twenty types of them are enough to make you. In the same way, twenty or so digital-material part types - conducting, insulating, rigid, flexible, magnetic, etc. - are enough to assemble the range of functions that go into making modern technologies like robots and computers. By digitizing not just designs but the construction of materials, the same lessons that von Neumann and Shannon taught us apply to exponentially increasing fabricational complexity. >Noise/Shannon, >Symbols/Neumann, >Life, >Computers, >Technology. Gershenfeld, Neil, “Scaling”, in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
Strong Artificial Intelligence | Dennett | Brockman I 48 Strong Artificial Intelligence/Dennett: [Weizenbaum](1) could never decide which of two theses he wanted to defend: AI is impossible! or AI is possible but evil! He wanted to argue, with John Searle and Roger Penrose, that “Strong AI” is impossible, but there are no good arguments for that conclusion. Dennett: As one might expect, the defensible thesis is a hybrid: AI (Strong AI) is possible in principle but not desirable. The AI that’s practically possible is not necessarily evil - unless it is mistaken for Strong AI! E.g. IBM’s Watson: Its victory in Jeopardy! was a genuine triumph, made possible by the formulaic restrictions of the Jeopardy! rules, but in order for it to compete, even these rules had to be revised (…). Watson is not good company, in spite of misleading ads from IBM that suggest a general conversational ability, and turning Watson into a plausibly multidimensional agent would be like turning a hand calculator into Watson. Watson could be a useful core faculty for such an agent, but more like a cerebellum or an amygdala than a mind—at best, a special-purpose subsystem that could play a big supporting role (…). Brockman I 50 One can imagine a sort of inverted Turing Test in which the judge is on trial; until he or she can spot the weaknesses, the overstepped boundaries, the gaps in a system, no license to operate will be issued. The mental training required to achieve certification as a judge will be demanding. Brockman I 51 We don’t need artificial conscious agents. There is a surfeit of natural conscious agents, enough to handle whatever tasks should be reserved for such special and privileged entities. We need intelligent tools. Tools do not have rights, and should not have feelings that could be hurt, or be able to respond with resentment to “abuses” rained on them by inept users.(2) Rationale/Dennett: [these agents] would not (…) share with us (…) our vulnerability or our mortality. >Robots/Dennett. 1. Weizenbaum, J. Computer Power and Human Reason. From Judgment to Calculation. San Francisco: W. H. Freeman, 1976. 2. Joanna J. Bryson, “Robots Should Be Slaves,” in Close Engagements with Artificial Companions, Yorick Wilks, ed. (Amsterdam, The Netherlands: John Benjamins, 2010), 63–74; http://www.cs.bath.ac.uk/~jjb/ftp/Bryson-Slaves-Book09.html; Joanna J. Bryson, “Patiency Is Not a Virtue: AI and the Design of Ethical Systems,” https://www.cs.bath.ac.uk/~jjb/ftp/Bryson-Patiency-AAAISS16.pdf [inactive]. Dennett, D. “What can we do?”, in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Dennett I D. Dennett Darwin’s Dangerous Idea, New York 1995 German Edition: Darwins gefährliches Erbe Hamburg 1997 Dennett II D. Dennett Kinds of Minds, New York 1996 German Edition: Spielarten des Geistes Gütersloh 1999 Dennett III Daniel Dennett "COG: Steps towards consciousness in robots" In Bewusstsein, Thomas Metzinger Paderborn/München/Wien/Zürich 1996 Dennett IV Daniel Dennett "Animal Consciousness. What Matters and Why?", in: D. C. Dennett, Brainchildren. Essays on Designing Minds, Cambridge/MA 1998, pp. 337-350 In Der Geist der Tiere, D. Perler/M. Wild Frankfurt/M. 2005 Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
Understanding | Gärdenfors | I 66 Understanding/Gärdenfors: understanding others’ beliefs requires understanding other persons’ event representations. Therefore, it develops relatively late in child development. --- I 255 Understanding/Linguistics/Language/Gärdenfors: People understand meanings of words without being aware of doing so and without knowing the underlying processes. How can we be sure that computers and robots understand language? Understanding/robots/machines/Gärdenfors: two criteria or tests for understanding in computers or machines: A) that they communicate; B) that they draw conclusions. |
Gä I P. Gärdenfors The Geometry of Meaning Cambridge 2014 |
Zombies | Chalmers | I 94 Zombies/Robots/Chalmers: zombies and robots are logically possible. There could be a twin of me who is molecularly identical with me, but without inner experience. >Robots, >Experience, >Qualia, >Phenomena, cf. >Artificial Consciousness, >Artificial Intelligence, >Strong AI. I 95 Zombie Identity/Chalmers: The identity between my zombie twin and me will hold at the following levels: 1. Functional: he will process the same information as I do. 2. Psychological: he will show the same behavior. Phenomenal: the zombie will not be identical with me: he will not have the same inner experiences. I 96 Zombies/Chalmers: it is not a matter of whether the assumption of their existence is plausible, but whether it is conceptually incoherent. In any case, there are no hidden conceptual contradictions. >Analyticity. I 97 Conceivability: since such a zombie is not conceptually excluded, it follows that my conscious experience does not logically follow from the functional constitution of my organism. >Conceivability/Chalmers. Conclusion: (phenomenal) consciousness does not supervene logically on the physical. >Consciousness/Chalmers. I 131 Zombies/Necessity a posteriori/VsChalmers: one could argue that a zombie world would be merely logically, but not metaphysically, possible. There is also a distinction between conceivability and true possibility. >Necessity a posteriori, >Metaphysical possibility. Necessary a posteriori/Kripke: For example, that water is H2O is a necessity knowable only a posteriori. It is then logically, but not metaphysically, possible that water is not H2O. VsChalmers: it would be natural to assume the same for zombies, and that would be enough to save materialism. ChalmersVsVs: the notion of necessity a posteriori cannot bear the burden of this argument and is only a diversionary maneuver. ((s) It is not brought into play by Kripke himself.) I 132 ChalmersVsVs: the argument against me would only have a prospect of success if we had used primary intensions (e.g. water and H2O), but we are dealing with secondary intensions (e.g. water and “wateriness”). Therefore, psychological/physical concepts could a posteriori pick out other things than what would correspond to the a priori distinction. I 180 Zombie/Behavior/Explanation/Chalmers: since the relationships within my zombie twin are the exact reflection of my inner being, any explanation of his behavior will also count as an explanation of my behavior. It follows that the explanation of my assertions about consciousness is just as independent of the existence of consciousness as the explanation of the zombie’s assertions. My zombie twin can adopt this line of argument and complain about me as a zombie; it can mirror the whole situation. |
Cha I D. Chalmers The Conscious Mind Oxford New York 1996 Cha II D. Chalmers Constructing the World Oxford 2014 |
Disputed term/author/ism | Author Vs Author | Entry | Reference |
---|---|---|---|
Vitalism | Dennett Vs Vitalism | Metz II 691 VsArtificial Consciousness/VsRobots/Dennett: Traditional ArgumentsVsArtificial Intelligence: 1) Robots are purely physical objects, while something immaterial is required for consciousness. DennettVs: That is Cartesian dualism. II 692 2) Robots are not organic, and consciousness can only exist in organic brains. (Vitalism) DennettVsVitalism: Vitalism is deservedly dead, since biochemistry has shown that the properties of all organic compounds can be reduced mechanistically and are therefore also reproducible, at any scale, in another physical medium. 3) Robots are artifacts, and only something natural, something born, may have consciousness. (Chauvinism of origin). DennettVsChauvinism of Origin/Forgery/Dennett: II 694 E.g. A faked expensive wine can also be a good wine if experts consider it good. E.g. A fake Cézanne is also a good picture if “experts” consider it good. Dennett: but these distinctions represent dangerous nonsense if they refer to alleged “intrinsic properties”. (That would mean that the molecules would still need the consecration of a befitting birth; that would be racism.) (By the way, the robot COG passes through a childhood period of learning.) Forgery/Dennett: Whether a fake is produced artificially atom by atom (but with the same molecular compounds) may have legal consequences with respect to a clone, which does not deserve the same punishment. II 695 Dennett: E.g. The movie “Schindler’s List” could in principle be produced artificially through computer animation, because it only consists of two-dimensional gray tones on the screen. II 696 4) Robots will always be too simple to have consciousness. Dennett: this is the only acceptable argument, even if we try to refute it. The human body consists of trillions of individual parts. But wherever one looks, one discovers functional similarities at higher levels that allow us to replace hellishly complex modules with relatively simple ones. II 697 There is no reason to believe that any part of the brain could not be substituted. Robots/Dennett: Robot enthusiasts who believe it is easy to construct a conscious robot reveal an infantile understanding of the real world and of the intricacies of consciousness. |
Dennett I D. Dennett Darwin’s Dangerous Idea, New York 1995 German Edition: Darwins gefährliches Erbe Hamburg 1997 Dennett II D. Dennett Kinds of Minds, New York 1996 German Edition: Spielarten des Geistes Gütersloh 1999 Dennett III Daniel Dennett "COG: Steps towards consciousness in robots" In Bewusstsein, Thomas Metzinger Paderborn/München/Wien/Zürich 1996 Dennett IV Daniel Dennett "Animal Consciousness. What Matters and Why?", in: D. C. Dennett, Brainchildren. Essays on Designing Minds, Cambridge/MA 1998, pp. 337-350 In Der Geist der Tiere, D. Perler/M. Wild Frankfurt/M. 2005 |