Disputed term/author/ism | Author | Entry | Reference |
---|---|---|---|
Ethics | Bostrom | I 257 Ethics/morals/morality/superintelligence/Bostrom: No ethical theory commands majority support among philosophers, so most philosophers must be wrong. ((s)VsBostrom: Which theory is correct is not a matter of majority approval.) I 369 Majorities in ethics/Bostrom: A recent canvass of professional philosophers found the percentage of respondents who “accept or lean toward” various positions. On normative ethics, the results were deontology 25.9%, consequentialism 23.6%, virtue ethics 18.2%. On metaethics, the results were moral realism 56.4%, moral anti-realism 27.7%. On moral judgment: cognitivism 65.7%, non-cognitivism 17.0% (Bourget and Chalmers 2009)(1). >Norms/normativity/superintelligence/Bostrom, >Ethics/superintelligence/Yudkowsky. Morality models: I 259 Coherent Extrapolated Volition/CEV/Yudkowsky: Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted. >Ethics/superintelligence/Yudkowsky. I 266 VsCEV/Bostrom: Instead: Moral rightness/MR/Bostrom: (…) build an AI with the goal of doing what is morally right, relying on the AI’s superior cognitive capacities to figure out just which actions fit that description. We can call this proposal “moral rightness” (MR). The idea is that we humans have an imperfect understanding of what is right and wrong (…). ((s)VsBostrom: This delegates human responsibility and ultimately assumes that human decisions are only provisional until non-human decisions are made.) I 267 BostromVsYudkowsky: MR would do away with various free parameters in CEV, such as the degree of coherence among extrapolated volitions that is required for the AI to act on the result, the ease with which a majority can overrule dissenting minorities, and the nature of the social environment within which our extrapolated selves are to be supposed to have “grown up farther together.” BostromVsMR: MR would also appear to have some disadvantages. Problems: 1. It relies on the notion of “morally right,” a notoriously difficult concept (…). I 268 2. (…) [MR] might not give us what we want or what we would choose if we were brighter and better informed. Solution/Bostrom: Goal for the AI (MP): Among the actions that are morally permissible for the AI, take one that humanity’s CEV would prefer. However, if some part of this instruction has no well-specified meaning, or if we are radically confused about its meaning, or if moral realism is false, or if we acted morally impermissibly in creating an AI with this goal, then undergo a controlled shutdown.(*) Follow the intended meaning of this instruction. I 373 (Annotation) *Moral permissibility/Bostrom: When the AI evaluates the moral permissibility of our act of creating the AI, it should interpret permissibility in its objective sense. In one ordinary sense of “morally permissible,” a doctor acts morally permissibly when she prescribes a drug she believes will cure her patient, even if the patient, unbeknownst to the doctor, is allergic to the drug and dies as a result. Focusing on objective moral permissibility takes advantage of the presumably superior epistemic position of the AI. ((s)VsBostrom: The final sentence (a severability clause) is circular, especially when there are no longer individuals in decision-making positions who could object to it.) >Goals/superintelligence/Bostrom. I 312 Def Common good principle/Bostrom: Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals. I 380 This formulation is intended to be read so as to include a prescription that the well-being of nonhuman animals and other sentient beings (including digital minds) that exist or may come to exist be given due consideration. It is not meant to be read as a license for one AI developer to substitute his or her own moral intuitions for those of the wider moral community. 1. Bourget, David, and David Chalmers. 2009. “The PhilPapers Surveys.” November. Available at http://philpapers.org/surveys/ | Bostrom I: Nick Bostrom, Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press 2017 |
Ethics | Yudkowsky | Bostrom I 259 Ethics/morality/superintelligence/Yudkowsky: Yudkowsky has proposed that a seed AI be given the final goal of carrying out humanity’s “coherent extrapolated volition” (CEV), which he defines as follows: Def CEV/Yudkowsky: Our Coherent Extrapolated Volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted. ((s)VsYudkowsky: (1) This tacitly assumes that moral decisions are subject to progress. (2) Moral decisions should not be made dependent on majorities. (3) The demand that the community’s wishes converge ignores the right to individual autonomy.) >Ethics/superintelligence/Bostrom. | Bostrom I: Nick Bostrom, Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press 2017 |