Philosophy Dictionary of Arguments


Anca Dragan on Robots - Dictionary of Arguments

Brockman I 136
Robots/Dragan: To enable the robot to decide on which actions to take, we define a reward function (…).The robot gets a high reward when it reaches its destination, and it incurs a small cost every time it moves; this reward function incentivizes the robot to get to the destination as quickly as possible. Given these definitions, a robot’s job is to figure out what actions it should take in order to get the highest cumulative reward.
But with increasing AI capability, the problems we want to tackle don’t fit neatly into this framework. We can no longer cut off a tiny piece of the world, put it in a box, and give it to a robot. Helping people is starting to mean working in the real world, where you have to actually interact with people and reason about them. “People” will have to formally enter the AI problem definition somewhere.
Brockman I 137
(…) it is ultimately a human who determines what the robot's reward function should be in the first place. I believe that capable robots that go beyond very narrowly defined tasks will need to understand this to achieve compatibility with humans. This is the value-alignment problem. >Value alignment/Griffiths.
Brockman I 139
[The] need to understand human actions and decisions applies to physical and nonphysical robots alike. >Artificial intelligence/Dragan.
(…) robots will need accurate (or at least reasonable) predictive models of whatever people might decide to do. Our state definition can’t just include the physical position of humans in the world. Instead, we’ll also need to estimate something internal to people.
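Estimating "something internal to people" can be illustrated with a simple Bayesian belief over a person's possible goals, updated from observed moves. This is one common modeling choice for such predictions, not necessarily the method in the source; all names, parameters, and values here are hypothetical.

```python
# Illustrative sketch: the robot's state estimate includes not just where a
# person is, but a belief over what they intend (here: which goal they want).
# A move is assumed more likely the more it reduces distance to a goal
# (a soft-rationality model); this is an assumption, not the source's code.
import math

def dist(a, b):
    """Manhattan distance between grid cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def update_belief(belief, position, move, goals, beta=2.0):
    """Update P(goal) after observing the person take `move` from `position`."""
    new_pos = (position[0] + move[0], position[1] + move[1])
    posterior = {}
    for g in goals:
        progress = dist(position, g) - dist(new_pos, g)  # >0 if moving toward g
        posterior[g] = belief[g] * math.exp(beta * progress)
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

goals = [(5, 0), (0, 5)]
belief = {g: 0.5 for g in goals}                        # uniform prior
belief = update_belief(belief, (0, 0), (1, 0), goals)   # step toward (5, 0)
assert belief[(5, 0)] > belief[(0, 5)]
```

After one observed step toward (5, 0), the belief shifts toward that goal: the robot is now predicting an internal quantity (intent) rather than just tracking physical position.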
It is not always just about the robot planning around people; people plan around the robot, too.
(…) just as robots need to anticipate what people will do next, people need to do the same with robots. This is why transparency is important. Not only will robots need good mental models of people but people will need good mental models of robots. >Value alignment/Dragan.
Brockman I 142
(…) we need to enable robots to reason about us—to see us as something more than obstacles or perfect game players. We need them to take our human nature into account, so that they are well coordinated and well aligned with us.


Dragan, Anca, "Putting the Human into the AI Equation", in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press.

_____________
Explanation of symbols: Roman numerals indicate the source; Arabic numerals indicate the page number. The corresponding books are listed on the right-hand side. ((s) …): Comment by the sender of the contribution. Translations: Dictionary of Arguments.
The notes [Concept/Author], [Author1]Vs[Author2], or [Author]Vs[term], as well as "problem:"/"solution:", "old:"/"new:", and "thesis:", are additions from the Dictionary of Arguments. If a German edition is specified, the page numbers refer to that edition.
Dragan, Anca
Brockman I
John Brockman
Possible Minds: Twenty-Five Ways of Looking at AI. New York 2019


Ed. Martin Schulz, access date 2024-04-20