Philosophy Dictionary of Arguments

Software agents: Software agents are autonomous programs that perform tasks or make decisions. They can adapt, learn, and interact with their environment to achieve specific goals. See also Software, Computer programming, Computers, Artificial Intelligence, Machine Learning.
_____________
Annotation: The above characterizations of concepts are neither definitions nor exhaustive presentations of the problems related to them. Instead, they are intended to give a short introduction to the contributions below. – Lexicon of Arguments.

 

Marvin Minsky on Software Agents - Dictionary of Arguments

I 169
Software-agents/exploitation/Minsky: How could any specialist cooperate when it doesn't understand how the others work? We manage to do our worldly work despite that same predicament; we deal with people and machines without knowing how their insides work. It's just the same inside the head; each part of the mind exploits the rest, not knowing how the other parts work but only what they seem to do.
Suppose [an agent called] Thirst knows that water can be found in cups — but does not know how to find or reach for a cup; these are things only [agents called] Find and Get can do. Then Thirst must have some way to exploit the abilities of those other agents.
Problem: most of [the] subagents cannot communicate directly with one another. >Society of Minds/Minsky.
No higher-level agency could ever achieve a complex goal if it had to be concerned with every small detail of what each nerve and muscle does. Unless most of its work were done by other agencies, no part of a society could do anything significant.
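This delegation can be pictured as plain message passing between agents that expose only their outward behaviour. The Python sketch below is an illustration only: the agent names Thirst, Find and Get come from Minsky's example, while the activate interface and the way Thirst composes the other two agents are assumptions introduced here, not Minsky's own specification.

# Sketch of agents exploiting one another without knowing each other's
# internals. Each agent exposes only what it seems to do (activate),
# never how it does it.

class Agent:
    def activate(self, goal):
        raise NotImplementedError


class Find(Agent):
    def activate(self, goal):
        # How Find locates things is hidden from every other agent.
        return f"located a {goal}"


class Get(Agent):
    def activate(self, goal):
        # Likewise, Get's motor details are invisible to its callers.
        return f"reached for the {goal}"


class Thirst(Agent):
    # Thirst knows water is found in cups but cannot find or reach one;
    # it exploits Find and Get purely through their outward behaviour.
    def __init__(self, find, get):
        self.find = find
        self.get = get

    def activate(self, goal="water"):
        steps = [self.find.activate("cup"), self.get.activate("cup")]
        steps.append(f"drank the {goal}")
        return steps


thirst = Thirst(Find(), Get())
for step in thirst.activate():
    print(step)

The point of the sketch is only that Thirst never inspects the internals of Find or Get; it relies on what they seem to do, exactly as Minsky describes.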
I 200
Software-Agents/Minsky: What happens when a single agent sends messages to several different agencies? In many cases, such a message will have a different effect on each of those other agencies. Polyneme: (…) I'll call such an agent a polyneme. For example, your word-agent for the word apple must be a polyneme because it sets your agencies for color, shape, and size into unrelated states that represent the independent properties of being red, round, and apple-sized.
But how could the same message come to have such diverse effects on so many agencies, with each effect so specifically appropriate to the idea of apple? There is only one explanation: Each of those other agencies must already have learned its own response to that same signal. Because polynemes, like politicians, mean different things to different listeners, each listener must learn its own, different way to react to that message.
I 201
To understand a polyneme, each agency must learn its own specific and appropriate response. Each agency must have its private dictionary or memory bank to tell it how to respond to every polyneme.
>Frames/Minsky, >Terminology/Minsky.
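A polyneme can be pictured as a broadcaster whose listeners each consult a private lookup table. In the sketch below, the agencies for color, shape and size come from Minsky's apple example; the class names, the dictionary contents and the send/receive interface are assumptions introduced here for illustration only.

# Sketch of a polyneme: one agent broadcasts the same signal ("apple")
# to several agencies, and each agency looks the signal up in its own
# private dictionary to set its own state.

class Agency:
    # Each agency has learned its own response to every polyneme.
    def __init__(self, name, private_dictionary):
        self.name = name
        self.private_dictionary = private_dictionary
        self.state = None

    def receive(self, signal):
        # The same signal means something different to each listener.
        self.state = self.private_dictionary.get(signal, "unknown")


class Polyneme:
    # A word-agent that sets many agencies into unrelated states.
    def __init__(self, listeners):
        self.listeners = listeners

    def send(self, signal):
        for agency in self.listeners:
            agency.receive(signal)


color = Agency("color", {"apple": "red", "banana": "yellow"})
shape = Agency("shape", {"apple": "round", "banana": "curved"})
size = Agency("size", {"apple": "apple-sized", "banana": "hand-length"})

apple_word = Polyneme([color, shape, size])
apple_word.send("apple")

for agency in (color, shape, size):
    print(agency.name, "->", agency.state)
# color -> red, shape -> round, size -> apple-sized

The design mirrors Minsky's point: the polyneme carries no meaning of its own; all the specificity sits in the private dictionaries of the listening agencies.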
Realization/recognizers: When we see an apple, how do we know it as an apple? We can use AND-agents to do many kinds of recognition, but the idea also has serious limitations.
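An AND-agent can be pictured as a recognizer that fires only when every one of its evidence agents fires. The sketch below is illustrative; the feature tests are assumptions, and the second call hints at the rigidity of a strict conjunction (one missing feature vetoes the recognition), which is the kind of limitation the weighted scheme discussed next is meant to relax.

# Sketch of an AND-agent: it reports an apple only when all of its
# evidence agents fire at once. The feature tests are illustrative.

def is_red(scene):
    return scene.get("color") == "red"

def is_round(scene):
    return scene.get("shape") == "round"

def is_apple_sized(scene):
    return scene.get("size") == "apple-sized"

def and_agent(evidence_agents, scene):
    # Strict conjunction: a single missing feature vetoes recognition.
    return all(agent(scene) for agent in evidence_agents)

apple_recognizer = [is_red, is_round, is_apple_sized]

print(and_agent(apple_recognizer, {"color": "red", "shape": "round", "size": "apple-sized"}))    # True
print(and_agent(apple_recognizer, {"color": "green", "shape": "round", "size": "apple-sized"}))  # False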
I 202
Relevance: There are important variations on the theme of weighing evidence. Our first idea was just to count the bits of evidence in favor of an object's being a chair.
Problem: But not all bits of evidence are equally valuable, so we can improve our scheme by giving different weights to different kinds of evidence.
Solution:
Evidence/Rosenblatt: In 1959, Frank Rosenblatt invented an ingenious evidence-weighing machine called a Perceptron. It was equipped with a procedure that automatically learned which weights to use from being told by a teacher which of the distinctions it made were unacceptable.
Problem: MinskyVsRosenblatt/PapertVsRosenblatt: In the book Perceptrons, Seymour Papert and I proved mathematically that no feature-weighing machine can distinguish between the two kinds of patterns [one with connected, the other with disconnected lines]. ((s) Cf. http://aurellem.org/society-of-mind/som-19.7.html, accessed 27.04.2020.)
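The evidence-weighing idea can be sketched as a thresholded weighted sum whose weights are nudged whenever a teacher rejects a distinction. The features, toy examples, threshold and learning rate below are assumptions introduced for illustration; the update used is the standard perceptron rule, given only as a stand-in for Rosenblatt's teacher-driven procedure, and the sketch says nothing about the connectedness limitation proved in Perceptrons.

# Sketch of a weighted-evidence recognizer with perceptron-style
# learning: sum the weighted bits of evidence, fire when the total
# clears a threshold, and adjust the weights on every teacher rejection.

FEATURES = ["has_seat", "has_back", "has_legs", "is_flat"]

def score(weights, evidence, threshold):
    total = sum(weights[f] * evidence.get(f, 0) for f in FEATURES)
    return total >= threshold

def train(examples, labels, rounds=20, rate=1.0, threshold=2.0):
    weights = {f: 0.0 for f in FEATURES}
    for _ in range(rounds):
        for evidence, is_chair in zip(examples, labels):
            predicted = score(weights, evidence, threshold)
            if predicted != is_chair:
                # The teacher says this distinction was unacceptable:
                # nudge each active weight toward the correct answer.
                delta = rate if is_chair else -rate
                for f in FEATURES:
                    weights[f] += delta * evidence.get(f, 0)
    return weights

chairs = [{"has_seat": 1, "has_back": 1, "has_legs": 1},
          {"has_seat": 1, "has_back": 1, "has_legs": 1, "is_flat": 0}]
non_chairs = [{"is_flat": 1, "has_legs": 1},   # a table
              {"has_seat": 1, "has_legs": 1}]  # a stool

weights = train(chairs + non_chairs, [True, True, False, False])
print(weights)

On these toy examples the weight on has_back ends up carrying most of the distinction between chairs and backless furniture, which connects to the remark about negative weights on I 203 below.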
I 203
Relevance: If we changed the values of those evidence weights, this would produce new recognizer-agents. For example, with a negative weight for back, the new agent would reject chairs but would accept benches, stools, or tables.
>Neural Networks, >Frame Theories, >Artificial Neural Networks.
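Minsky's remark about changing the weights can be illustrated directly: with a negative weight on the back feature, the same scoring machinery rejects chairs and accepts backless furniture. The feature names, weights and threshold below are assumptions chosen only for the illustration.

# Sketch: the same recognizer machinery with different weights becomes
# a different recognizer-agent.

def recognize(weights, evidence, threshold=2.0):
    return sum(weights[f] * v for f, v in evidence.items()) >= threshold

chair_weights    = {"has_seat": 1.0, "has_back": 1.0, "has_legs": 1.0}
backless_weights = {"has_seat": 1.0, "has_back": -2.0, "has_legs": 1.0}

chair = {"has_seat": 1, "has_back": 1, "has_legs": 1}
stool = {"has_seat": 1, "has_back": 0, "has_legs": 1}

print(recognize(chair_weights, chair))     # True: the chair recognizer accepts it
print(recognize(backless_weights, chair))  # False: the negative weight for back rejects it
print(recognize(backless_weights, stool))  # True: the new agent accepts stools, benches, tables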

_____________
Explanation of symbols: Roman numerals indicate the source, Arabic numerals indicate the page number. The corresponding books are listed below. ((s)…): Comment by the sender of the contribution. Translations: Dictionary of Arguments
The notes [Concept/Author], [Author1]Vs[Author2] or [Author]Vs[term], as well as "problem:"/"solution:", "old:"/"new:" and "thesis:", are additions made by the Dictionary of Arguments. If a German edition is specified, the page numbers refer to that edition.

Minsky I
Marvin Minsky
The Society of Mind, New York 1985

Minsky II
Marvin Minsky
Semantic Information Processing, Cambridge, MA 2003


Ed. Martin Schulz, access date 2024-04-19