Marvin Minsky on Software Agents - Dictionary of Arguments
Software-agents/exploitation/Minsky: How could any specialist cooperate when it doesn't understand how the others work? We manage to do our worldly work despite that same predicament; we deal with people and machines without knowing how their insides work. It's just the same inside the head; each part of the mind exploits the rest, not knowing how the other parts work but only what they seem to do.
Suppose [an agent called] Thirst knows that water can be found in cups — but does not know how to find or reach for a cup; these are things only [agents called] Find and Get can do. Then Thirst must have some way to exploit the abilities of those other agents.
Problem: most of [the] subagents cannot communicate directly with one another. >Society of Minds/Minsky.
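The Thirst/Find/Get example can be sketched in code: each agent exploits the others only through a narrow interface describing what they seem to do, never their internals. All class and method names below are illustrative inventions, not Minsky's.

```python
# Sketch of Minsky's Thirst/Find/Get example: each agent exploits the
# others through "what they seem to do", never through their internals.
# All names and behaviors here are hypothetical stand-ins.

class Find:
    """Knows how to locate a cup; its internals are hidden from Thirst."""
    def locate_cup(self):
        return ("kitchen", "cup")   # stub: (location, object)

class Get:
    """Knows how to reach for and fetch an object."""
    def fetch(self, located):
        location, obj = located
        return f"holding {obj} from {location}"

class Thirst:
    """Knows that water is found in cups, but not how to find or reach one."""
    def __init__(self, find, get):
        self.find = find   # exploited, not understood
        self.get = get

    def quench(self):
        return self.get.fetch(self.find.locate_cup())

print(Thirst(Find(), Get()).quench())  # -> holding cup from kitchen
```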
No higher-level agency could ever achieve a complex goal if it had to be concerned with every small detail of what each nerve and muscle does. Unless most of its work were done by other agencies, no part of a society could do anything significant.
Software-Agents/Minsky: What happens when a single agent sends messages to several different agencies? In many cases, such a message will have a different effect on each of those other agencies. Polyneme: (…) I'll call such an agent a polyneme. For example, your word-agent for the word apple must be a polyneme because it sets your agencies for color, shape, and size into unrelated states that represent the independent properties of being red, round, and apple-sized.
But how could the same message come to have such diverse effects on so many agencies, with each effect so specifically appropriate to the idea of apple? There is only one explanation: Each of those other agencies must already have learned its own response to that same signal. Because polynemes, like politicians, mean different things to different listeners, each listener must learn its own, different way to react to that message.
To understand a polyneme, each agency must learn its own specific and appropriate response. Each agency must have its private dictionary or memory bank to tell it how to respond to every polyneme.
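The polyneme mechanism can be sketched as a single broadcast signal that each agency interprets through its own private dictionary. The agency names and learned states follow Minsky's apple example; the code structure itself is an assumption for illustration.

```python
# Sketch of a polyneme: one signal ("apple") is broadcast to several
# agencies, each of which has learned its own private response to it.

class Agency:
    def __init__(self, name):
        self.name = name
        self.dictionary = {}   # private memory bank: polyneme -> state
        self.state = None

    def learn(self, polyneme, state):
        self.dictionary[polyneme] = state

    def receive(self, polyneme):
        self.state = self.dictionary.get(polyneme)

color, shape, size = Agency("color"), Agency("shape"), Agency("size")

# Each agency has already learned its own response to the same signal.
color.learn("apple", "red")
shape.learn("apple", "round")
size.learn("apple", "apple-sized")

def broadcast(polyneme, agencies):
    for agency in agencies:
        agency.receive(polyneme)

broadcast("apple", [color, shape, size])
print(color.state, shape.state, size.state)  # red round apple-sized
```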
Realization/recognizers: When we see an apple, how do we know it as an apple? We can use AND-agents to do many kinds of recognition, but the idea also has serious limitations.
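An AND-agent can be sketched as a recognizer that fires only when every required feature is present. The feature names are hypothetical; the all-or-nothing behavior is the limitation the text goes on to address.

```python
# Sketch of an AND-agent recognizer: fires only if every required
# feature is present. Feature names are illustrative.

def make_and_agent(required_features):
    def recognize(observed_features):
        return all(f in observed_features for f in required_features)
    return recognize

chair_agent = make_and_agent({"seat", "back", "legs"})

print(chair_agent({"seat", "back", "legs"}))  # True
print(chair_agent({"seat", "legs"}))          # False: one missing feature vetoes
```

The serious limitation is visible here: a single missing feature vetoes recognition entirely, with no notion of partial evidence.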
Relevance: There are important variations on the theme of weighing evidence. Our first idea was just to count the bits of evidence in favor of an object's being a chair.
Problem: But not all bits of evidence are equally valuable, so we can improve our scheme by giving different weights to different kinds of evidence.
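Weighted evidence can be sketched as a scored sum against a threshold. The particular weights, features, and threshold below are illustrative assumptions; the second recognizer anticipates Minsky's later remark that a negative weight for "back" yields an agent that rejects chairs but accepts backless furniture.

```python
# Sketch of a weighted-evidence recognizer: each feature contributes a
# weight, and the agent fires when the total crosses a threshold.
# All weights, features, and thresholds are illustrative.

def recognizer(weights, threshold, observed):
    score = sum(weights.get(f, 0) for f in observed)
    return score >= threshold

CHAIR_WEIGHTS = {"seat": 3, "back": 2, "legs": 2}
print(recognizer(CHAIR_WEIGHTS, 6, {"seat", "back", "legs"}))  # True
print(recognizer(CHAIR_WEIGHTS, 6, {"seat", "legs"}))          # False: a stool

# Flipping the weight for "back" negative produces a new agent that
# rejects chairs but accepts backless furniture such as stools.
BENCH_WEIGHTS = {"seat": 4, "back": -3, "legs": 2}
print(recognizer(BENCH_WEIGHTS, 6, {"seat", "legs"}))          # True
print(recognizer(BENCH_WEIGHTS, 6, {"seat", "back", "legs"}))  # False
```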
Evidence/Rosenblatt: In 1959, Frank Rosenblatt invented an ingenious evidence-weighing machine called a Perceptron. It was equipped with a procedure that automatically learned which weights to use: a teacher told it which of the distinctions it made were unacceptable.
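The learning procedure Minsky describes can be sketched with the classic perceptron update rule: whenever the machine misclassifies an example, each weight is nudged toward the teacher's answer. The toy AND task and all names are illustrative, not from the original Perceptron hardware.

```python
# Sketch of Rosenblatt-style perceptron learning: on each misclassified
# example, every weight is nudged toward the teacher's correct answer.

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of (feature_vector, target) with target in {0, 1}."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(w, b, x)   # the teacher's correction
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# A linearly separable toy task: fire only when both features are present.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```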
Problem: MinskyVsRosenblatt/PapertVsRosenblatt: In the book Perceptrons, Seymour Papert and I proved mathematically that no feature-weighing machine can distinguish between the two kinds of patterns [one with connected, the other with disconnected lines]. ((s) > http://aurellem.org/society-of-mind/som-19.7.html, accessed 27.04.2020).
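The connectedness proof is geometric and cannot be reproduced in a few lines, but the smallest concrete instance of the same limitation is parity: no single layer of feature weights can separate XOR. A brute-force search over small integer weights makes the contrast with a separable pattern visible (the search range is an assumption; the impossibility for XOR holds for all real weights).

```python
# Brute-force check: a weighted-threshold machine exists for AND but not
# for XOR (2-bit parity), a minimal instance of the Minsky-Papert
# limitation. The integer search range is illustrative; XOR is provably
# inseparable for any real-valued weights.

def separable(samples, weight_range=range(-3, 4)):
    for w1 in weight_range:
        for w2 in weight_range:
            for b in weight_range:
                if all((w1 * x1 + w2 * x2 + b > 0) == bool(t)
                       for (x1, x2), t in samples):
                    return True
    return False

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(separable(AND))  # True: a weighted threshold exists
print(separable(XOR))  # False: no feature weighing suffices
```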
Relevance: If we changed the values of those evidence weights, this would produce new recognizer-agents. For example, with a negative weight for back, the new agent would reject chairs but would accept benches, stools, or tables.
_____________
Explanation of symbols: Roman numerals indicate the source, arabic numerals indicate the page number. The corresponding books are listed on the right-hand side. ((s)…): comment by the sender of the contribution. The note [Author1]Vs[Author2] or [Author]Vs[term] is an addition from the Dictionary of Arguments. If a German edition is specified, the page numbers refer to that edition.
Minsky, Marvin: The Society of Mind. New York 1985.
Minsky, Marvin: Semantic Information Processing. Cambridge, MA 2003.