Philosophy Dictionary of Arguments


 
Online search: Online search is the process of finding information on the internet. It is typically done with search engines: websites that let users enter keywords or phrases and then return a list of relevant pages. See also Search engines, Internet, Social Media, Google.
_____________
Annotation: The above characterizations of concepts are neither definitions nor exhaustive presentations of the problems related to them. Instead, they are intended to give a short introduction to the contributions below. – Lexicon of Arguments.

 

Peter Norvig on Online Search - Dictionary of Arguments

Norvig I 147
Online search/Norvig/Russell: The term “online” is commonly used in computer science to refer to algorithms that must process input data as they are received, rather than waiting for the entire input data set to become available. >Search algorithms/Norvig.
Offline algorithms: compute a complete solution before setting foot in the real world and then execute the solution.
Online search agents: An online search agent interleaves computation and action: first it takes an action, then it observes the environment and computes the next action.
Online search is a good idea in dynamic or semi-dynamic domains, i.e. domains where there is a penalty for sitting around and computing too long. It is also helpful in nondeterministic domains, because it allows the agent to focus its computational efforts on the contingencies that actually arise rather than on those that might happen but probably won’t.
Online search is a necessary idea for unknown environments: the agent faces an exploration problem and must use its actions as experiments in order to learn enough to make deliberation worthwhile. An online search problem must be solved by an agent executing actions, rather than by pure computation; e.g., the agent cannot determine RESULT(s, a) except by actually being in s and doing a.
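To make the interleaving concrete, here is a minimal Python sketch of such a loop. It is an illustration only, not Russell and Norvig's code; the environment interface (observe, execute) and the helper names choose_action and is_goal are assumptions:

```python
# Minimal sketch of an online search agent's sense-act loop.
# The environment interface (observe/execute) is assumed for illustration.

def online_agent_loop(env, choose_action, is_goal):
    """Interleave computation and action: compute one step, act, observe."""
    state = env.observe()                # initial percept
    while not is_goal(state):
        action = choose_action(state)    # compute only the NEXT action
        env.execute(action)              # act in the real world
        state = env.observe()            # observe the state actually reached
```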
Norvig I 148
Competitive ratio: Typically, the agent’s objective is to reach a goal state while minimizing cost. (Another possible objective is simply to explore the entire environment.) The cost is the total path cost of the path that the agent actually travels. It is common to compare this cost with the path cost of the path the agent would follow if it knew the search space in advance, that is, the actual shortest path (or shortest complete exploration). In the language of online algorithms, the ratio of these two costs is called the competitive ratio; we would like it to be as small as possible.
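A hypothetical worked example (all numbers invented for illustration):

```python
# Hypothetical numbers illustrating the competitive ratio.
traveled_cost = 18   # cost of the path the agent actually took, dead ends included
optimal_cost = 10    # cost of the shortest path, known only in hindsight
competitive_ratio = traveled_cost / optimal_cost
print(competitive_ratio)  # 1.8 -- smaller is better; 1.0 means optimal
```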
Norvig I 149
Environment: After each action, an online agent receives a percept telling it what state it has reached; from this information, it can augment its map of the environment. The current map is used to decide where to go next. (See random walk below.)
Offline search agents: can expand a node in one part of the space and then immediately expand a node in another part of the space, because node expansion involves simulated rather than real actions.
An online algorithm, on the other hand, can discover successors only for a node that it physically occupies. To avoid traveling all the way across the tree to expand the next node, it seems better to expand nodes in a local order. Depth-first search (>Search algorithms/Norvig) has exactly this property because (except when >backtracking) the next node expanded is a child of the previous node expanded. (>Genetic algorithms/Norvig).
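Russell and Norvig give this local-order idea concrete form as the online depth-first search agent (ONLINE-DFS-AGENT). The Python below is a sketch of that algorithm; the actions() and goal_test() callables are assumed interfaces, and, as the book notes, physical backtracking requires reversible actions:

```python
# Sketch of ONLINE-DFS-AGENT (Russell/Norvig). The agent builds a map
# (result) from experience, tries untried actions first, and physically
# backtracks when a state has no untried actions left.

class OnlineDFSAgent:
    def __init__(self, actions, goal_test):
        self.actions = actions            # assumed: state -> list of actions
        self.goal_test = goal_test        # assumed: state -> bool
        self.result = {}                  # learned map: (state, action) -> state
        self.untried = {}                 # state -> actions not yet tried there
        self.unbacktracked = {}           # state -> predecessor states to revisit
        self.s = self.a = None            # previous state and action

    def __call__(self, s1):               # s1 is the percept: the current state
        if self.goal_test(s1):
            return None                   # goal reached: stop
        if s1 not in self.untried:
            self.untried[s1] = list(self.actions(s1))
        if self.s is not None and self.result.get((self.s, self.a)) != s1:
            self.result[(self.s, self.a)] = s1
            self.unbacktracked.setdefault(s1, []).insert(0, self.s)
        if not self.untried[s1]:
            if not self.unbacktracked.get(s1):
                return None               # fully explored: stop
            target = self.unbacktracked[s1].pop(0)
            # choose an action already known to lead back to the target state
            self.a = next(b for b in self.actions(s1)
                          if self.result.get((s1, b)) == target)
        else:
            self.a = self.untried[s1].pop()
        self.s = s1
        return self.a
```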
Norvig I 150
Hill-climbing search: Like depth-first search, hill-climbing search has the property of locality in its node expansions. (…) because it keeps just one current state in memory, hill-climbing search is already an online search algorithm! Unfortunately, it is not very useful in its simplest form because it leaves the agent sitting at local maxima with nowhere to go. Moreover, random restarts cannot be used, because the agent cannot transport itself to a new state.
Random walk: Instead of random restarts, one might consider using a random walk to explore the environment. A random walk simply selects at random one of the available actions from the current state.
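A random-walk step is a one-liner; this sketch again assumes the actions() interface from the sketches above:

```python
import random

# One step of a random walk: choose uniformly among the actions
# available in the current state. actions() is an assumed interface.
def random_walk_step(state, actions):
    return random.choice(actions(state))
```

Provided the state space is finite and safely explorable, such a walk will eventually find a goal or complete its exploration, but the process can be very slow.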
Norvig I 153
Learning: 1. The agents learn a “map” of the environment, more precisely, the outcome of each action in each state, simply by recording each of their experiences.
2. The local search agents acquire more accurate estimates of the cost of each state by using local updating rules, as in LRTA* (see the sketch below).
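LRTA* combines both kinds of learning: it records the outcome of each action (the map) and locally updates a table H of cost-to-goal estimates. The Python below is a sketch of the LRTA*-AGENT scheme the book describes; h (heuristic), c (step cost), actions and goal_test are assumed interfaces:

```python
# Sketch of LRTA*-AGENT (Russell/Norvig). result is the learned map;
# H holds the current cost-to-goal estimates, updated locally.

class LRTAStarAgent:
    def __init__(self, actions, goal_test, h, c):
        self.actions, self.goal_test = actions, goal_test
        self.h, self.c = h, c             # heuristic and step-cost functions
        self.result = {}                  # (state, action) -> observed next state
        self.H = {}                       # state -> learned cost estimate
        self.s = self.a = None            # previous state and action

    def cost(self, s, b):
        """Estimated cost of reaching the goal from s via action b."""
        s1 = self.result.get((s, b))
        if s1 is None:                    # untried action: optimism under uncertainty
            return self.h(s)
        return self.c(s, b, s1) + self.H[s1]

    def __call__(self, s1):               # s1 is the percept: the current state
        if self.goal_test(s1):
            return None                   # stop
        if s1 not in self.H:
            self.H[s1] = self.h(s1)
        if self.s is not None:
            self.result[(self.s, self.a)] = s1            # 1. learn the map
            self.H[self.s] = min(self.cost(self.s, b)     # 2. local update rule
                                 for b in self.actions(self.s))
        self.a = min(self.actions(s1), key=lambda b: self.cost(s1, b))
        self.s = s1
        return self.a
```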

_____________
Explanation of symbols: Roman numerals indicate the source, Arabic numerals indicate the page number. The corresponding books are indicated on the right-hand side. ((s)…): Comment by the sender of the contribution. Translations: Dictionary of Arguments
The notes [Concept/Author], [Author1]Vs[Author2] or [Author]Vs[term], as well as "problem:"/"solution:", "old:"/"new:" and "thesis:", are additions by the Dictionary of Arguments. If a German edition is specified, the page numbers refer to this edition.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ 2010

