Philosophy Dictionary of Arguments

Uncertainty: Uncertainty refers to a lack of sureness or predictability about an event, outcome, or situation, stemming from insufficient information, ambiguity, or complexity. See also Certainty, Knowledge, Complexity, Predictions.
_____________
Annotation: The above characterizations of concepts are neither definitions nor exhaustive presentations of the problems related to them. Instead, they are intended to give a short introduction to the contributions below. – Lexicon of Arguments.

 

AI Research on Uncertainty - Dictionary of Arguments

Norvig I 480
Uncertainty/AI research/Norvig/Russell: Agents may need to handle uncertainty, whether due to partial observability, nondeterminism, or a combination of the two. An agent may never know for certain what state it’s in or where it will end up after a sequence of actions.
Solution: handle uncertainty by keeping track of a belief state—a representation of the set of all possible world states that it might be in—and generating a contingency plan that handles every possible eventuality that its sensors may report during execution.
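To make the belief-state idea concrete, here is a minimal sketch (not from the source; the toy world, action model, and percepts are invented for illustration): the agent keeps a set of all world states it might be in, widens it under a nondeterministic action, and narrows it with each sensor report.

```python
# Minimal belief-state tracking sketch (hypothetical toy world, illustration only).
# A belief state is the set of all world states the agent might be in.

def predict(belief, action, transition):
    """After a nondeterministic action, the agent may be in any state
    reachable from any state it might currently occupy."""
    return {s2 for s in belief for s2 in transition(s, action)}

def update(belief, percept, observe):
    """A sensor report rules out every state inconsistent with it."""
    return {s for s in belief if observe(s) == percept}

# Toy world: positions 0..3 on a line; "right" moves one step or slips in place.
def transition(s, action):
    return {min(s + 1, 3), s} if action == "right" else {s}

def observe(s):
    return "wall" if s == 3 else "clear"

belief = {0, 1, 2, 3}                         # initially: could be anywhere
belief = predict(belief, "right", transition)
belief = update(belief, "clear", observe)     # percept eliminates position 3
print(belief)                                 # -> {0, 1, 2}
```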
Problems: a) When interpreting partial sensor information, a logical agent must consider every logically possible explanation for the observations, no matter how unlikely. This leads to impossibly large and complex belief-state representations.
b) A correct contingent plan that handles every eventuality can grow arbitrarily large and must consider arbitrarily unlikely contingencies.
c) Sometimes there is no plan that is guaranteed to achieve the goal—yet the agent must act. It must have some way to compare the merits of plans that are not guaranteed to succeed.
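One standard way to compare plans that are not guaranteed to succeed is by expected utility, the route the book later takes via decision theory. A minimal sketch follows; the outcome probabilities and utilities are invented numbers, not from the source:

```python
# Ranking non-guaranteed plans by expected utility (illustrative numbers only).

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one plan."""
    return sum(p * u for p, u in outcomes)

# Hypothetical: plan A usually succeeds but can fail badly; plan B is safe but modest.
plan_a = [(0.7, 100), (0.3, -40)]   # 70% success, 30% costly failure
plan_b = [(1.0, 40)]                # guaranteed moderate payoff

print(expected_utility(plan_a))     # 58.0
print(expected_utility(plan_b))     # 40.0 -> prefer plan A under these numbers
```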
Norvig I 505
Uncertainty: Cox’s theorem (1946)(1) shows that any system for uncertain reasoning that meets his set of assumptions is equivalent to probability theory. This gave renewed confidence to those who already favored probability, but others were not convinced, pointing to the assumptions (primarily that belief must be represented by a single number, and thus that the belief in ¬p must be a function of the belief in p). Halpern (1999)(2) describes the assumptions and shows some gaps in Cox’s original formulation. Horn (2003)(3) shows how to patch up the difficulties. Jaynes (2003)(4) has a similar argument that is easier to read. The question of reference classes is closely tied to the attempt to find an inductive logic. >Open Universe/AI research, >Bayesian Networks/Norvig.
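As a brief illustration (standard textbook form, not a quotation from the source): under Cox's desiderata, including the single-number assumption, any consistent plausibility measure can be rescaled to a probability P obeying the usual rules, for example:

```latex
% Standard consequences of Cox's desiderata (textbook form, not a quotation):
% the belief in \neg A is a fixed function of the belief in A, and beliefs
% in conjunctions follow the product rule.
\[
  P(\neg A) = 1 - P(A), \qquad P(A \land B) = P(A \mid B)\, P(B).
\]
```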
Norvig I 547
A. Default reasoning: treats conclusions not as “believed to a certain degree,” but as “believed until a better reason is found to believe something else.”
B. Rule-based approaches: (…) hope to build on the success of logical rule-based systems, but add a sort of “fudge factor” to each rule to accommodate uncertainty. These methods were developed in the mid-1970s and formed the basis for a large number of expert systems in medicine and other areas.
VsRule-based reasoning: problems:
1. Non-locality: in probabilistic systems we need to consider all the evidence, not just the evidence mentioned in the rule at hand.
2. Detachment: in dealing with probabilities, (…) the source of the evidence for a belief is important for subsequent reasoning.
3. No truth-functionality: in probability combination, the probability of a complex sentence cannot always be computed from the probabilities of its components.
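To illustrate the "fudge factor" idea, here is a minimal sketch in the style of MYCIN-era certainty factors; the combination rule shown is the classic one for two positive factors, while the rule strengths are invented for illustration:

```python
# MYCIN-style certainty factors: each rule carries a "fudge factor" in [-1, 1].
# Classic combination rule for two positive factors supporting the same
# conclusion (illustrative strengths, not from the source).

def combine_positive(cf1, cf2):
    """Combine two positive certainty factors for the same hypothesis."""
    return cf1 + cf2 * (1 - cf1)

cf_rule1 = 0.6   # hypothetical rule 1 supports the diagnosis with CF 0.6
cf_rule2 = 0.5   # hypothetical rule 2 independently supports it with CF 0.5

print(combine_positive(cf_rule1, cf_rule2))   # 0.8

# Note: this combination is purely local and truth-functional, which is exactly
# what objections 1-3 above target: it ignores how the pieces of evidence interact.
```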
C. Dempster-Shafer theory: uses interval-valued degrees of belief to represent an agent’s knowledge of the probability of a proposition. >Dempster-Shafer theory/Norvig.
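A minimal sketch of the interval idea (the mass assignment is an invented toy example): from a mass function over sets of hypotheses one computes a belief Bel(A), the mass committed to A, and a plausibility Pl(A), the mass not contradicting A, so the agent's knowledge of the probability of A is the interval [Bel(A), Pl(A)].

```python
# Dempster-Shafer sketch: interval-valued degrees of belief (toy masses only).

def belief(A, masses):
    """Bel(A): total mass committed to subsets of A."""
    return sum(m for S, m in masses.items() if S <= A)

def plausibility(A, masses):
    """Pl(A): total mass on sets consistent with A (intersecting A)."""
    return sum(m for S, m in masses.items() if S & A)

# Hypotheses: the coin is fair (F) or rigged (R); 0.4 of the mass is uncommitted.
masses = {
    frozenset({"F"}): 0.6,
    frozenset({"F", "R"}): 0.4,   # ignorance: mass on the whole frame
}

A = frozenset({"F"})
print(belief(A, masses), plausibility(A, masses))   # 0.6 1.0 -> interval [0.6, 1.0]
```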
D. Vagueness/Fuzzy logic: Probability makes the same ontological commitment as logic: that propositions are true or false in the world, even if the agent is uncertain as to which is the case. Researchers in fuzzy logic have proposed an ontology that allows vagueness: that a proposition can be “sort of” true. Vagueness and uncertainty are in fact orthogonal issues.
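A minimal sketch of standard fuzzy truth-value combination (the membership degrees are invented for illustration): each proposition gets a degree of truth in [0, 1], and the usual connectives are computed with min, max, and complement.

```python
# Standard fuzzy-logic connectives on degrees of truth in [0, 1]
# (the membership degrees below are invented for illustration).

def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

tall = 0.75     # "Nate is tall" is sort-of true
heavy = 0.25    # "Nate is heavy" is mostly false

print(f_and(tall, heavy))   # 0.25 -> "tall and heavy" only to degree 0.25
print(f_not(tall))          # 0.25
```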

1. Cox, R. T. (1946). Probability, frequency, and reasonable expectation. American Journal of Physics, 14(1), 1–13.
2. Halpern, J. Y. (1999). Technical addendum, Cox’s theorem revisited. JAIR, 11, 429–435.
3. Horn, K. V. (2003). Constructing a logic of plausible inference: A guide to Cox’s theorem. IJAR, 34, 3–24.
4. Jaynes, E. T. (2003). Probability Theory: The Logic of Science. Cambridge Univ. Press.

_____________
Explanation of symbols: Roman numerals indicate the source, Arabic numerals indicate the page number. The corresponding books are listed below. ((s)…): Comment by the sender of the contribution. Translations: Dictionary of Arguments
The notes [Concept/Author], [Author1]Vs[Author2] or [Author]Vs[term], as well as "problem:"/"solution:", "old:"/"new:" and "thesis:", are additions from the Dictionary of Arguments. If a German edition is specified, the page numbers refer to this edition.
AI Research
Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ 2010



Ed. Martin Schulz, access date 2024-04-16