Reasoning with Uncertain Knowledge
A. Julian Craddock and Roger A. Browse
Abstract
Heuristics derived from human experimentation are integrated with a knowledge network to
produce a model for reasoning with uncertain information. The believability of knowledge is
determined by collecting reasons for believing and not believing it. These reasons, or
endorsements, are subsequently ordered by their belief, the reliability of their belief, and their
importance relative to one another.
Introduction
The development of mechanisms for the representation of knowledge has always been a central
concern of artificial intelligence. The fundamental criteria for representational schemes have
been adapted from criteria mathematics has set up for logical formalisms: (1) that a translation
into the representation from natural language statements must be possible, and (2) that deduction
and inference of a sort that yields results similar to human conclusions must be possible over the
representation. These criteria are not well met by knowledge representation schemes which are
based on traditional mathematical logic. In particular, human expression is characterized by the
use of measures of uncertainty, and human reasoning often appears not to follow the dictates of
logic and probability (Lindley, 1971).
There are several solutions for the problems of uncertain information. The first is what Cohen
(1983) called the "engineering solution". This solution is used by many models in artificial
intelligence (McDermott and Doyle 1980; McDermott 1980; McCarthy 1979; Reiter 1980). The
solution does not deal with uncertainty as a useful source of information and constraints; instead
it reduces the problem domain in such a manner as to eliminate uncertainty. Unfortunately,
eliminating uncertainty results in a reformulated problem that is, at best, only vaguely related
to the original. The second solution is to make quantitative assumptions about uncertainty using
probabilities and possibilities (Zadeh 1983, 1984; Lee 1969; Edwards 1982; Shortliffe 1975).
The quantitative assumptions often prove to be overly restrictive and lacking in expressive
power. The third is a utility-based solution (Mosteller and Nogee 1951;
Schoemaker 1980), which makes the unpleasant assumption that we can determine
subjective utilities for events and manipulate these utilities in a formal manner. Also, there are no
clear indications that humans attempt to maximize their expected utilities while reasoning.
A more promising approach to the role of uncertainty in human reasoning is presented by
Kahneman and Tversky (1982a, b). Their model indicates that humans employ a set of basic
heuristics which aid in making decisions in conditions of uncertainty. These heuristics enable
humans to constrain problem domains such that the uncertainty becomes manageable but still
useful. Once these heuristics are recognized as part of human reasoning, that reasoning no longer
appears illogical in the sense of being erratic; rather, it appears pragmatic, though difficult to
specify in terms of the inference mechanisms of traditional logic. Humans simplify decision-making
situations and use mental shortcuts to reach solutions that are satisfactory within constraints,
but not necessarily optimal with respect to formal mathematical theory. Kahneman and Tversky
(ibid) provide numerous examples in which subjects reach decisions which run counter to those
reached by mathematical theories.
The research reported in this paper pursues the problem of developing representational and
inference mechanisms for modelling human reasoning under conditions of uncertainty. The
direction we have taken is based on the belief that methods which model the way people think
under uncertainty may be used in the construction of flexible and more understandable
computational reasoning systems. The model developed here involves collecting reasons for
believing or disbelieving propositions, as Cohen (1983) does in his model of endorsement, and
then qualifying these reasons with a measure of belief. In addition, the belief measures can have
varying degrees of certainty. The belief and certainty values can be used: (1) to determine how
supportive a body of evidence is for a particular hypothesis, and (2) to represent evidential
relationships such as conflicts between decisions (Craddock 1986).
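The endorsement scheme just outlined can be sketched in code. The following is a minimal illustration, not the authors' implementation: the class and function names, the [-1, 1] belief scale, and the weighted-sum combination rule are assumptions introduced here to make the idea concrete.

```python
# Minimal sketch of an endorsement-based belief model: reasons for and
# against a proposition, each qualified by a belief measure, the
# reliability (certainty) of that measure, and a relative importance.
# All names and numeric scales here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Endorsement:
    reason: str        # why we believe (or disbelieve) the proposition
    belief: float      # support in [-1.0, 1.0]; negative values disbelieve
    certainty: float   # reliability of the belief measure, in [0.0, 1.0]
    importance: float  # weight relative to the other endorsements


def ranked(endorsements):
    """Order endorsements by strength of belief, then reliability,
    then relative importance, as the abstract describes."""
    return sorted(
        endorsements,
        key=lambda e: (abs(e.belief), e.certainty, e.importance),
        reverse=True,
    )


def support(endorsements):
    """One possible overall measure of how supportive a body of
    evidence is: a certainty- and importance-weighted mean belief."""
    total_weight = sum(e.certainty * e.importance for e in endorsements)
    if total_weight == 0:
        return 0.0
    weighted = sum(e.belief * e.certainty * e.importance
                   for e in endorsements)
    return weighted / total_weight


evidence = [
    Endorsement("corroborated by a second source", 0.8, 0.9, 1.0),
    Endorsement("source has been wrong before", -0.4, 0.6, 0.5),
]
print(support(evidence))  # net positive: the hypothesis is supported
```

The weighted mean is only one candidate combination rule; the point of the endorsement approach is that the individual reasons are retained, so a conflict such as the two endorsements above remains inspectable rather than being collapsed into a single number.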