
Ontological constraints

The main criterion constraining the understanding process performed by the reasoner is the ontology. As understanding progresses, a concept being manipulated will often need to shift out of the ontological grid cell it initially occupies. A vertical or horizontal transition requires more effort than remaining in the same cell; transitioning both vertically and horizontally is more difficult still. In addition, there is a small set of heuristics related to the ontology which serves to bound understanding (see [read:moorman4]).
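The grid can be pictured as domains crossed with entity types, with each concept occupying one cell. The following Python sketch illustrates this view; the domain and type labels, and the function name, are illustrative assumptions rather than ISAAC's actual representation. It classifies a concept shift as intracellular, vertical, horizontal, or both:

```python
# Illustrative sketch of the ontological grid: each concept occupies a
# (domain, entity-type) cell.  The labels below are assumptions.
DOMAINS = ["physical", "mental", "emotional", "social"]
TYPES = ["object", "agent"]

def classify_move(src, dst):
    """Classify a shift between two (domain, entity-type) cells."""
    assert src[0] in DOMAINS and dst[0] in DOMAINS
    assert src[1] in TYPES and dst[1] in TYPES
    domain_change = src[0] != dst[0]   # horizontal: the domain changes
    type_change = src[1] != dst[1]     # vertical: the entity type changes
    if not domain_change and not type_change:
        return "intracellular"         # stays within a single cell
    if domain_change and type_change:
        return "both"                  # hardest kind of movement
    return "horizontal" if domain_change else "vertical"

# Example: a physical object reinterpreted as a physical agent.
print(classify_move(("physical", "object"), ("physical", "agent")))  # vertical
```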

By combining the three basic movement types with the high-level heuristics, I have produced an ordering of the amount of cognitive effort required to manipulate concepts (from easiest to most difficult):

1. Concepts may transition within a single cell.
2. Agents may be treated as objects, and objects may be treated as agents.
3. Concepts may transition vertically according to the modification heuristics.
4. Mental, emotional, and social concepts may transition horizontally among those domains more easily than into the other domains.
5. Physical-domain concepts may transition to other domains (horizontal motion).
6. Concepts from the other domains may transition to the physical domain (horizontal motion).
7. Combinations of 2-5 may occur.
Within this ordering, however, operations which result in minimal change are preferred over those which are more complex.
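One way to read this ordering is as an effort score attached to each kind of manipulation. The sketch below is a minimal illustration under that reading; the numeric scores, dictionary keys, and function names are my own assumptions, not values used inside ISAAC. It ranks candidate interpretations and prefers the one requiring the least effort:

```python
# Hypothetical effort scores mirroring the ordering above (1 = easiest).
EFFORT = {
    "within_cell": 1,
    "agent_object_swap": 2,
    "vertical_modification": 3,
    "mental_emotional_social_horizontal": 4,
    "physical_to_other_horizontal": 5,
    "other_to_physical_horizontal": 6,
}

def total_effort(moves):
    """Combined manipulations (item 7) simply accumulate their efforts."""
    return sum(EFFORT[m] for m in moves)

def prefer(candidates):
    """Pick the interpretation requiring the least cognitive effort."""
    return min(candidates, key=lambda c: total_effort(c["moves"]))

# Example: a within-cell reading beats a horizontal shift out of the
# physical domain.
best = prefer([
    {"name": "reinterpret within the same cell", "moves": ["within_cell"]},
    {"name": "shift a physical concept into another domain",
     "moves": ["physical_to_other_horizontal"]},
])
print(best["name"])  # the within-cell reading requires the least effort
```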

By making use of this set of ontological rules, the creative understanding process can execute without appealing to other higher-level heuristics which might have to be tailored to fit a particular situation. Thus, THE ONTOLOGICAL REQUIREMENTS AND THE READING DOMAIN PROVIDE SUFFICIENT CONSTRAINTS ON THE UNDERSTANDING PROCESS. The ontological bounding process also allows me to operationalize the earlier idea of suspension of disbelief. The ``level'' of disbelief suspension required can be viewed in terms of how much ontological transitioning is needed to make a story ``fit'' into the reader's background knowledge. More complex transitions mean that more suspension of disbelief is required. Because some transitions are too severe to be allowed, this also captures the fact that there is a limit to how much belief a reasoner is willing to sacrifice.
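To make the operationalization concrete, the following sketch maps total transition effort onto a coarse level of disbelief suspension, with transitions beyond a cutoff rejected outright. The numeric scale, the thresholds, and the cutoff value are illustrative assumptions, not parameters of ISAAC:

```python
# Hypothetical mapping from transition effort to the suspension of
# disbelief a reader must grant.  All numbers here are assumptions.
MAX_TOLERABLE_EFFORT = 12   # beyond this, the reader refuses to suspend belief

def disbelief_level(effort):
    """Return a coarse suspension-of-disbelief level, or None if too severe."""
    if effort > MAX_TOLERABLE_EFFORT:
        return None          # transition too severe to be allowed
    if effort <= 1:
        return "none"        # story already fits the reader's knowledge
    if effort <= 4:
        return "mild"
    return "strong"

print(disbelief_level(1))    # none
print(disbelief_level(6))    # strong
print(disbelief_level(20))   # None -- reader unwilling to sacrifice belief
```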

Several examples illustrate these ideas. First, consider the example sentence ``John was a bear.'' The sentence has a number of possible interpretations, as described in Chapter 4.

Some of these are more probable than others; from an ontological perspective, if John is simply the name of a particular bear, then no movement in the ontological grid is required. This is the version that ISAAC prefers if no additional information from the story is provided.

A second example comes from the Meta-AQUA system ([learn:cox1]), which reads a story involving a drug-sniffing dog. Meta-AQUA initially knows only that dogs bark at agents which threaten them; in the story, however, a dog is barking at a suitcase. In ISAAC, the system is presented with two possibilities: its knowledge of dogs is wrong, or its knowledge of suitcases is. The first involves altering an existing physical agent to create a variant of it, an intracellular movement. The second involves shifting a physical object into the physical agent cell, a vertical movement. The intracellular movement is preferred; therefore, my system would prefer the understanding that dogs also bark at drugs.
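The choice ISAAC faces here can be sketched as a direct effort comparison. The candidate descriptions and the effort values below are illustrative assumptions that simply mirror the ordering given earlier:

```python
# Sketch of the dog/suitcase choice: a within-cell modification is
# cheaper than a vertical object-to-agent shift (effort values assumed).
candidates = {
    "dogs also bark at drugs (modify the dog concept in place)": 1,  # intracellular
    "suitcases can threaten dogs (object becomes agent)":        3,  # vertical
}
preferred = min(candidates, key=candidates.get)
print(preferred)   # the intracellular modification wins
```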

In the story Men Are Different, a robotic archaeologist studies the destroyed civilization of mankind; the story is presented as a first-person narrative. ISAAC knows that narrators, archaeologists, and protagonists are all human and that robots are industrial tools, yet the narrator, archaeologist, and protagonist of this story is known to be a robot. ISAAC can elect to create a new type of robot which embodies agent-like aspects, or it can change the definitions of narrators, archaeologists, protagonists, and the actions in which they may participate. Creating a single new robot concept represents a smaller change than altering the definitions of all the other concepts. As a result of the minimal-change heuristic, then, the use of base-constructive analogy to create the intelligent-robot concept is the preferred option.
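The minimal-change comparison can be sketched the same way; the counts below are illustrative assumptions standing in for the number of concept definitions each option would disturb:

```python
# Number of existing definitions each option would have to change (assumed).
options = {
    "base-constructive analogy: one new intelligent-robot concept": 1,
    "redefine narrator, archaeologist, protagonist, and their actions": 4,
}
preferred = min(options, key=options.get)
print(preferred)   # the single new concept is the smaller change
```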

The final example involves the story Zoo, in which the reader is presented with an intergalactic zoo that travels from planet to planet, giving the inhabitants of those planets a chance to view exotic creatures. At the end of the story, however, the reader is shown the true nature of the intergalactic ship: it is an opportunity for the ``creatures'' on the ship to visit exotic planets, protected from the dangerous inhabitants by the cages they are in. To understand the new zoo, the system draws an analogy between the known zoo and the novel one. The result is simply a shift from one physical object to another.

