[3 Response by Georgeff]
--------------------------------------------------------------------------------------------------

Let us first consider so-called Beliefs. In AI terms, Beliefs represent knowledge of the world. However, in computational terms, Beliefs are just some way of representing the state of the world, be it as the value of a variable, a relational database, or symbolic expressions in predicate calculus.

Desires (or, more commonly though somewhat loosely, Goals) form another essential component of system state. Again, in computational terms, a Goal may simply be the value of a variable, a record structure, or a symbolic expression in some logic. The important point is that a Goal represents some desired end state. Conventional computer software is "task oriented" rather than "goal oriented"; that is, each task (or subroutine) is executed without any memory of why it is being executed. This means that the system cannot automatically recover from failures (unless this is explicitly coded by the programmer) and cannot discover and make use of opportunities as they unexpectedly present themselves.

To act effectively in a changing world, a goal-oriented system must also commit to the plans or procedures it selects for achieving its goals, rather than reconsidering its options at every moment. These committed plans or procedures are called, in the AI literature, Intentions, and they represent the third necessary component of system state. Computationally, Intentions may simply be a set of executing threads in a process that can be appropriately interrupted upon receiving feedback from the possibly changing world (see the sketch at the end of this section).

[4 Response by Pollack]
--------------------------------------------------------------------------------------------------

It has been generally accepted for many years that agents cannot possibly perform optimizations over the space of all possible courses of action [17]. Bratman's Claim is aimed precisely at helping reduce that space, making the required reasoning feasible.
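To make Georgeff's computational reading concrete, the following is a minimal sketch in Python of the three state components he describes. Everything in it is illustrative: the names Beliefs, Goal, Intention, and bdi_loop, the dictionary representation of world state, and the single-threaded loop standing in for his "interruptible executing threads" are assumptions of this sketch, not features of any particular BDI system.

from dataclasses import dataclass
from typing import Any, Callable, Dict, List

# Beliefs: just some representation of world state -- here, a plain dictionary.
Beliefs = Dict[str, Any]

@dataclass
class Goal:
    """A desired end state, kept explicitly so the agent remembers why it acts."""
    name: str
    satisfied: Callable[[Beliefs], bool]

@dataclass
class Intention:
    """A committed plan: the steps the agent has decided to execute for a goal."""
    goal: Goal
    steps: List[Callable[[Beliefs], None]]  # each step acts on the world/beliefs

def bdi_loop(
    beliefs: Beliefs,
    goals: List[Goal],
    plan_for: Callable[[Goal, Beliefs], List[Callable[[Beliefs], None]]],
    max_cycles: int = 100,
) -> None:
    """Single-threaded stand-in for a set of interruptible executing threads.

    One plan step runs per cycle; between steps the agent re-examines its
    beliefs, so feedback from a changing world can interrupt a committed plan.
    (In a real agent, sensing would also update `beliefs` each cycle.)
    """
    intentions: List[Intention] = []
    for _ in range(max_cycles):
        # Commit to a plan for any unsatisfied goal that lacks one.
        for goal in goals:
            if not goal.satisfied(beliefs) and all(i.goal is not goal for i in intentions):
                intentions.append(Intention(goal, plan_for(goal, beliefs)))
        if not intentions:
            return  # every goal is satisfied
        current = intentions[0]
        if current.goal.satisfied(beliefs) or not current.steps:
            # Goal already holds (an unexpected opportunity) or the plan is
            # exhausted (a failure): drop the intention; re-planning happens
            # automatically on the next cycle because the goal is remembered.
            intentions.pop(0)
            continue
        current.steps.pop(0)(beliefs)  # execute one step, then observe again

Because each Intention carries the Goal it serves, the loop can drop a plan whose end state already holds and re-plan when a plan runs out without success; these are precisely the two behaviors (exploiting unexpected opportunities and recovering from failures) that Georgeff says task-oriented software cannot exhibit without explicit programming.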