[TOC]

  1. Title: Intention Is Choice With Commitment
  2. Author: Philip R. Cohen and Hector J. Levesque
  3. Publish Year: 1990
  4. Review Date: Tue, Jan 30, 2024
  5. URL: https://www.sciencedirect.com/science/article/pii/0004370290900555

Summary of paper

Contribution

This paper examines the principles governing the rational balance among an agent’s beliefs, goals, actions, and intentions, with implications both for designing artificial agents and for a theory of human action. It focuses on clarifying when an agent may abandon its goals and how strongly it is committed to them. The formalism captures several crucial aspects of intention, including an analysis of Bratman’s three characteristic functional roles of intention and of how agents can avoid intending all the unintended consequences of their actions. The paper also discusses how intentions can be adopted relative to an agent’s relevant beliefs and other intentions or goals, and it introduces a preliminary notion of interpersonal commitment by relating one agent’s intentions to its beliefs about another agent’s intentions or beliefs.

Some key terms

lack of commitment vs. overcommitment in AI agents

Logic for describing the mental states of users as well as agents

Intention in planning agents

If asked, the designer of a planning system may say that the notion of intention is defined operationally: A planning system’s intentions are no more than the contents of its plans. As such, intentions are representations of possible actions the system may take to achieve its goal(s).
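A minimal sketch of this operational view, in hypothetical Python (the class and method names here are my own, not from the paper or any particular planning system):

```python
from dataclasses import dataclass, field


@dataclass
class Plan:
    """A plan: a goal plus the steps intended to achieve it."""
    goal: str
    steps: list[str] = field(default_factory=list)


@dataclass
class PlanningAgent:
    plans: list[Plan] = field(default_factory=list)

    def intentions(self) -> list[str]:
        # Operational definition: the system's intentions are
        # nothing more than the contents of its current plans.
        return [step for plan in self.plans for step in plan.steps]


agent = PlanningAgent(plans=[Plan("hard-boil an egg",
                                  ["get an egg", "boil water", "cook for 10 minutes"])])
print(agent.intentions())
# ['get an egg', 'boil water', 'cook for 10 minutes']
```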

Intention in philosophical theory

intention vs belief

Bratman’s three functional roles for intention

  1. Intentions normally pose problems for the agent; the agent needs to determine a way to achieve them.
  2. Intentions serve as a “screen of admissibility” for the adoption of other intentions. Unlike desires, which can be inconsistent, agents typically do not adopt intentions that they believe conflict with their existing present- and future-directed intentions. For instance, if an agent intends to hard-boil an egg and is aware that they have only one egg and cannot obtain more in time, it would be irrational for them to simultaneously intend to make an omelette, as the two intentions conflict.
  3. Agents “track” the success of their attempts to achieve their intentions. Not only do agents care whether their attempts succeed, but they are disposed to replan to achieve the intended effects if earlier attempts fail. (A minimal code sketch of all three roles follows this list.)
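A hypothetical sketch of an agent loop exhibiting the three roles (all names and the conflict table are invented for illustration; the paper itself gives a logical formalism, not code):

```python
def conflicts(new, existing, belief_conflicts):
    """Role 2: screen of admissibility. A candidate intention is
    inadmissible if, given the agent's beliefs, it conflicts with an
    already-adopted intention (e.g. 'hard-boil an egg' vs. 'make an
    omelette' when only one egg is believed available)."""
    return any(frozenset((new, old)) in belief_conflicts for old in existing)


class Agent:
    def __init__(self, belief_conflicts):
        self.belief_conflicts = belief_conflicts  # pairs believed incompatible
        self.intentions = []

    def adopt(self, intention):
        if not conflicts(intention, self.intentions, self.belief_conflicts):
            self.intentions.append(intention)

    def pursue(self, intention, plan_for, execute, max_attempts=3):
        # Role 1: an adopted intention poses a planning problem.
        # Role 3: track the attempt; replan and retry on failure.
        for _ in range(max_attempts):
            plan = plan_for(intention)
            if execute(plan):
                return True
        return False


agent = Agent(belief_conflicts={frozenset(("hard-boil an egg", "make an omelette"))})
agent.adopt("hard-boil an egg")
agent.adopt("make an omelette")   # screened out as inconsistent
print(agent.intentions)           # ['hard-boil an egg']
```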

not giving up too soon

intention as a composite concept
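The paper’s thesis in one line: intention is not a primitive attitude but is built from simpler ones, roughly a persistent goal (next term) of having done an action while believing one was about to do it. A rough rendering of the paper’s definition of intending an action (reconstructed from memory, so treat the exact form as approximate):

$$
(\text{INTEND}\; x\; a) \equiv \text{P-GOAL}\; x\; \big(\text{DONE}\; x\; [(\text{BEL}\; x\; (\text{HAPPENS}\; a))?\,;\, a]\big)
$$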

Persistent goal
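This is the paper’s central formal device. Informally, an agent has a persistent goal p when it believes p is currently false, has chosen p to hold later, and will not drop that choice until it believes p has been achieved or believes p is impossible. A rough rendering of the definition (again reconstructed from memory, so the exact notation may differ):

$$
(\text{P-GOAL}\; x\; p) \equiv (\text{BEL}\; x\; \neg p) \,\wedge\, (\text{GOAL}\; x\; (\text{LATER}\; p)) \,\wedge\, \text{BEFORE}\big((\text{BEL}\; x\; p) \vee (\text{BEL}\; x\; \Box\neg p),\; \neg(\text{GOAL}\; x\; (\text{LATER}\; p))\big)
$$

The only conditions under which the agent may give up the goal are believing it true or believing it forever false; this blocks giving up too soon, while relativised variants with extra escape conditions address overcommitment.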

idealisation setting

Summary