[TOC]

  1. Title: Robots That Ask for Help: Uncertainty Alignment for Large Language Model Planners
  2. Author: Allen Z. Ren et al.
  3. Publish Date: 4 Sep 2023
  4. Review Date: Fri, Jan 26, 2024
  5. url: https://arxiv.org/abs/2307.01928v2

Summary of paper


Motivation

Contribution

Some key terms

Ambiguity in NL

The issue of extensive prompting

Formalize the challenge

(i) calibrated confidence: the robot should seek sufficient help to ensure a statistically guaranteed level of task success specified by the user, and

(ii) minimal help: the robot should minimize the overall amount of help it seeks by narrowing down possible ambiguities in a task.
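
The first desideratum can be written as a standard conformal-style coverage guarantee; the notation below is a sketch rather than the paper's exact formulation:

```latex
% With a user-specified error level \epsilon, the prediction set C(x) of candidate
% actions should contain the correct action y with high probability:
\mathbb{P}\bigl( y_{\mathrm{test}} \in C(x_{\mathrm{test}}) \bigr) \;\ge\; 1 - \epsilon
```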

Core method

CP: conformal prediction

environment setting

Calibrated confidence

robots that ask for help

Goal uncertainty alignment

Conformal Prediction
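
A minimal sketch of how split conformal prediction can turn LLM multiple-choice probabilities into a calibrated option set; the function names, calibration numbers, and epsilon value are illustrative assumptions, not the paper's code:

```python
import numpy as np

def calibrate_threshold(cal_true_option_probs, epsilon):
    """Split conformal calibration.

    cal_true_option_probs: LLM probability assigned to the correct
        multiple-choice option in each calibration scenario.
    epsilon: user-specified error rate (e.g. 0.2 for >= 80% coverage).
    Returns the nonconformity threshold q_hat.
    """
    scores = 1.0 - np.asarray(cal_true_option_probs)  # nonconformity = 1 - p(true option)
    n = len(scores)
    level = np.ceil((n + 1) * (1 - epsilon)) / n      # finite-sample correction
    return np.quantile(scores, min(level, 1.0), method="higher")

def prediction_set(option_probs, q_hat):
    """Keep every candidate option whose nonconformity score clears the threshold."""
    return [i for i, p in enumerate(option_probs) if 1.0 - p <= q_hat]

# Toy usage: calibrate on scenarios with known correct options, then, at test time,
# ask for help whenever the prediction set is not a singleton.
q_hat = calibrate_threshold(
    cal_true_option_probs=[0.9, 0.55, 0.95, 0.4, 0.8, 0.45, 0.85, 0.6],
    epsilon=0.2,
)
options = prediction_set([0.45, 0.42, 0.08, 0.05], q_hat)  # two plausible options survive
ask_for_help = len(options) != 1
```

The key point is that q_hat is chosen on held-out calibration scenarios, which gives the (i) calibrated-confidence guarantee, while the size of the prediction set directly controls how often help is requested (ii).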

Multi-step uncertainty alignments
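
For multi-step tasks, one plausible way to reuse the same machinery (an illustrative assumption, not necessarily the paper's exact construction) is to calibrate on a per-plan score such as the weakest step:

```python
def sequence_score(step_true_option_probs):
    """Confidence of an entire plan, taken here as its weakest step (an assumption)."""
    return min(step_true_option_probs)

# Calibration episodes: per-step probabilities of the correct option for each
# multi-step task. Calibrate exactly as in the single-step sketch above, but on
# these sequence-level scores, then query the user at any step whose option set
# is not a singleton.
multi_step_cal = [[0.9, 0.8, 0.95], [0.7, 0.85, 0.6], [0.5, 0.9, 0.75]]
sequence_scores = [sequence_score(episode) for episode in multi_step_cal]
```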

Experiments

introduce three settings based on different types of ambiguity in the user instruction: attribute, numeric, and spatial

Results


Potential future work

Ask what kind of relationship the predicate should hold within the action -> resolve the ambiguity at the level of the action's structure