Improving Knowledge Extraction from LLMs for Robotic Task Learning through Agent Analysis

06/11/2023
by James R. Kirk, et al.

Large language models (LLMs) offer significant promise as a knowledge source for robotic task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM but alone is insufficient for acquiring relevant, situationally grounded knowledge for an embodied robotic agent learning novel tasks. We describe a cognitive-agent approach that extends and complements prompt engineering, mitigating its limitations, and thus enabling a robot to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous robot, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how a robot, by retrieving and evaluating a breadth of responses from the LLM, can achieve >75% task completion without user oversight. The approach achieves 100% task completion when human oversight (such as an indication of preference) is provided, while greatly reducing how much human oversight is needed.
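The evaluate/repair/select strategy outlined above can be illustrated with a small sketch: sample several candidate task steps from the LLM, keep only those grounded in the robot's embodiment and environment, attempt a simple repair on the rest, and fall back to human oversight when nothing survives. The code below is a minimal, assumption-laden illustration rather than the authors' system; the grounding check, the synonym-based repair rule, and every name in it are hypothetical.

```python
"""Hedged sketch of an evaluate / repair / select loop over LLM candidates.
All sets, rules, and names here are illustrative assumptions."""

from __future__ import annotations

# Toy grounding context: what the robot can currently do and perceive.
KNOWN_ACTIONS = {"pick-up", "put-down", "open", "close"}
KNOWN_OBJECTS = {"mug", "table", "drawer"}

# Toy repair rule: map LLM vocabulary onto the robot's own object names.
SYNONYMS = {"cup": "mug", "desk": "table", "cabinet": "drawer"}


def evaluate(action: str, obj: str) -> bool:
    """A candidate is viable only if it is grounded in the robot's
    embodiment (known action) and environment (perceivable object)."""
    return action in KNOWN_ACTIONS and obj in KNOWN_OBJECTS


def repair(action: str, obj: str) -> tuple[str, str]:
    """Attempt a minimal repair by substituting a known synonym."""
    return action, SYNONYMS.get(obj, obj)


def select(candidates: list[tuple[str, str]]) -> tuple[str, str] | None:
    """Return the first viable candidate, repairing when needed.
    Returning None signals that human oversight is required."""
    for action, obj in candidates:
        if evaluate(action, obj):
            return action, obj
        repaired = repair(action, obj)
        if evaluate(*repaired):
            return repaired
    return None


if __name__ == "__main__":
    # Candidates as might be parsed from several LLM responses to one prompt.
    llm_candidates = [("grasp", "cup"), ("pick-up", "cup"), ("pick-up", "mug")]
    print(select(llm_candidates))  # expected: ('pick-up', 'mug')
```

In this toy run a repaired candidate (mapping the LLM's "cup" onto the robot's known "mug") is selected without intervention; a None result would be the point at which the agent asks the user for a preference or confirmation.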

research · 09/13/2022
Improving Language Model Prompting in Support of Semi-autonomous Task Learning
Language models (LLMs) offer potential as a source of knowledge for agen...

research · 09/17/2021
Language Models as a Knowledge Source for Cognitive Agents
Language models (LMs) are sentence-completion engines trained on massive...

research · 08/19/2022
Evaluating Diverse Knowledge Sources for Online One-shot Learning of Novel Tasks
Online autonomous agents are able to draw on a wide variety of potential...

research · 09/02/2023
Developmental Scaffolding with Large Language Models
Exploration and self-observation are key mechanisms of infant sensor...

research · 09/21/2023
Evaluating Large Language Models for Document-grounded Response Generation in Information-Seeking Dialogues
In this paper, we investigate the use of large language models (LLMs) li...

research · 07/09/2021
A Comparison of Contextual and Non-Contextual Preference Ranking for Set Addition Problems
In this paper, we study the problem of evaluating the addition of elemen...

research · 05/12/2018
Generating Rescheduling Knowledge using Reinforcement Learning in a Cognitive Architecture
In order to reach higher degrees of flexibility, adaptability and autono...
