Learning a perception and reasoning module that lets a robotic assistant plan the steps of a complex task from natural language instructions typically requires a large amount of free-form language annotation, especially for short, high-level instructions. To reduce annotation cost, large language models (LLMs) can be used as planners with only a small amount of data. However, when elaborating the steps, even state-of-the-art LLM-based planners rely mostly on linguistic common sense and often neglect the state of the environment at the time the command is received, resulting in inappropriate plans. To generate plans grounded in the environment, we propose FLARE (Few-shot Language with environmental Adaptive Replanning Embodied agent), which improves task planning by using both the language command and environmental perception. Since language instructions often contain ambiguities or incorrect expressions, we additionally propose to correct such mistakes using the agent's visual cues. Thanks to the visual cues, the proposed scheme requires only a few language pairs and significantly outperforms state-of-the-art approaches, more than doubling the success rate in unseen environments of the ALFRED benchmark (16.42% → 40.88%).
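The sketch below is a minimal illustration of the idea in the abstract, not the authors' implementation: the few-shot planner prompt combines the language instruction with the objects the agent currently perceives, and a replanning step corrects plan arguments that conflict with what the agent actually sees. The function names (`build_prompt`, `replan_with_visual_cues`), the prompt layout, and the string-overlap correction heuristic are illustrative assumptions.

```python
from typing import List

def build_prompt(instruction: str, visible_objects: List[str],
                 few_shot_examples: List[str]) -> str:
    """Build a few-shot prompt grounded in both language and perception.

    The prompt is then passed to an LLM planner (call not shown here).
    """
    examples = "\n\n".join(few_shot_examples)   # a handful of annotated instruction/plan pairs
    objects = ", ".join(visible_objects)        # environment state at command reception
    return (f"{examples}\n\n"
            f"Visible objects: {objects}\n"
            f"Instruction: {instruction}\n"
            f"Plan:")

def replan_with_visual_cues(plan: List[str], visible_objects: List[str]) -> List[str]:
    """Correct ambiguous or wrong object mentions using the agent's visual cues."""
    corrected = []
    for step in plan:
        action, _, target = step.partition(" ")
        if target and target not in visible_objects:
            # Hypothetical correction: replace the unseen target with the visible
            # object whose name overlaps most with the mentioned one.
            target = max(visible_objects,
                         key=lambda o: len(set(o.lower()) & set(target.lower())),
                         default=target)
        corrected.append(f"{action} {target}".strip())
    return corrected

# Example usage with toy data:
plan = ["PickUp Cup", "Put Sink"]
print(replan_with_visual_cues(plan, ["Mug", "SinkBasin", "Fridge"]))
```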
@inproceedings{kim2024flare,
author = {Kim, Taewoong and Kim, Byeonghwi and Choi, Jonghyun},
title = {Multi-Modal Grounded Planning and Efficient Replanning For Learning Embodied Agents with A Few Examples},
booktitle = {AAAI},
year = {2025},
}