RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation

1University of Illinois Urbana-Champaign, 2Amazon, 3Peking University, 4National University of Singapore

RoboEXP constructs the Action-Conditioned Scene Graph (ACSG) during interactive exploration,
which the robot then uses to complete the downstream task of setting the table.


RoboEXP explores various real-world settings in a zero-shot manner,
demonstrating its effectiveness in exploring and modeling environments it has never seen before.

Abstract

Robots need to explore their surroundings to adapt to and tackle tasks in unknown environments. Prior work has proposed building scene graphs of the environment but typically assumes a static environment, omitting regions that require active interaction. This severely limits the ability of such methods to handle more complex tasks in household and office environments: before setting up a table, robots must explore drawers and cabinets to locate all utensils and condiments. In this work, we introduce the novel task of interactive scene exploration, wherein robots autonomously explore environments and produce an action-conditioned scene graph (ACSG) that captures the structure of the underlying environment. The ACSG accounts for both low-level information, such as geometry and semantics, and high-level information, such as the action-conditioned relationships between different entities in the scene. To this end, we present the Robotic Exploration (RoboEXP) system, which incorporates a large multimodal model (LMM) and an explicit memory design to enhance its capabilities. The robot reasons about what to explore and how to explore it, accumulating new information through the interaction process and incrementally constructing the ACSG. We apply our system across various real-world settings in a zero-shot manner, demonstrating its effectiveness in exploring and modeling environments it has never seen before. Leveraging the constructed ACSG, we illustrate the effectiveness and efficiency of our RoboEXP system in facilitating a wide range of real-world manipulation tasks involving rigid and articulated objects, nested objects like Matryoshka dolls, and deformable objects like cloth.



Video

Interactive Exploration




We formulate interactive exploration as an action-conditioned 3D scene graph (ACSG) construction and traversal problem. Our ACSG is an actionable, spatial-topological representation that models objects and their interactive and spatial relations in a scene, capturing both the high-level graph (c) and corresponding low-level memory (b).
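
To make this representation concrete, the following is a minimal, illustrative sketch of an ACSG-style data structure in Python. The node and edge fields (names, point clouds, semantic labels, action annotations) and the helper that collects the actions needed to reach an object are assumptions drawn from the description above, not the released RoboEXP code.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Node:
    """An object in the scene, backed by low-level memory."""
    name: str                                        # e.g. "cabinet", "condiment"
    point_cloud: list = field(default_factory=list)  # low-level geometry
    semantic_label: str = ""                         # low-level semantics


@dataclass
class Edge:
    """A directed relation from parent to child."""
    parent: str
    child: str
    relation: str                 # spatial: "on", "inside", "belongs_to", ...
    action: Optional[str] = None  # set if the child is only reachable after
                                  # an interaction, e.g. "open_door"


class ACSG:
    """Action-conditioned scene graph: objects plus their spatial and
    action-conditioned relations."""

    def __init__(self):
        self.nodes = {}
        self.edges = []

    def add_node(self, node: Node):
        self.nodes[node.name] = node

    def add_edge(self, edge: Edge):
        self.edges.append(edge)

    def actions_to_reach(self, target: str) -> list:
        """Walk up the graph and collect the interactions a robot must
        perform before `target` becomes reachable."""
        actions, current = [], target
        while True:
            incoming = [e for e in self.edges if e.child == current]
            if not incoming:
                return list(reversed(actions))
            edge = incoming[0]
            if edge.action is not None:
                actions.append(edge.action)
            current = edge.parent

For example, a condiment stored inside a closed cabinet would be a node linked to the cabinet node by an "inside" edge annotated with an "open_door" action, so actions_to_reach("condiment") would return ["open_door"].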



RoboEXP



Our RoboEXP system comprises four modules. With RGB-D observations as input, our perception module (a) and memory module (b) construct our ACSG by leveraging vision foundation models and our explicit memory design. The decision-making module (c) then uses the ACSG to generate exploration plans, and the action module (d) executes them.
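
Below is a hedged sketch of the closed-loop flow this caption describes (perception, memory, decision-making, action). The module interfaces (capture, detect_objects, update, propose_actions, execute) are hypothetical placeholders rather than the actual RoboEXP API.

def interactive_exploration(camera, perception, memory, decision, action,
                            max_steps=50):
    """Illustrative exploration loop: observe, update the ACSG, let the
    LMM-guided decision module pick what/how to explore, then act."""
    for _ in range(max_steps):
        rgbd = camera.capture()                       # RGB-D observation
        detections = perception.detect_objects(rgbd)  # vision foundation models
        acsg = memory.update(detections)              # merge into explicit memory
        plan = decision.propose_actions(acsg)         # exploration plan
        if not plan:                                  # nothing left to explore
            break
        for skill in plan:
            action.execute(skill)                     # e.g. open a door
    return memory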

Exploration with Human Interventions

Our RoboEXP system can handle human interventions during the exploration process. It automatically detects new objects and explores them when necessary, and it also tracks hand positions to identify areas that need to be re-explored.

"Position a cabinet on the table."

"Take the orange out of the left door and place a coke in the right door."

More Results with Diverse Settings

We show additional results with varied numbers of objects, object types, and layouts across our experimental settings.

BibTeX

@article{jiang2024roboexp,
  title={RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation},
  author={Jiang, Hanxiao and Huang, Binghao and Wu, Ruihai and Li, Zhuoran and Garg, Shubham and Nayyeri, Hooshang and Wang, Shenlong and Li, Yunzhu},
  journal={arXiv preprint arXiv:2402.15487},
  year={2024}
}