3D-GRAND: A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination


Paper | Project Page | Hugging Face

This repository is the official implementation of

3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs

Affiliations: ¹University of Michigan, ²New York University

*Equal contribution

Updates🔥

  • Our demo code for 3D-GRAND is released, and you can check out our paper as well!

Overview 📖

The integration of language and 3D perception is crucial for developing embodied agents and robots that comprehend and interact with the physical world. While large language models (LLMs) have demonstrated impressive language understanding and generation capabilities, their adaptation to 3D environments (3D-LLMs) remains in its early stages. A primary challenge is the absence of large-scale datasets that provide dense grounding between language and 3D scenes. In this paper, we introduce 3D-GRAND, a pioneering large-scale dataset comprising 40,087 household scenes paired with 6.2 million densely-grounded scene-language instructions. Our results show that instruction tuning with 3D-GRAND significantly reduces hallucinations and enhances the grounding capabilities of 3D-LLMs compared to models trained without dense grounding. As part of our contributions, we propose a comprehensive benchmark 3D-POPE to systematically evaluate hallucination in 3D-LLMs, enabling fair comparisons among future models. Our experiments underscore a scaling effect between dataset size and 3D-LLM performance, emphasizing the critical role of large-scale 3D-text datasets in advancing embodied AI research. Through 3D-GRAND and 3D-POPE, we aim to equip the embodied AI community with essential resources and insights, setting the stage for more reliable and better-grounded 3D-LLMs.

In this repository, we release demo code for a model trained with 3D-GRAND.
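To make "densely-grounded" concrete, the sketch below shows one hypothetical form such a sample could take: free-form text whose noun phrases are tied to object instances (IDs and boxes) in the 3D scene. The field names and values are purely illustrative and are not the actual 3D-GRAND schema.

# Hypothetical densely-grounded sample; field names and values are illustrative,
# NOT the actual 3D-GRAND schema.
sample = {
    "scene_id": "scene_00001",
    "text": "Pick up the [mug] on the [table] next to the [sofa].",
    "groundings": {
        # each bracketed phrase is linked to an object instance in the scene
        "mug":   {"object_id": 12, "bbox": [0.40, 1.10, 0.80, 0.12, 0.12, 0.15]},
        "table": {"object_id": 3,  "bbox": [0.50, 1.00, 0.60, 1.20, 0.80, 0.70]},
        "sofa":  {"object_id": 7,  "bbox": [2.00, 1.00, 0.40, 2.10, 0.90, 0.80]},
    },
}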

Quick Start🔨

1. Clone Repo

git clone https://github.com/3d-grand/3d_grand_demo.git
cd 3d_grand_demo

2. Prepare Environment

conda create -n 3d_grand_hf python=3.10 -y
conda activate 3d_grand_hf
pip install -r demo/requirements.txt
pip install spaces
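As a quick sanity check, you can confirm that the demo's key dependencies import correctly. This assumes Gradio is pulled in by demo/requirements.txt; spaces was installed explicitly above.

# check_env.py -- optional sanity check for the demo environment
import gradio
import spaces  # noqa: F401  (imported only to confirm it is installed)

print("gradio version:", gradio.__version__)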

3. Download Checkpoints


git lfs install
git clone https://huggingface.co/spaces/jedyang97/3D-GRAND
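If you prefer not to use git-lfs, the same Hugging Face Space (demo code and checkpoints) can usually be fetched with the huggingface_hub client. This is an alternative sketch, not the documented workflow; it assumes huggingface_hub is installed (pip install huggingface_hub).

# Alternative to git-lfs: download the demo Space with huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="jedyang97/3D-GRAND",
    repo_type="space",  # the checkpoints are hosted in a Space, not a model repo
)
print("Downloaded to:", local_dir)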

🤗 Gradio Demo

We provide a Gradio demo with a web UI to showcase our method.

gradio 3d-grand-demo.py

Alternatively, you can try the online demo hosted on Hugging Face: https://huggingface.co/spaces/jedyang97/3D-GRAND.
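For orientation, a Gradio demo script is typically structured like the minimal skeleton below. This is an illustrative sketch only, not the contents of 3d-grand-demo.py; the real demo loads the model and renders the grounded objects in the scene.

# Minimal illustrative Gradio skeleton -- NOT the actual 3d-grand-demo.py.
import gradio as gr

def answer(scene_id: str, question: str) -> str:
    # placeholder: a real demo would run the 3D-LLM on the selected scene
    return f"[demo] {scene_id}: {question}"

with gr.Blocks() as demo:
    scene = gr.Textbox(label="Scene ID")
    question = gr.Textbox(label="Question")
    output = gr.Textbox(label="Answer")
    gr.Button("Ask").click(answer, inputs=[scene, question], outputs=output)

if __name__ == "__main__":
    demo.launch()  # `gradio 3d-grand-demo.py` also works and enables auto-reload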

Citation 🖋️

If you find our repo useful for your research, please consider citing our paper:

@misc{3d_grand,
    title={3D-GRAND: A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination},
    author={Jianing Yang and Xuweiyi Chen and Nikhil Madaan and Madhavan Iyengar and Shengyi Qian and David F. Fouhey and Joyce Chai},
    year={2024},
    eprint={2406.05132},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
