- Our study, PrimA6D, has been accepted to RA-L.
- Its extended version, PrimA6D++, has also been accepted to RA-L.
- Method Summary
- Source Code Release for Pose Estimation Method
- Source Code Release for Multi-Object Pose Optimization Method
-
PrimA6D (RA-L 2020)
- PrimA6D reconstructs the rotation primitive and its associated keypoints for the target object to enhance orientation inference.
-
PrimA6D++ (RA-L 2022)
- PrimA6D++ estimates three rotation axis primitive images and their associated uncertainties.
- With estimated uncertainties, PrimA6D++ handles object ambiguity without prior information on object shape.
-
Object-SLAM for Multi-Object Pose Optimization (RA-L 2022)
- Leveraging the uncertainty, we formulate the problem as an object-SLAM to optimize multi-object poses.
-
Download Repo
$ git clone [email protected]:rpmsnu/PrimA6D.git
-
Docker Image Download & Run
We provide a Docker image with the environment already set up; you can pull it from Docker Hub:
$ docker pull jmong1994/jeon:prima6d_new
-
3D Model Download
- Download 3D Models.
$ mv 3d_models.zip /path/to/PrimA6D/Pose-Estimation/dataset/3d_model/
$ cd /path/to/PrimA6D/Pose-Estimation/dataset/3d_model/
$ unzip 3d_models.zip
-
Dataset Download
- Download the Sun2012Pascal and BOP datasets
$ cd /path/to/PrimA6D/Pose-Estimation/dataset/raw_dataset
$ bash get_sun2012pascalformat.sh
$ cd bop
$ bash get_bop_ycbv.sh
-
Inference (PrimA6D)
- Run docker
$ xhost +local:docker
$ docker run --gpus all -it --env="DISPLAY" --net=host --ipc=host --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v /:/mydata jmong1994/jeon:prima6d_new bash
$ export PrimA6D_path="/path/to/PrimA6D"
- Prepare data
$ cd $PrimA6D_path/Pose-Estimation/dataset/YCB
$ python3 YCB_test.py -o=[obj_id]
For example, to prepare YCB object No. 1:
python3 YCB_test.py -o=1
- Test Model
$ cd $PrimA6D_path/Pose-Estimation/PrimA6D
$ python3 4_test_all.py -o=[obj_id] -w
For example, to infer YCB object No. 1:
python3 4_test_all.py -o=1 -w
For the corresponding object:
- Download the PrimA6D weights and save them to $PrimA6D_path/Pose-Estimation/PrimA6D/trained_weight.
- Download the Segmentation weights and save them to $PrimA6D_path/Pose-Estimation/Segmentation/trained_weight.
-
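Before running `4_test_all.py`, it can help to sanity-check that both weight directories are in place; a minimal sketch, assuming `PrimA6D_path` is exported as in the steps above (the fallback path is only a placeholder):

```shell
# Sanity-check that the downloaded weights are where the test script expects.
# PrimA6D_path is assumed to be exported already; the fallback is a placeholder.
PrimA6D_path="${PrimA6D_path:-/path/to/PrimA6D}"
missing=0
for d in "$PrimA6D_path/Pose-Estimation/PrimA6D/trained_weight" \
         "$PrimA6D_path/Pose-Estimation/Segmentation/trained_weight"; do
  if [ -d "$d" ]; then
    echo "ok: $d"
  else
    echo "missing: $d"
    missing=$((missing + 1))
  fi
done
echo "$missing directory(ies) missing"
```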
Train (PrimA6D)
- Run docker
$ xhost +local:docker
$ docker run --gpus all -it --env="DISPLAY" --net=host --ipc=host --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v /:/mydata jmong1994/jeon:prima6d_new bash
$ export PrimA6D_path="/path/to/PrimA6D"
- Prepare data
$ cd $PrimA6D_path/Pose-Estimation/dataset/YCB
$ python3 YCB_train_synthetic.py -o=[obj_id]
$ python3 YCB_train_pbr.py -o=[obj_id]
$ python3 YCB_train_real.py -o=[obj_id]
$ python3 YCB_test.py -o=[obj_id]
For example, to prepare YCB object No. 1:
python3 YCB_train_synthetic.py -o=1
- Train & Test model
$ cd $PrimA6D_path/Pose-Estimation/PrimA6D
$ python3 1_train_generator.py -o=[obj_id]
$ python3 2_train_keypoint.py -o=[obj_id]
$ python3 3_train_translation.py -o=[obj_id]
$ python3 4_test_all.py -o=[obj_id]
For example, to train YCB object No. 1:
python3 1_train_generator.py -o=1
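The four stages above can also be chained in one script that stops at the first failure; a sketch in which `DRY_RUN` and the `run` helper are illustrative additions (by default it only prints each command, so it is safe to run outside the container):

```shell
# Chain the PrimA6D training stages for one object, stopping on first failure.
# DRY_RUN=1 (default) only prints each command; set DRY_RUN=0 inside the
# container to actually execute them.
set -e
OBJ_ID="${OBJ_ID:-1}"
DRY_RUN="${DRY_RUN:-1}"
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}
run python3 1_train_generator.py -o="$OBJ_ID"
run python3 2_train_keypoint.py -o="$OBJ_ID"
run python3 3_train_translation.py -o="$OBJ_ID"
run python3 4_test_all.py -o="$OBJ_ID"
```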
-
Inference (PrimA6D++)
- Run docker
$ xhost +local:docker
$ docker run --gpus all -it --env="DISPLAY" --net=host --ipc=host --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v /:/mydata jmong1994/jeon:prima6d_new bash
$ export PrimA6D_path="/path/to/PrimA6D"
- Prepare data
$ cd $PrimA6D_path/Pose-Estimation/dataset/YCB
$ python3 YCB_test.py -o=[obj_id]
For example, to prepare YCB object No. 1:
python3 YCB_test.py -o=1
- Test model
$ cd $PrimA6D_path/Pose-Estimation/PrimA6D++
$ python3 test_prima6d.py -o=[obj_id] -w
For example, to infer YCB object No. 1:
python3 test_prima6d.py -o=1 -w
For the corresponding object:
- Download the PrimA6D++ weights and save them to $PrimA6D_path/Pose-Estimation/PrimA6D/trained_weight.
- Download the Segmentation weights and save them to $PrimA6D_path/Pose-Estimation/Segmentation/trained_weight.
-
Train (PrimA6D++)
- Run docker
$ xhost +local:docker
$ docker run --gpus all -it --env="DISPLAY" --net=host --ipc=host --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v /:/mydata jmong1994/jeon:prima6d_new bash
$ export PrimA6D_path="/path/to/PrimA6D"
- Prepare data
$ cd $PrimA6D_path/Pose-Estimation/dataset/YCB
$ python3 YCB_train_synthetic.py -o=[obj_id]
$ python3 YCB_train_pbr.py -o=[obj_id]
$ python3 YCB_train_real.py -o=[obj_id]
$ python3 YCB_test.py -o=[obj_id]
For example, to prepare YCB object No. 1:
python3 YCB_train_synthetic.py -o=1
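All four preparation scripts for one object can be looped over in one go; a dry-run sketch (the `echo` is a stand-in so the loop only prints the commands; drop it to actually execute them inside the container):

```shell
# Print the four data-preparation commands for one object; remove the `echo`
# to actually run them inside the container.
OBJ_ID=1
for script in YCB_train_synthetic.py YCB_train_pbr.py YCB_train_real.py YCB_test.py; do
  echo python3 "$script" -o="$OBJ_ID"
done
```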
- Train & Test model
$ cd $PrimA6D_path/Pose-Estimation/PrimA6D++
$ python3 train_prima6d.py -o=[obj_id]
$ python3 test_prima6d.py -o=[obj_id]
For example, to train YCB object No. 1:
python3 train_prima6d.py -o=1
-
Real-Time Demo with ROS
-
Prepare data
- Download simple_demo.bag.
-
Run model
$ roscore
In the new terminal,
$ xhost +local:docker
$ docker run --gpus all -it --env="DISPLAY" --net=host --ipc=host --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v /:/mydata jmong1994/jeon:prima6d_new bash
$ export PrimA6D_path="/path/to/PrimA6D"
$ cd $PrimA6D_path/Pose-Estimation/ros
$ python3 ros_PrimD_torch.py -o=[obj_id]
For example, to run YCB objects No. 4 and No. 5:
python3 ros_PrimD_torch.py -o="4 5"
In the new terminal,
$ rosbag play simple_demo.bag
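The node and bag playback above can also be collected in a single dry-run sketch; `DRY_RUN` and the `run` helper are illustrative additions (by default the commands are only printed, since each normally runs in its own terminal with roscore already up):

```shell
# Dry-run sketch of the demo sequence. With DRY_RUN=1 (default) the commands
# are only printed; set DRY_RUN=0 to launch them in the background instead.
DRY_RUN="${DRY_RUN:-1}"
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@" &
  fi
}
run python3 ros_PrimD_torch.py -o="4 5"  # pose-estimation node for objects 4 and 5
run rosbag play simple_demo.bag          # replay the recorded demo data
```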
- Check result: you can view the result in rviz in a new terminal.
You can also run this real-time demo with any camera that provides 640x480 RGB images.
-
-
Download Data Processor for the file player
- Refer to Data Processor.
-
Install osmesa
$ export PrimA6D_path="/path/to/PrimA6D"
$ cd $PrimA6D_path/Multi-Object-Pose-Optimization/src/pose_optimization/bop_renderer/osmesa-install/build
$ sudo mkdir /opt/osmesa
$ sudo chmod 777 /opt/osmesa
$ sudo mkdir /opt/llvm
$ sudo chmod 777 /opt/llvm
$ sudo bash ../osmesa-install.sh
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/osmesa/lib:/opt/llvm/lib
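The `LD_LIBRARY_PATH` export above only lasts for the current shell; one option is to write it into a small env file you `source` in later sessions (the file name `prima6d_env.sh` is just an example):

```shell
# Persist the renderer's library path in a small env file to source later.
# The file name prima6d_env.sh is an arbitrary choice.
cat > prima6d_env.sh <<'EOF'
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/osmesa/lib:/opt/llvm/lib
EOF
echo "wrote prima6d_env.sh; run 'source prima6d_env.sh' in new shells"
```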
-
Build ROS package
$ export PrimA6D_path="/path/to/PrimA6D"
$ cd $PrimA6D_path/Multi-Object-Pose-Optimization
$ catkin build --save-config --cmake-args -DCMAKE_BUILD_TYPE=Release
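After the build, you can quickly check that catkin produced the workspace overlay before launching anything; a sketch, with paths taken from the step above (the fallback path is a placeholder):

```shell
# Check that catkin produced the workspace overlay before launching.
WS="${PrimA6D_path:-/path/to/PrimA6D}/Multi-Object-Pose-Optimization"
if [ -f "$WS/devel/setup.bash" ]; then
  echo "workspace built: source $WS/devel/setup.bash"
else
  echo "workspace not built yet: run catkin build in $WS"
fi
```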
-
Download 3D models
- Download 3d_model.
- Unzip it to $PrimA6D_path/Multi-Object-Pose-Optimization/src/pose_optimization/3d_model/
-
Run
$ cd $PrimA6D_path/Multi-Object-Pose-Optimization
$ source devel/setup.bash
$ roslaunch pose_optimization run_franka.launch
Then play the sensor data using the Data Processor.
Please consider citing the papers as:
@ARTICLE{jeon-2020-prima6d,
author={Jeon, Myung-Hwan and Kim, Ayoung},
journal={IEEE Robotics and Automation Letters},
title={PrimA6D: Rotational Primitive Reconstruction for Enhanced and Robust 6D Pose Estimation},
year={2020},
volume={5},
number={3},
pages={4955-4962},
doi={10.1109/LRA.2020.3004322}
}
@ARTICLE{jeon-2022-prima6d,
author={Jeon, Myung-Hwan and Kim, Jeongyun and Ryu, Jee-Hwan and Kim, Ayoung},
journal={IEEE Robotics and Automation Letters},
title={Ambiguity-Aware Multi-Object Pose Optimization for Visually-Assisted Robot Manipulation},
year={2023},
volume={8},
number={1},
pages={137-144},
doi={10.1109/LRA.2022.3222998}
}
If you have any questions, please contact us here.