Learning "near-contact" grasping strategy with Deep Reinforcement Learning

This is an implementation of Deep Deterministic Policy Gradient from Demonstration (DDPGfD) used to train a policy for "near-contact" grasping tasks, in which the object's starting position is randomized within the graspable region. We take one "near-contact" strategy from this paper as the expert demonstration and train an RL controller to handle a variety of objects with random starting positions.
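
As a rough illustration of the DDPGfD ingredient, the sketch below (a minimal example, not the repository's actual replay-buffer class; all names are hypothetical) shows the core idea: expert demonstration transitions are kept permanently in the replay buffer and mixed into every training batch alongside the agent's own experience.

import random

class MixedReplayBuffer:
    """Sketch of a replay buffer that mixes expert demonstrations into every batch."""

    def __init__(self, demo_transitions, capacity=100000):
        self.demo = list(demo_transitions)   # expert "near-contact" grasps, never evicted
        self.agent = []                      # transitions collected by the learning policy
        self.capacity = capacity

    def add(self, transition):
        # transition = (state, action, reward, next_state, done)
        self.agent.append(transition)
        if len(self.agent) > self.capacity:
            self.agent.pop(0)

    def sample(self, batch_size, demo_fraction=0.25):
        # Draw a fixed fraction of each batch from the demonstrations,
        # and the rest from the agent's own experience.
        n_demo = min(int(batch_size * demo_fraction), len(self.demo))
        n_agent = min(batch_size - n_demo, len(self.agent))
        return random.sample(self.demo, n_demo) + random.sample(self.agent, n_agent)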

The environment runs on MuJoCo and is integrated with OpenAI Gym to facilitate the data collection and training process.

Requirements: PyTorch 1.2.0 and Python 3.7
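
A minimal environment setup might look like the following (the unpinned gym and mujoco-py packages are assumptions; MuJoCo itself must be installed and licensed separately):

pip install torch==1.2.0
pip install gym
pip install mujoco-py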

Instructions

There are three experiments, each run under two conditions: with and without the grasp classifier. In all cases, the state space is expressed in the global coordinate system.

In kinova_env_gripper.py, find the randomize_all function and change the arguments of self.experiment to the desired experiment and stage numbers. For example, to run experiment 1 stage 1, set line 581 to objects = self.experiment(1, 1); the first argument is the experiment number and the second is the stage number (see the sketch below). Then run the corresponding command from the list that follows.
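
For illustration, the edit looks roughly like this (only the self.experiment(...) call and its location are taken from the repository; the comments are explanatory):

# In kinova_env_gripper.py, inside randomize_all(), around line 581:
objects = self.experiment(1, 1)   # first argument: experiment number, second: stage number
# e.g. self.experiment(2, 2) would select experiment 2, stage 2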

Terminal commands

Experiments without grasp classifier

Experiment 1 stage 1 (varying sizes)

python main_DDPGfD.py --tensorboardindex exp1s1_wo_graspclassifier --saving_dir exp1s1_wo_graspclassifier

Experiment 1 stage 2 (varying shapes)

python main_DDPGfD.py --tensorboardindex exp1s2_wo_graspclassifier --saving_dir exp1s2_wo_graspclassifier

Experiment 2 stage 1 (varying shapes)

python main_DDPGfD.py --tensorboardindex exp2s1_wo_graspclassifier --saving_dir exp2s1_wo_graspclassifier

Experiment 2 stage 2 (varying sizes)

python main_DDPGfD.py --tensorboardindex exp2s2_wo_graspclassifier --saving_dir exp2s2_wo_graspclassifier

Experiment 3 (all objects)

python main_DDPGfD.py --tensorboardindex exp3_wo_graspclassifier --saving_dir exp3_wo_graspclassifier

Experiments with grasp classifier

Experiment 1 stage 1 (varying sizes)

python main_DDPGfD.py --tensorboardindex exp1s1_w_graspclassifier --saving_dir exp1s1_w_graspclassifier

Experiment 1 stage 2 (varying shapes)

python main_DDPGfD.py --tensorboardindex exp1s2_w_graspclassifier --saving_dir exp1s2_w_graspclassifier

Experiment 2 stage 1 (varying shapes)

python main_DDPGfD.py --tensorboardindex exp2s1_w_graspclassifier --saving_dir exp2s1_w_graspclassifier

Experiment 2 stage 2 (varying sizes)

python main_DDPGfD.py --tensorboardindex exp2s2_w_graspclassifier --saving_dir exp2s2_w_graspclassifier

Experiment 3 (all objects)

python main_DDPGfD.py --tensorboardindex exp3_w_graspclassifier --saving_dir exp3_w_graspclassifier
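
Training progress is logged for TensorBoard under the name given by --tensorboardindex; assuming the default log location used by the scripts (the exact directory is an assumption, check main_DDPGfD.py), it can be monitored with:

tensorboard --logdir <path_to_log_directory>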

About

This repository contains a simulation of a Kinova robot and the code for collecting data and training both a grasp classifier and an RL agent.
