Hand-object Interaction Pretraining from Videos

This repo contains code for the paper Hand-object Interaction Pretraining from Videos.

For a brief overview, check out the project webpage!

For any questions, please contact Himanshu Gaurav Singh.

Setup

  • Create a conda environment with conda env create -f env.yml (see the sketch after this list).
  • Install IsaacGym in this environment.
  • Download the asset folder and put it in the root directory.
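
A minimal sketch of the environment setup, assuming the environment defined in env.yml is named hoi (check the name field in env.yml for the actual name) and that IsaacGym is installed from its standard python package directory:

    # Create and activate the conda environment.
    # The environment name "hoi" is an assumption; use the name defined in env.yml.
    conda env create -f env.yml
    conda activate hoi

    # Install IsaacGym into the same environment (standard IsaacGym install procedure).
    cd /path/to/isaacgym/python
    pip install -e .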

Running the code

Pretraining

  • Download the hand-object interaction dataset from here. Extract it with tar -xf hoi_pretraining_data.tar.xz and place it in the root directory.
  • Run bash scripts/pretrain.sh <DATADIR> (see the example below).
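
As an example, assuming the extracted dataset directory is named hoi_pretraining_data and lives in the repository root (the directory name is an assumption based on the archive name):

    # Extract the dataset and launch pretraining.
    # The data directory name is assumed from the archive name; adjust if yours differs.
    tar -xf hoi_pretraining_data.tar.xz
    bash scripts/pretrain.sh hoi_pretraining_data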

Finetuning

  • Download a pretrained checkpoint from here, or use a checkpoint you trained yourself.
  • For your choice of task, run bash scripts/finetune/finetune_{task}.sh (see the example below).
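
As an example, assuming a task named pickup with a corresponding script scripts/finetune/finetune_pickup.sh (the task name is hypothetical; use one of the scripts that actually exists under scripts/finetune/):

    # Finetune on a hypothetical task named "pickup"; substitute a script present in scripts/finetune/.
    bash scripts/finetune/finetune_pickup.sh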

Visualising trained policies

  • Run bash scripts/run_policy.sh <PATH_TO_POLICY> (see the example below).
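
As an example, assuming a finetuned checkpoint saved at runs/finetune_pickup/policy.pt (a hypothetical path; point the script at your own checkpoint file):

    # Visualise a trained policy; the checkpoint path below is hypothetical.
    bash scripts/run_policy.sh runs/finetune_pickup/policy.pt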

Citation

Acknowledgment

This work was supported by the DARPA Machine Common Sense program, the DARPA Transfer from Imprecise and Abstract Models to Autonomous Technologies (TIAMAT) program, and by the ONR MURI award N00014-21-1-2801. This work was also funded by ONR MURI N00014-22-1-2773. We thank Adhithya Iyer for assistance with teleoperation systems, Phillip Wu for setting up the real robot, and Raven Huang, Jathushan Rajasegaran and Yutong Bai for helpful discussions.
