
FastGrasp: Efficient Grasp Synthesis with Diffusion

Xiaofei Wu · Tao Liu · Caoji Li · Yuexin Ma · Yujiao Shi · Xuming He

3DV 2025

Paper PDF


This repo contains the training and evaluation code for FastGrasp on the OakInk-Shape, GRAB, and HO-3D datasets.


Installation

Create a conda env from environment.yml:

conda env create -f environment.yml  
conda activate fast_grasp  

Install dependencies:

pip install -r requirements.txt
pip install -r [email protected]
conda install -c conda-forge igl
git clone https://github.com/lixiny/manotorch.git
cd manotorch
pip install .
cd ..
git clone https://github.com/SLIDE-3D/SLIDE.git
cd SLIDE/pointnet2_ops_lib
pip install -e .

If the installation of pytorch3d fails, please refer to the link.

  • Your CUDA (nvcc) version may not be supported; CUDA 11.4 meets the version requirements.
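As a quick sanity check, the core dependencies installed above should all be importable. A minimal sketch (package names are taken from the projects referenced above; adjust if your environment differs):

# Sanity check: the key packages installed above should import cleanly.
import torch
import pytorch3d      # from the pytorch3d install step
import igl            # from conda-forge
import manotorch      # from the manotorch repo
import pointnet2_ops  # from SLIDE/pointnet2_ops_lib

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("pytorch3d", pytorch3d.__version__)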

Get the MANO hand model:

cp -r {path_to}/mano_v1_2 ./assets

Download the pretrained model weights from Hugging Face and put the contents in ./checkpoints.
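Before running anything, you can verify that the assets and checkpoints are in place. A minimal sketch; the checkpoint names come from the directory tree below, and the MANO_RIGHT.pkl path assumes the standard mano_v1_2 release layout:

import os

# Paths taken from this README; mano_v1_2/models/MANO_RIGHT.pkl assumes the
# standard MANO v1.2 release layout.
expected = [
    "assets/mano_v1_2/models/MANO_RIGHT.pkl",
    "checkpoints/ae.pth",
    "checkpoints/diffusion.pth",
    "checkpoints/am.pth",
]
for path in expected:
    print(("ok      " if os.path.exists(path) else "MISSING ") + path)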

Data preparation

  • Install V-HACD to build the grasp displacement simulation. You need to change the path here and here (it must be an absolute path). The repository already contains testVHACD; if you run into any problems, please try installing it yourself.
  • Download the processed GRAB dataset from here and unzip it into the current directory, e.g. FastGrasp/grab_data.
  • Download the processed OakInk-Shape dataset from Hugging Face and unzip it into FastGrasp/data/.
  • Download the HO-3D object models from here, unzip them, and put them into FastGrasp/dataset/HO3D_Object_models.

The file directory structure is as follows:

FastGrasp/
  assets/
    mano_v1_2/
  checkpoints/
    ae.pth
    diffusion.pth
    am.pth
  data/
    evaluation/
      obj_faces_test.npy
      obj_verts_test.npy
      ......
    precessed/
      hand_param_test.npy
      obj_pc_test.npy
      ......
  grab_data/
    sbj_info.npy
    ......
  dataset/
    HO3D_Object_models/
      003_cracker_box/
      ......
  testV-HACD

If you want to process your own datasets, please refer to here; this file is used to preprocess the OakInk dataset to speed up model training.
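A small sketch to check that the downloaded data landed in the directories shown in the tree above (directory names are copied from the tree; nothing here is required by the training or evaluation code itself):

import os

# Directory names copied from the tree above; adjust if your layout differs.
for d in ["data/evaluation", "data/precessed", "grab_data",
          "dataset/HO3D_Object_models"]:
    print(("ok      " if os.path.isdir(d) else "MISSING ") + d)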

Evaluation

Evaluate grasp quality

The evaluation metrics include:

  • Penetration Depth
  • Penetration Volume
  • Simulation Displacement
  • Contact Ratio
  • Entropy
  • Cluster Size
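For intuition, below is a rough sketch of how the penetration metrics are commonly computed in GraspTTA-style evaluations: the deepest hand vertex inside the object gives the depth, and counting object voxels that fall inside the hand approximates the volume. This is an illustrative approximation using trimesh, not the exact implementation in this repo.

import trimesh

def penetration_metrics(hand_mesh, obj_mesh, pitch=0.005):
    """Illustrative penetration depth / volume (not this repo's exact code).

    hand_mesh: trimesh.Trimesh of the predicted hand, in the object frame.
    obj_mesh:  watertight trimesh.Trimesh of the object.
    pitch:     voxel size in meters for the volume approximation.
    """
    # Depth: deepest hand vertex inside the object. trimesh's signed
    # distance is positive for points inside the mesh.
    sdf = trimesh.proximity.signed_distance(obj_mesh, hand_mesh.vertices)
    depth = float(max(sdf.max(), 0.0))

    # Volume: object voxels whose centers fall inside the hand mesh.
    vox = obj_mesh.voxelized(pitch).fill()
    inside_hand = hand_mesh.contains(vox.points)
    volume = float(inside_hand.sum()) * pitch ** 3
    return depth, volume

Simulation displacement, by contrast, is measured by dropping the grasped object in a physics simulation (with V-HACD convex decompositions as collision geometry, see Data preparation) and recording how far it drifts from its initial pose.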

After executing the evaluation, the visualization results will be automatically saved in the same path as diffusion.pth. For the specific visualization code, please refer to here and here.

Evaluate the full pipeline

GRAB dataset

python eval_adapt.py --config config/grab/eval_adapt.json  

OakInk dataset

python eval_adapt.py --config config/oakink/eval_adapt.json  

HO-3D dataset

python eval_ho3d.py --config config/grab/ho3d.json  

Evaluate the autoencoder module

GRAB dataset

python eval_ae.py --config config/grab/eval_ae.json  

OakInk dataset

python eval_ae.py --config config/oakink/eval_ae.json  

  • guide_w: classifier guidance weight
  • ddim_step: number of DDIM inference steps
  • penetr_vol_thre and simu_disp_thre follow GraspTTA.
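As an illustration of where these options live, a minimal sketch that reads them from an evaluation config; the key names and flat layout here are assumptions, so check the actual config/grab/eval_adapt.json for the real structure:

import json

with open("config/grab/eval_adapt.json") as f:
    cfg = json.load(f)

# Key names are assumptions for illustration; see the real config file.
print("guide_w        :", cfg.get("guide_w"))          # classifier guidance weight
print("ddim_step      :", cfg.get("ddim_step"))        # DDIM inference steps
print("penetr_vol_thre:", cfg.get("penetr_vol_thre"))  # follows GraspTTA
print("simu_disp_thre :", cfg.get("simu_disp_thre"))   # follows GraspTTA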

Please note that the pre-trained models use checkpoints obtained from training on the GRAB dataset. Since simulation is required to calculate displacement, the evaluation time depends on the size of the dataset.

The evaluation times are as follows:

  1. HO-3D takes approximately 5 minutes
  2. GRAB takes around 6 minutes
  3. OakInk takes about 5 hours

The evaluation results will be saved under args.diffusion.path/.

Training

We first detail the main argparse option:

  • --config: path to the config file.

For details please refer to config.json. Training checkpoints will be saved under logs/.

It takes about one week to fully train the entire pipeline, but since it is decoupled into three parts, you can use any of the pre-trained modules and train on your own data directly. For example, use the pre-trained AE and diffusion modules and only train the adapt module (represented as the adapt layer in the code).

OakInk dataset training

Train the autoencoder, diffusion, and adapt modules on the OakInk-Shape train set:

# train autoencoder model 
python train_ae.py --config config/oakink/ae.json

# train diffusion model
python train_diffusion.py --config config/oakink/diffusion.json

# train adapt module
python train_adapt.py --config config/oakink/adapt.json

GRAB dataset training

Train the autoencoder, diffusion, and adapt modules on the GRAB train set:

# train autoencoder model 
python train_ae.py --config config/grab/ae.json

# train diffusion model
python train_diffusion.py --config config/grab/diffusion.json

# train adapt module
python train_adapt.py --config config/grab/adapt.json

Please note that at each stage, you need to use the checkpoint obtained from the previous stage of training.

You need to set file_name in the JSON config file; this is the path where your log files are saved.
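For example, one way to point a training config at your own log directory; file_name is the key described above, while the config path and flat key layout are assumptions for illustration:

import json

cfg_path = "config/oakink/adapt.json"   # any of the training configs
with open(cfg_path) as f:
    cfg = json.load(f)

# file_name is the path where logs and checkpoints for this run are written;
# the flat key layout here is an assumption about the config structure.
cfg["file_name"] = "logs/my_adapt_run"

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)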

Citation

If you find FastGrasp useful for your research, please consider citing us:

@inproceedings{Wu2024FastGraspEG,
  title={FastGrasp: Efficient Grasp Synthesis with Diffusion},
  author={Xiaofei Wu and Tao Liu and Caoji Li and Yuexin Ma and Yujiao Shi and Xuming He},
  year={2024},
  url={https://api.semanticscholar.org/CorpusID:274192568}
}
