This is the Transformer-based model for the HandMeThat dataset (version 2). See the project page for details.
Please refer to the HandMeThat benchmark and download the version 2 dataset.
Clone the required repositories (this one and Jacinle) into the same directory as HandMeThat.
Prepare the environment.
cd FISER
conda activate hand-me-that
Add the packages to your PYTHONPATH environment variable.
export PYTHONPATH=.:$PYTHONPATH:<path_to_HandMeThat>:<path_to_Jacinle>
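For example, with this repository, HandMeThat, and Jacinle checked out as sibling directories (a hypothetical layout; adjust the paths to your own setup), the export might look like:
export PYTHONPATH=.:$PYTHONPATH:../HandMeThat:../Jacinle
If the paths are correct, python -c "import jacinle" should succeed from inside FISER.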
Preprocess the dataset into tensors. Set <GOAL> from 0 to 24 to go over all the data.
python dataloaders/store_to_disk.py -goal <GOAL>
The tensors are stored in a separate folder, <DATA_DIR>/preprocessed.
Or, directly use the following command.
bash store_data_to_disk.sh
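If you prefer to run the preprocessing manually, a minimal loop over all goal indices (a sketch using only the -goal flag shown above; store_data_to_disk.sh likely does something similar) is:
for GOAL in $(seq 0 24); do
    python dataloaders/store_to_disk.py -goal $GOAL
done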
Set the data path and model path in scripts/train.py.
Train the end-to-end model:
python scripts/train.py --device cuda:0 --seed 0 --pipeline e2e --mid obj --num_epoch <N_EPOCH>
Train the reasoning model:
python scripts/train.py --device cuda:0 --seed 0 --pipeline r --mid obj --num_epoch <N_EPOCH>
Train the planning model:
python scripts/train.py --device cuda:0 --seed 0 --pipeline p --mid obj --num_epoch <N_EPOCH>
Or, directly use the following command.
bash local_submit_job.sh <device> <pipeline> <mid>
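For example, to submit an end-to-end training job on GPU 0 with the obj mid setting (argument values taken from the commands above; the exact argument handling depends on local_submit_job.sh):
bash local_submit_job.sh cuda:0 e2e obj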
Set the data path and model path in scripts/eval_in_env.py.
Test the end-to-end model:
python scripts/eval_in_env.py --device cuda:0 --e2e_seed 0 --pipeline e2e --mid obj --e2e_epoch_id <E2E_ID>
Test the reasoning + planning model:
python scripts/eval_in_env.py --device cuda:0 --r_seed 0 --p_seed 0 --pipeline rp --mid obj --r_epoch_id <R_ID> --p_epoch_id <P_ID>