This is the official PyTorch implementation of "A cross-feature interaction network for 3D human pose estimation" (Pattern Recognition Letters 2025).
Our code is tested under the following environment:
- Ubuntu 20.04
- CUDA 11.2
- Python 3.7.13
- PyTorch 1.8.1
- Matplotlib 3.1.0
Our model is evaluated on Human3.6M and MPI-INF-3DHP.
We set up the Human3.6M dataset in the same way as VideoPose3D.
We set up the MPI-INF-3DHP dataset in the same way as P-STMO.
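As a quick sanity check after preparing the data, the following sketch verifies that the preprocessed Human3.6M files are in place. The file names below assume VideoPose3D's naming convention and may differ in your setup:

```shell
# Hedged sketch: file names follow VideoPose3D's convention; adjust them
# if your preprocessing produced different names or a different directory.
DATA_DIR=data
for f in data_3d_h36m.npz data_2d_h36m_gt.npz data_2d_h36m_cpn_ft_h36m_dbb.npz; do
  if [ -f "$DATA_DIR/$f" ]; then
    echo "found:   $DATA_DIR/$f"
  else
    echo "missing: $DATA_DIR/$f"
  fi
done
```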
To train the model on Human3.6M using 2D keypoints detected by CPN, run:
python run.py -k cpn_ft_h36m_dbb --train --batch_size 512 --epoch 20
To train the model on Human3.6M using ground-truth 2D keypoints, run:
python run.py -k gt --train --batch_size 256 --epoch 20
You can download our pre-trained models from Google Drive. Put model_cfi_gt.pth and model_cfi_cpn.pth in the ./checkpoint directory. Both models are trained on the Human3.6M dataset.
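If the checkpoint directory does not exist yet, create it before placing the downloaded models there:

```shell
# Create the checkpoint directory expected by the evaluation commands.
mkdir -p checkpoint
```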
To evaluate the model trained on Human3.6M using 2D keypoints detected by CPN, run:
python run.py -k cpn_ft_h36m_dbb --evaluate --previous_dir checkpoint/model_cfi_cpn.pth
To evaluate the model trained on Human3.6M using ground-truth 2D keypoints, run:
python run.py -k gt --evaluate --previous_dir checkpoint/model_cfi_gt.pth
To evaluate the model on the test set of MPI-INF-3DHP, run:
python run.py --dataset 3dhp -k cpn_ft_h36m_dbb --evaluate --previous_dir checkpoint/model_cfi_cpn.pth
Our code builds on the following repositories.
We thank the authors for releasing their code. If you use our code, please consider citing our paper as well.