Code repository for the paper:
MoMa: Skinned Motion Retargeting Using Masked Pose Modeling
Giulia Martinelli, Nicola Garau, Niccolò Bisagno, Nicola Conci
CVIU 2024
[paper] [project page]
Run the following commands to set up the virtual environment:

```bash
pipenv install
pip install -r requirements.txt
```
In addition, install the following dependencies:

- PyTorch3D: Follow the official installation guide
  🔗 PyTorch3D Installation
- Blender: Refer to the official documentation
  🔗 Blender API Quickstart
- Mesh Intersection: Follow the installation instructions from the official repository
  🔗 Torch Mesh Intersection
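If you want to confirm that these dependencies are visible to your environment, a minimal import check such as the sketch below can help. The module names are assumptions based on the linked projects (`pytorch3d` for PyTorch3D, `bpy` for the Blender Python API, `mesh_intersection` for Torch Mesh Intersection) and may differ depending on how you installed them.

```python
# Minimal import check for the extra dependencies (a sketch, not part of the repo).
# Module names below are assumptions based on the linked projects.
for name in ("torch", "pytorch3d", "bpy", "mesh_intersection"):
    try:
        __import__(name)
        print(f"[ok]      {name}")
    except ImportError as exc:
        print(f"[missing] {name}: {exc}")
```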
Download the following datasets and organize them in the specified folders:
- Download animations from Mixamo.
- Organize them as:

  ```
  MIXAMO/
  ├── Character_1/
  │   ├── animation_1.bvh
  │   ├── animation_2.bvh
  │   └── ...
  ├── Character_2/
  ├── Character_3/
  └── ...
  ```
- Download the dog animations from AI4Animation.
- Place all `.bvh` files inside:

  ```
  HumanDog/
  ├── Dog/
  │   ├── animation_1.bvh
  │   ├── animation_2.bvh
  │   └── ...
  ```
- Download human animations from the Ubisoft LaForge Animation Dataset.
- Place all `.bvh` files inside:

  ```
  HumanDog/
  ├── Human/
  │   ├── animation_1.bvh
  │   ├── animation_2.bvh
  │   └── ...
  ```
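As a quick sanity check of the layout above, the hypothetical snippet below walks a dataset root and counts the `.bvh` files in each subfolder. The helper and the paths are placeholders, not part of the repository.

```python
from pathlib import Path

# Hypothetical helper (not part of the repo): list each subfolder of a dataset
# root and count its .bvh files, so you can confirm the expected layout.
def summarize_bvh(root):
    root = Path(root)
    if not root.is_dir():
        print(f"{root} not found")
        return
    for sub in sorted(p for p in root.iterdir() if p.is_dir()):
        n_files = len(list(sub.glob("*.bvh")))
        print(f"{root.name}/{sub.name}: {n_files} .bvh files")

# Paths are placeholders; point them at your local copies.
summarize_bvh("MIXAMO")    # expects Character_1/, Character_2/, ...
summarize_bvh("HumanDog")  # expects Human/ and Dog/
```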
Before starting training, configure the dataset settings in `data_flags.py`.
Modify the following variables:

- `dataset_path`: Path to the dataset directory.
  - Mixamo dataset: the main folder should contain multiple character subfolders, each with `.bvh` animation files.
  - HumanDog dataset: the main folder should contain two subfolders:
    - `Human/` → contains `.bvh` animation files for human motions.
    - `Dog/` → contains `.bvh` animation files for dog motions.
- `dataset`: Choose the dataset name:
  - `"MIXAMO"` → for the Mixamo dataset.
  - `"HumanDog"` → for the HumanDog dataset.
- `n_joints`: Number of joints in the dataset's superskeleton:
  - `25` → for Mixamo.
  - `26` → for HumanDog.
- `mode`: Set the mode to `"train"` to enable training.
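For reference, a Mixamo configuration of these flags might look like the sketch below. It assumes `data_flags.py` uses plain variable assignments; check the actual file for the exact format.

```python
# Sketch of data_flags.py settings for the Mixamo dataset
# (assumes plain assignments; adapt to the file's actual structure).
dataset_path = "/path/to/MIXAMO"  # one subfolder per character, each with .bvh files
dataset = "MIXAMO"                # or "HumanDog"
n_joints = 25                     # 25 for Mixamo, 26 for HumanDog
mode = "train"                    # enables training
```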
Once the dataset configuration is set, start training by running:

```bash
python main.py
```
Once the model is trained, collisions in the resulting animation can be resolved by running:

```bash
python shape_optimization.py
```

Remember to set the correct character name and `bvh_path` in the `shape_optimization.py` file.
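For illustration, the values to edit might look like the sketch below. `bvh_path` is the variable named above, while the character variable name is a placeholder, so check `shape_optimization.py` for the actual identifiers.

```python
# Illustrative values to edit in shape_optimization.py
# (the character variable name is a placeholder; only bvh_path is named above).
character_name = "Character_1"                 # character used for the optimization
bvh_path = "path/to/retargeted_animation.bvh"  # .bvh file the script operates on
```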
The checkpoints can be downloaded here:
If you find this code useful for your research or use the data generated by our method, please consider citing the following paper:
```bibtex
@article{martinelli2024moma,
  title={MoMa: Skinned motion retargeting using masked pose modeling},
  author={Martinelli, Giulia and Garau, Nicola and Bisagno, Niccol{\`o} and Conci, Nicola},
  journal={Computer Vision and Image Understanding},
  volume={249},
  pages={104141},
  year={2024},
  publisher={Elsevier}
}
```