AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing (ECCV 2022, Official Code)
Jiaxi Jiang¹, Paul Streli¹, Huajian Qiu¹, Andreas Fender¹, Larissa Laich², Patrick Snape², Christian Holz¹
¹ Sensing, Interaction & Perception Lab, Department of Computer Science, ETH Zürich, Switzerland
² Reality Labs at Meta, Zurich, Switzerland
🔥🔥 2024-07: Our follow-up work EgoPoser is accepted by ECCV 2024! EgoPoser focuses on egocentric inside-out body tracking by spatio-temporal motion decomposition, hand-tracking modeling, and body shape estimation.
🔥🔥 2024-07: Our follow-up work MANIKIN is accepted by ECCV 2024! MANIKIN focuses on biomechanically accurate body modeling and efficient neural inverse kinematics.
🚀🚀 2024-06: AvatarPoser serves as a state-of-the-art baseline on the Nymeria dataset (ECCV 2024). Nymeria is the world's largest human motion dataset, with egocentric videos recorded using Meta's Aria glasses.
🚀🚀 2023-10: AvatarPoser serves as a state-of-the-art baseline on the Ego-Exo4D dataset (CVPR 2024). Ego-Exo4D is a large-scale multimodal dataset designed for multiple tasks based on egocentric videos recorded using Meta's Aria glasses.
📢📢 2023-08: We release our follow-up work EgoPoser for egocentric inside-out body tracking on arXiv.
📢📢 2022-10: AvatarPoser was also honored as one of eleven ECCV 2022 demonstrations. Videos are available on the project page.
🔥🔥 2022-07: AvatarPoser is accepted by ECCV 2022, and our code is publicly available! Ours is the first method that generates diverse full-body motions using only head and hand tracking signals. We now have full-body avatars in the Metaverse!
Today's Mixed Reality head-mounted displays track the user's head pose in world space as well as the user's hands for interaction in both Augmented Reality and Virtual Reality scenarios. While this is adequate to support user input, it unfortunately limits users' virtual representations to just their upper bodies. Current systems thus resort to floating avatars, whose limitation is particularly evident in collaborative settings. To estimate full-body poses from the sparse input sources, prior work has incorporated additional trackers and sensors at the pelvis or lower body, which increases setup complexity and limits practical application in mobile settings. In this paper, we present AvatarPoser, the first learning-based method that predicts full-body poses in world coordinates using only motion input from the user's head and hands. Our method builds on a Transformer encoder to extract deep features from the input signals and decouples global motion from the learned local joint orientations to guide pose estimation. To obtain accurate full-body motions that resemble motion capture animations, we refine the arm joints' positions using an optimization routine with inverse kinematics to match the original tracking input. In our evaluation, AvatarPoser achieved new state-of-the-art results on large motion capture datasets (AMASS). At the same time, our method's inference speed supports real-time operation, providing a practical interface to support holistic avatar control and representation for Metaverse applications.
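The following is a conceptual PyTorch sketch of this idea, not the official implementation: a Transformer encoder consumes a short window of sparse head/hand tracking signals and predicts local joint rotations together with a decoupled global orientation. The layer sizes, window length, and the 18-dimensional input encoding (three devices × 6-DoF) are illustrative assumptions.

```python
# Conceptual sketch (not the official AvatarPoser implementation): a Transformer
# encoder maps a window of sparse head/hand tracking signals to local joint
# rotations plus a decoupled global orientation. All sizes are assumptions.
import torch
import torch.nn as nn

class SparseToFullBody(nn.Module):
    def __init__(self, in_dim=18, embed_dim=256, num_joints=22, num_layers=3):
        super().__init__()
        self.embed = nn.Linear(in_dim, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Separate heads: local joint rotations (axis-angle) and global orientation.
        self.local_head = nn.Linear(embed_dim, (num_joints - 1) * 3)
        self.global_head = nn.Linear(embed_dim, 3)

    def forward(self, x):                        # x: (batch, window, in_dim)
        h = self.encoder(self.embed(x))[:, -1]   # feature of the latest frame
        return self.local_head(h), self.global_head(h)

model = SparseToFullBody()
window = torch.randn(2, 40, 18)                  # 2 sequences, 40-frame window
local_rot, global_rot = model(window)
print(local_rot.shape, global_rot.shape)         # torch.Size([2, 63]) torch.Size([2, 3])
```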
- Please download the datasets BMLrub, CMU, and HDM05 from AMASS (a quick loading sanity check is sketched after this list).
- Download the required body models and place them in the `support_data/body_models` directory of this repository. For the SMPL+H body model, download it from http://mano.is.tue.mpg.de/ (please use the AMASS version of the model with DMPL blendshapes). You can obtain the dynamic shape blendshapes (DMPLs) from http://smpl.is.tue.mpg.de.
- (Optional) If you want a new random data split, run `generate_split.py`.
- Run `prepare_data.py` to preprocess the input data for faster training. The data split for training and testing used in our paper is stored under the folder `data_split`.
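As a quick sanity check after downloading, the snippet below shows how a single AMASS SMPL+H sequence can be inspected with NumPy. The file name is hypothetical, and the key names (`poses`, `trans`, `betas`, `mocap_framerate`) reflect the typical AMASS format rather than anything specific to this repository.

```python
# Minimal sketch (not part of the official pipeline) for inspecting one AMASS
# sequence, assuming the typical SMPL+H .npz key names; adjust the path to your
# own download location.
import numpy as np

seq = np.load("BMLrub/rub001/0001_treadmill_norm_poses.npz")  # hypothetical file name
poses = seq["poses"]            # (N, 156) axis-angle: 3 root + 63 body + 90 hand params
trans = seq["trans"]            # (N, 3) global root translation
betas = seq["betas"]            # (16,) body shape coefficients
fps = float(seq["mocap_framerate"])

print(f"{poses.shape[0]} frames at {fps:.0f} fps, "
      f"pose dims: {poses.shape[1]}, shape dims: {betas.shape[0]}")
```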
For training, please run:
python main_train_avatarposer.py -opt options/train_avatarposer.json
For testing, please run:
python main_test_avatarposer.py
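For reference, the snippet below sketches the mean per-joint position error (MPJPE), a standard metric for this task. The official evaluation script may differ in joint selection, alignment, and reported units, so treat this as an illustration only.

```python
# Hedged sketch of the standard MPJPE metric; not the repository's evaluation code.
import numpy as np

def mpjpe_cm(pred_joints, gt_joints):
    """pred_joints, gt_joints: (frames, joints, 3) arrays in meters."""
    per_joint = np.linalg.norm(pred_joints - gt_joints, axis=-1)  # (frames, joints)
    return 100.0 * per_joint.mean()  # mean Euclidean error, converted to cm

# Example with random stand-in data:
pred = np.random.rand(120, 22, 3)
gt = np.random.rand(120, 22, 3)
print(f"MPJPE: {mpjpe_cm(pred, gt):.2f} cm")
```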
Click Pretrained Models to download our pretrained model for AvatarPoser, and put it into the `model_zoo` directory.
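A minimal sketch for inspecting the downloaded checkpoint is shown below; the file name inside `model_zoo` and the state-dict layout are assumptions, so adapt them to the released archive and to the network class defined in this repository.

```python
# Hedged sketch: load the downloaded checkpoint and list its parameter tensors.
import torch

ckpt_path = "model_zoo/avatarposer.pth"          # hypothetical file name
state = torch.load(ckpt_path, map_location="cpu")
state_dict = state.get("state_dict", state)      # handle either raw or wrapped dicts
print(f"Loaded {len(state_dict)} parameter tensors from {ckpt_path}")
```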
If you find our paper or code useful, please cite our work:
@inproceedings{jiang2022avatarposer,
title={AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing},
author={Jiang, Jiaxi and Streli, Paul and Qiu, Huajian and Fender, Andreas and Laich, Larissa and Snape, Patrick and Holz, Christian},
booktitle={Proceedings of European Conference on Computer Vision},
year={2022},
organization={Springer}
}
This project is released under the MIT license. Our network training code builds on the frameworks of FBCNN and KAIR.