English | 简体中文
XRMoCap is an open-source, PyTorch-based codebase for multi-view motion capture. It is part of the OpenXRLab project.
If you are interested in single-view motion capture, please refer to mmhuman3d for more details.
Demo video: `github_demo_lq264.mp4`
A detailed introduction can be found in introduction.md.
**Support popular multi-view motion capture methods for single person and multiple people**

XRMoCap reimplements state-of-the-art (SOTA) multi-view motion capture methods, ranging from single person to multiple people. It supports an arbitrary number (at least two) of calibrated cameras and provides effective strategies to automatically select cameras.
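Lifting 2D detections from several calibrated views to a 3D point comes down to triangulation. The following is a minimal, self-contained sketch of linear (DLT) triangulation with NumPy; it illustrates the idea only and is not XRMoCap's actual API:

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Triangulate one 3D point from N >= 2 calibrated views (linear DLT).

    proj_mats: list of (3, 4) camera projection matrices.
    points_2d: (N, 2) array of the point's pixel coordinates per view.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the
        # homogeneous 3D point X: u * (P[2] @ X) = P[0] @ X, etc.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest
    # singular value.
    _, _, vh = np.linalg.svd(A)
    X = vh[-1]
    return X[:3] / X[3]

# Toy setup: three translated cameras observing a known point.
rng = np.random.default_rng(0)
X_true = np.array([0.1, -0.2, 3.0])
proj_mats, points_2d = [], []
for _ in range(3):
    P = np.hstack([np.eye(3), rng.normal(scale=0.5, size=(3, 1))])
    proj_mats.append(P)
    x = P @ np.append(X_true, 1.0)       # project to the image plane
    points_2d.append(x[:2] / x[2])
X_hat = triangulate_point(proj_mats, np.array(points_2d))
```

With noise-free observations the linear solution recovers the 3D point exactly; with real detections, a point selector would first discard views with large reprojection error.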
**Support keypoint-based and parametric-human-model-based multi-view motion capture algorithms**

XRMoCap supports two mainstream motion representations, keypoints3d and the SMPL(-X) model, and provides tools for conversion and optimization between them.
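As an illustration of what keypoint convention conversion involves, the toy sketch below remaps 3D keypoints from one joint-naming convention to another. Both conventions here are hypothetical and heavily abridged; XRMoCap's real conventions and conversion tools are richer than this:

```python
import numpy as np

# Hypothetical mini-conventions; real conventions (COCO, SMPL, ...)
# define many more joints.
SRC_NAMES = ['nose', 'left_shoulder', 'right_shoulder',
             'left_hip', 'right_hip']
DST_NAMES = ['left_shoulder', 'right_shoulder', 'pelvis']

def convert_keypoints(kps3d, src_names, dst_names):
    """Map (J_src, 3) keypoints onto a destination convention.

    Joints missing from the source convention are filled with NaN so
    that downstream code can mask them out.
    """
    out = np.full((len(dst_names), 3), np.nan)
    for i, name in enumerate(dst_names):
        if name in src_names:
            out[i] = kps3d[src_names.index(name)]
    return out

kps3d = np.arange(15, dtype=float).reshape(5, 3)
converted = convert_keypoints(kps3d, SRC_NAMES, DST_NAMES)
```

Carrying a per-joint validity mask alongside the coordinates is what makes lossless round-trips between conventions possible.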
**Integrate optimization-based and learning-based methods into one modular framework**

XRMoCap decomposes the framework into several components; on top of these, optimization-based and learning-based methods are integrated into a single framework. Users can easily prototype a customized multi-view mocap pipeline by selecting different components in configs.
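A minimal sketch of the registry/config pattern such modular frameworks typically use, where a nested `dict` config selects and wires components together. All class names and config keys here are hypothetical, not XRMoCap's actual components:

```python
# A tiny registry: classes register themselves under their name,
# and configs refer to them by that name via a 'type' key.
REGISTRY = {}

def register(cls):
    REGISTRY[cls.__name__] = cls
    return cls

@register
class ReprojectionErrorPointSelector:
    """Hypothetical component that filters 2D points by error."""
    def __init__(self, threshold=10.0):
        self.threshold = threshold

@register
class DLTTriangulator:
    """Hypothetical component that triangulates selected points."""
    def __init__(self, point_selector=None):
        self.point_selector = point_selector

def build(cfg):
    """Recursively build a component from a {'type': ..., **kwargs} dict."""
    cfg = dict(cfg)
    cls = REGISTRY[cfg.pop('type')]
    kwargs = {k: build(v) if isinstance(v, dict) and 'type' in v else v
              for k, v in cfg.items()}
    return cls(**kwargs)

triangulator_cfg = dict(
    type='DLTTriangulator',
    point_selector=dict(type='ReprojectionErrorPointSelector',
                        threshold=5.0),
)
triangulator = build(triangulator_cfg)
```

Swapping one component for another is then a one-line config change rather than a code change, which is what makes prototyping a new pipeline cheap.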
- 2022-12-21: XRMoCap v0.7.0 is released. Major updates include:
  - Add `mview_mperson_end2end_estimator` for the learning-based method
  - Add SMPL-X support and allow `smpl_data` initialization in `mview_sperson_smpl_estimator`
  - Add multiple optimizers, detailed joint weights and priors, and gradient clipping for better SMPLify results
  - Add `mediapipe_estimator` for human keypoints2d perception
- 2022-10-14: XRMoCap v0.6.0 is released. Major updates include:
  - Add 4D Association Graph, the first Python implementation of this algorithm
  - Add multi-view multi-person top-down SMPL estimation
  - Add a reprojection-error point selector
- 2022-09-01: XRMoCap v0.5.0 is released. Major updates include:
  - Support the HuMMan Mocap toolchain for multi-view single-person SMPL estimation
  - Reproduce MvP, a deep-learning-based SOTA method for multi-view multi-human 3D pose estimation
  - Reproduce MVPose (single frame) and MVPose (temporal tracking and filtering), two optimization-based methods for multi-view multi-human 3D pose estimation
  - Support SMPLify, SMPLifyX, SMPLifyD and SMPLifyXD
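As a rough intuition for what the SMPLify family of methods does, the toy sketch below minimizes a 3D keypoint data term by gradient descent, fitting only a global scale and translation of a template skeleton. The real SMPLify variants optimize SMPL(-X) pose and shape parameters with joint-angle and shape priors on top of such a data term; everything below is a simplified illustration, not the library's implementation:

```python
import numpy as np

# A 3-joint template "skeleton" and a target produced by scaling and
# translating it; the fit should recover scale 2.0 and the offset.
template = np.array([[0.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.5, 1.5, 0.0]])
target = 2.0 * template + np.array([1.0, -0.5, 0.3])

scale, trans = 1.0, np.zeros(3)
lr = 0.1
for _ in range(500):
    pred = scale * template + trans
    resid = pred - target                      # (J, 3) data-term residuals
    # Analytic gradients of the squared-error loss w.r.t. each parameter.
    grad_scale = 2.0 * np.sum(resid * template)
    grad_trans = 2.0 * resid.sum(axis=0)
    scale -= lr * grad_scale / template.size
    trans -= lr * grad_trans / len(template)
```

In the full problem the parameters are joint rotations and shape coefficients rather than a single scale, so priors and robust losses are needed to keep the optimization well-behaved.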
More details can be found in benchmark.md.
Supported methods:
- SMPLify (ECCV'2016)
- SMPLify-X (CVPR'2019)
- MVPose (Single frame) (CVPR'2019)
- MVPose (Temporal tracking and filtering) (T-PAMI'2021)
- Shape-aware 3D Pose Optimization (ICCV'2019)
- MvP (NeurIPS'2021)
- HuMMan MoCap (ECCV'2022)
- 4D Association Graph (CVPR'2020)
Supported datasets:
- Campus (CVPR'2014)
- Shelf (CVPR'2014)
- CMU Panoptic (ICCV'2015)
- 4D Association (CVPR'2020)
Please see getting_started.md for the basic usage of XRMoCap.
The license of our codebase is Apache-2.0. Note that this license applies only to the code in our library; its dependencies are separate and individually licensed. We would like to pay tribute to the open-source implementations we rely on. Please be aware that using the content of dependencies may affect the license of our codebase. Refer to LICENSE for the full license text.
If you find this project useful in your research, please consider citing it:
    @misc{xrmocap,
        title={OpenXRLab Multi-view Motion Capture Toolbox and Benchmark},
        author={XRMoCap Contributors},
        howpublished={\url{https://github.com/openxrlab/xrmocap}},
        year={2022}
    }
We appreciate all contributions to improving XRMoCap. Please refer to CONTRIBUTING.md for the contributing guidelines.
XRMoCap is an open-source project contributed to by researchers and engineers from both academia and industry. We appreciate all the contributors who implement their methods or add new features, as well as the users who give valuable feedback. We hope that the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new models.
- XRPrimer: OpenXRLab foundational library for XR-related algorithms.
- XRSLAM: OpenXRLab Visual-inertial SLAM Toolbox and Benchmark.
- XRSfM: OpenXRLab Structure-from-Motion Toolbox and Benchmark.
- XRLocalization: OpenXRLab Visual Localization Toolbox and Server.
- XRMoCap: OpenXRLab Multi-view Motion Capture Toolbox and Benchmark.
- XRMoGen: OpenXRLab Human Motion Generation Toolbox and Benchmark.
- XRNeRF: OpenXRLab Neural Radiance Field (NeRF) Toolbox and Benchmark.