A Python viewer for visualizing 3D human poses and for debugging pose-related tasks. This code supplements the following paper:
Proactive Multi-Camera Collaboration for 3D Human Pose Estimation (ICLR 2023)
- Supports multiple humans and multiple cameras.
- Written in Python, easy to use and hack.
- Based on Qt, making it easy to add interactive widgets.
- Cross-platform: Windows, Linux, and macOS.
Requirements:
- python >= 3.9
- numpy
- pyqtgraph
- PyQt5
- pyopengl
- cupy (optional)
- pyav
- pims
pip install -r requirements.txt

Usage:
- Save your 3D pose data as a numpy data file (`.npz`).
- Save the 2D images captured from different views as separate video files (`.mp4`).
- Open them with the visualizer.
python -m visualize  # open the example data under /examples/seq1

3D data (.npz file)
- gt3d: GT 3D human pose sequence. Numpy array of shape `[t, max_Ngt, j, 3]`. `t`: frame id. `max_Ngt`: max number of GT humans across the whole sequence. Fill zeros for missing humans and joints. `[j, 3]`: 3D locations of the `j` joints.
- pred3d: predicted 3D human pose sequence. Numpy array of shape `[t, max_Npred, j, 3]`. `max_Npred`: max number of predicted humans. Fill zeros for missing humans and joints.
- camera (optional): camera location sequence. Numpy array of shape `[t, max_c, 5, 3]`. `max_c`: max number of cameras in a frame. More details can be found here.
- map_center (optional): center of the map, used to offset the ground plane. Defaults to `[0, 0, 0]`.
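A minimal sketch of packing these arrays into an `.npz` file with numpy. The sizes and the file name `poses.npz` are illustrative, and it assumes the viewer reads the field names listed above as the npz keys:

```python
import numpy as np

# Illustrative sizes (assumptions): 100 frames, 2 GT / 2 predicted humans,
# 4 cameras, 17 joints per skeleton.
t, max_Ngt, max_Npred, max_c, j = 100, 2, 2, 4, 17

# Fill zeros for missing humans and joints, as described above.
gt3d = np.zeros((t, max_Ngt, j, 3), dtype=np.float32)
pred3d = np.zeros((t, max_Npred, j, 3), dtype=np.float32)
camera = np.zeros((t, max_c, 5, 3), dtype=np.float32)  # optional
map_center = np.zeros(3, dtype=np.float32)             # optional

# Field names follow the list above; assumed to be the npz keys.
np.savez("poses.npz", gt3d=gt3d, pred3d=pred3d,
         camera=camera, map_center=map_center)
```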
2D view (optional)
- Video files (`.mp4`, `.mov`, `.avi`) of the same length as the 3D data.
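As a sanity check, you can confirm that each per-view video matches the 3D sequence length. A sketch using pims (listed in the requirements); the file names `view1.mp4` and `poses.npz` are hypothetical:

```python
import numpy as np
import pims

poses = np.load("poses.npz")     # hypothetical .npz from the step above
video = pims.Video("view1.mp4")  # one per-view video file

# The 2D videos should be the same length as the 3D data.
assert len(video) == poses["gt3d"].shape[0], "video/pose length mismatch"
```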
If you find this viewer helpful, please cite:
@inproceedings{ci2023proactive,
title={Proactive Multi-Camera Collaboration for 3D Human Pose Estimation},
author={Hai Ci and Mickel Liu and Xuehai Pan and Fangwei Zhong and Yizhou Wang},
booktitle={The Eleventh International Conference on Learning Representations},
year={2023},
url={https://openreview.net/forum?id=CPIy9TWFYBG}
}

Apache License, Version 2.0.
