AlphaPose

2019-12-03

AlphaPose is an accurate multi-person pose estimator, and the first real-time open-source system to achieve 70+ mAP (72.3 mAP) on the COCO dataset and 80+ mAP (82.1 mAP) on the MPII dataset. To match poses that correspond to the same person across frames, we also provide an efficient online pose tracker called Pose Flow. It is the first open-source online pose tracker to achieve both 60+ mAP (66.5 mAP) and 50+ MOTA (58.3 MOTA) on the PoseTrack Challenge dataset.

News!

  • Apr 2019: MXNet version of AlphaPose is released! It runs at 23 fps on the COCO validation set using a single Nvidia 1080Ti GPU!

  • Feb 2019: CrowdPose is now integrated into AlphaPose!

  • Dec 2018: General version of PoseFlow is released! 3X faster, with support for visualizing pose tracking results!

  • Sep 2018: PyTorch version of AlphaPose is released! It runs at 20 fps on the COCO validation set (4.6 people per image on average) and achieves 71 mAP using a single Nvidia 1080Ti GPU!

Results

Pose Estimation

Results on COCO test-dev 2015:

| Method | AP @0.5:0.95 | AP @0.5 | AP @0.75 | AP medium | AP large |
|---|---|---|---|---|---|
| OpenPose (CMU-Pose) | 61.8 | 84.9 | 67.5 | 57.1 | 68.2 |
| Detectron (Mask R-CNN) | 67.0 | 88.0 | 73.1 | 62.2 | 75.6 |
| AlphaPose | 72.3 | 89.2 | 79.1 | 69.0 | 78.6 |

Results on MPII full test set:

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Ave |
|---|---|---|---|---|---|---|---|---|
| OpenPose (CMU-Pose) | 91.2 | 87.6 | 77.7 | 66.8 | 75.4 | 68.9 | 61.7 | 75.6 |
| Newell & Deng | 92.1 | 89.3 | 78.9 | 69.8 | 76.2 | 71.6 | 64.7 | 77.5 |
| AlphaPose | 91.3 | 90.5 | 84.0 | 76.4 | 80.3 | 79.9 | 72.4 | 82.1 |

Pose Tracking


Results on PoseTrack Challenge validation set:

  1. Task2: Multi-Person Pose Estimation (mAP)

| Method | Head mAP | Shoulder mAP | Elbow mAP | Wrist mAP | Hip mAP | Knee mAP | Ankle mAP | Total mAP |
|---|---|---|---|---|---|---|---|---|
| Detect-and-Track (FAIR) | 67.5 | 70.2 | 62 | 51.7 | 60.7 | 58.7 | 49.8 | 60.6 |
| AlphaPose | 66.7 | 73.3 | 68.3 | 61.1 | 67.5 | 67.0 | 61.3 | 66.5 |
  2. Task3: Pose Tracking (MOTA)

| Method | Head MOTA | Shoulder MOTA | Elbow MOTA | Wrist MOTA | Hip MOTA | Knee MOTA | Ankle MOTA | Total MOTA | Total MOTP | Speed (FPS) |
|---|---|---|---|---|---|---|---|---|---|---|
| Detect-and-Track (FAIR) | 61.7 | 65.5 | 57.3 | 45.7 | 54.3 | 53.1 | 45.7 | 55.2 | 61.5 | Unknown |
| PoseFlow (DeepMatch) | 59.8 | 67.0 | 59.8 | 51.6 | 60.0 | 58.4 | 50.5 | 58.3 | 67.8 | 8 |
| PoseFlow (OrbMatch) | 59.0 | 66.8 | 60.0 | 51.8 | 59.4 | 58.4 | 50.3 | 58.0 | 62.2 | 24 |

Note: Please read PoseFlow/README.md for details.

CrowdPose


Results on CrowdPose Validation:

Compare with state-of-the-art methods

| Method | AP @0.5:0.95 | AP @0.5 | AP @0.75 | AR @0.5:0.95 | AR @0.5 | AR @0.75 |
|---|---|---|---|---|---|---|
| Detectron (Mask R-CNN) | 57.2 | 83.5 | 60.3 | 65.9 | 89.3 | 69.4 |
| Simple Pose (Xiao et al.) | 60.8 | 81.4 | 65.7 | 67.3 | 86.3 | 71.8 |
| Ours | 66.0 | 84.2 | 71.5 | 72.7 | 89.5 | 77.5 |

Compare with open-source systems

| Method | AP @Easy | AP @Medium | AP @Hard | FPS |
|---|---|---|---|---|
| OpenPose (CMU-Pose) | 62.7 | 48.7 | 32.3 | 5.3 |
| Detectron (Mask R-CNN) | 69.4 | 57.9 | 45.8 | 2.9 |
| Ours (PyTorch branch) | 75.5 | 66.3 | 57.4 | 10.1 |

Note: Please read doc/CrowdPose.md for details.

Installation

Note: For new users or users who are not familiar with TensorFlow or Torch, we suggest using the PyTorch version, since it is more user-friendly and runs faster.

  1. Get the code and build related modules.

git clone https://github.com/MVIG-SJTU/AlphaPose.git
cd AlphaPose/human-detection/lib/
make clean
make
cd newnms/
make
cd ../../../
  2. Install Torch and TensorFlow (version >= 1.2). After that, install related dependencies by:

chmod +x install.sh
./install.sh
  3. Run fetch_models.sh to download our pre-trained models. Or download the models manually: output.zip (Google Drive | Baidu Pan), final_model.t7 (Google Drive | Baidu Pan)

chmod +x fetch_models.sh
./fetch_models.sh

Quick Start

  • Demo:  Run AlphaPose for all images in a folder and visualize the results with:

./run.sh --indir examples/demo/ --outdir examples/results/ --vis

The visualized results will be stored in examples/results/RENDER. To easily process images/video and display/save the results, please see doc/run.md. If you run into any problems, check doc/faq.md.

  • Video:  You can see our video demo here.

Output

The output format (keypoint index ordering, etc.) is described in doc/output.md.
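As a quick illustration, the per-person records can be post-processed with a few lines of Python. This is a minimal sketch only: the field names ("image_id", "score", "keypoints") and the flat (x, y, score) keypoint layout shown here are assumptions for illustration; doc/output.md is the authoritative reference for the real schema.

```python
import json

# A hypothetical two-keypoint excerpt of an AlphaPose-style result file.
# Field names and the flat [x1, y1, s1, x2, y2, s2, ...] layout are
# assumptions for illustration -- check doc/output.md for the real format.
RESULTS_JSON = """
[
  {"image_id": "demo_001.jpg",
   "score": 2.97,
   "keypoints": [125.0, 60.0, 0.91, 130.0, 58.0, 0.88]}
]
"""

def unpack_keypoints(record):
    """Split the flat keypoint list into (x, y, confidence) triples."""
    kp = record["keypoints"]
    return [(kp[i], kp[i + 1], kp[i + 2]) for i in range(0, len(kp), 3)]

results = json.loads(RESULTS_JSON)
for person in results:
    triples = unpack_keypoints(person)
    print(person["image_id"], "->", len(triples), "keypoints")
```

The same unpacking works for a full 17-keypoint COCO-style record, whose "keypoints" list would contain 51 numbers.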

Speeding Up AlphaPose

We provide a fast mode for human detection that disables multi-scale testing. You can turn it on by adding --mode fast.

And if you have multiple GPUs on your machine, or a GPU with large memory, you can speed up the pose estimation step by using multi-GPU testing or large-batch testing with:

./run.sh --indir examples/demo/ --outdir examples/results/ --gpu 0,1,2,3 --batch 5

This assumes that you have 4 GPU cards on your machine and that each card can run a batch of 5 images. Recommended batch sizes for GPUs with different amounts of memory:

GPU memory: 4GB -- batch size: 3
GPU memory: 8GB -- batch size: 6
GPU memory: 12GB -- batch size: 9
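The table above follows a simple linear rule of thumb, roughly 3 images per 4 GB of GPU memory. A minimal sketch of that rule as a helper, assuming memory use scales linearly with batch size (the real footprint also depends on image resolution and model variant, so tune --batch empirically):

```python
def recommended_batch_size(gpu_mem_gb):
    """Rough rule of thumb from the table above: about 3 images per 4 GB.

    This is only an estimate, not an official AlphaPose formula; actual
    memory use varies with input resolution and model, so treat the
    result as a starting point for tuning --batch.
    """
    return max(1, int(gpu_mem_gb * 3 // 4))

for mem_gb in (4, 8, 12):
    print(f"{mem_gb} GB -> batch size {recommended_batch_size(mem_gb)}")
```

Running it reproduces the table (4 GB -> 3, 8 GB -> 6, 12 GB -> 9) and floors the estimate at a batch of 1 for very small GPUs.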

See doc/run.md for more details.

Feedbacks

If you run into any problems, check doc/faq.md first. If that does not solve your problem, or if you find any bugs, don't hesitate to comment on GitHub or make a pull request!

Contributors

AlphaPose is based on RMPE (ICCV'17), authored by Hao-shu Fang, Shuqin Xie, Yu-Wing Tai and Cewu Lu; Cewu Lu is the corresponding author. Currently, it is developed and maintained by Hao-shu Fang, Jiefeng Li, Yuliang Xiu and Ruiheng Chang.

The main contributors are listed in doc/contributors.md.

Citation

Please cite these papers in your publications if it helps your research:

@inproceedings{fang2017rmpe,
  title={{RMPE}: Regional Multi-person Pose Estimation},
  author={Fang, Hao-Shu and Xie, Shuqin and Tai, Yu-Wing and Lu, Cewu},
  booktitle={ICCV},
  year={2017}
}

@inproceedings{xiu2018poseflow,
  title = {{Pose Flow}: Efficient Online Pose Tracking},
  author = {Xiu, Yuliang and Li, Jiefeng and Wang, Haoyu and Fang, Yinghong and Lu, Cewu},
  booktitle={BMVC},
  year = {2018}
}

License

AlphaPose is freely available for non-commercial use, and may be redistributed under these conditions. For commercial queries, please drop an e-mail at mvig.alphapose[at]gmail[dot]com and cc lucewu[at]sjtu[dot]edu[dot]cn. We will send the detailed agreement to you.

