chainer-pose-proposal-net
This is an (unofficial) implementation of Pose Proposal Networks with Chainer, including training and prediction tools.
Copyright (c) 2018 Idein Inc. & Aisin Seiki Co., Ltd. All rights reserved.
This project is licensed under the terms described in the LICENSE file.
Prior to training, let's download a dataset. You can train with either the MPII or the COCO dataset.
For simplicity, we will use the idein/chainer Docker image, which includes Chainer, ChainerCV, and other utilities along with the CUDA driver. This saves time setting up a development environment.
For more information see:
If you train with the COCO dataset, you can skip this section.
Access MPII Human Pose Dataset and jump to the Download page. Then download and extract both Images (12.9 GB) and Annotations (12.5 MB), for example at ~/work/dataset/mpii_dataset.
mpii.json
We need to decode mpii_human_pose_v1_u12_1.mat to generate mpii.json, which will be used for training and for evaluating the MPII test dataset.
$ sudo docker run --rm -v $(pwd):/work -v path/to/dataset:/mpii_dataset -w /work idein/chainer:4.5.0 python3 convert_mpii_dataset.py /mpii_dataset/mpii_human_pose_v1_u12_2/mpii_human_pose_v1_u12_1.mat /mpii_dataset/mpii.json
It will generate mpii.json at path/to/dataset, where path/to/dataset is the root directory of the MPII dataset, for example ~/work/dataset/mpii_dataset. For those who hesitate to use Docker, you may edit config.ini as necessary.
If you train with the MPII dataset, you can skip this section.
Access the COCO dataset and jump to Dataset -> Download. Then download and extract 2017 Train images [118K/18GB], 2017 Val images [5K/1GB], and 2017 Train/Val annotations [241MB], for example at ~/work/dataset/coco_dataset.
OK let's begin!
$ cat begin_train.sh
cat config.ini
docker run --rm -v $(pwd):/work -v ~/work/dataset/mpii_dataset:/mpii_dataset -v ~/work/dataset/coco_dataset:/coco_dataset --name ppn_idein -w /work idein/chainer:5.1.0 python3 train.py
$ sudo bash begin_train.sh
The optional argument --runtime=nvidia may be required in some environments.
It will train a model whose base network is MobileNetV2, using the MPII dataset located at path/to/dataset on the host machine.
If we would like to train with the COCO dataset, edit a part of config.ini as follows:
before
# parts of config.ini
[dataset]
type = mpii
after
# parts of config.ini
[dataset]
type = coco
We can choose a ResNet-based network, as the original paper does. Edit a part of config.ini as follows:
before
[model_param]
model_name = mv2
after
[model_param]
# you may also choose resnet34 or resnet50
model_name = resnet18
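These config.ini fragments can be parsed with Python's standard configparser, which is one way a training script might pick the dataset and backbone. A minimal sketch, assuming only the section and key names shown in the fragments above (the inline string stands in for the real config.ini file):

```python
import configparser

# A fragment mirroring the config.ini entries shown above.
CONFIG_TEXT = """
[dataset]
type = coco

[model_param]
model_name = resnet18
"""

config = configparser.ConfigParser()
config.read_string(CONFIG_TEXT)  # with a real file, use config.read("config.ini")

dataset_type = config.get("dataset", "type")
model_name = config.get("model_param", "model_name")

# Guard against typos: only the backbones mentioned in this README.
assert dataset_type in {"mpii", "coco"}
assert model_name in {"mv2", "resnet18", "resnet34", "resnet50"}
print(dataset_type, model_name)  # prints "coco resnet18"
```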
It's very easy; all we have to do is, for example:
$ sudo bash run_predict.sh ./trained
If you would like to configure the parameters or hide bounding boxes, edit a part of config.ini as follows:
[predict]
# If `False` is set, hide the bbox of annotations other than human instances.
visbbox = True
# detection threshold
detection_thresh = 0.15
# ignore a human instance whose number of keypoints is less than min_num_keypoints
min_num_keypoints = 1
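The [predict] values above are strings on disk, so prediction code needs to convert them to a bool, a float, and an int. configparser's typed getters do this directly; a minimal sketch, assuming the keys shown above (the inline string stands in for the real config.ini):

```python
import configparser

# A fragment mirroring the [predict] section shown above.
PREDICT_CONFIG = """
[predict]
visbbox = True
detection_thresh = 0.15
min_num_keypoints = 1
"""

config = configparser.ConfigParser()
config.read_string(PREDICT_CONFIG)

# Typed getters convert the raw strings into the types the code needs.
visbbox = config.getboolean("predict", "visbbox")
detection_thresh = config.getfloat("predict", "detection_thresh")
min_num_keypoints = config.getint("predict", "min_num_keypoints")
```

getboolean also accepts yes/no and on/off spellings, so hand-edited configs remain forgiving.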
We tested on an Ubuntu 16.04 machine with a GTX 1080(Ti) GPU.
We will build OpenCV from source to visualize the result on GUI.
$ cd docker/gpu
$ cat build.sh
docker build -t ppn .
$ sudo bash build.sh
Here is a result of ResNet18 trained with COCO, running on a laptop PC.
Set up a USB camera that OpenCV can recognize.
Run video.py
$ python video.py ./trained
or
$ sudo bash run_video.sh ./trained
To use the Static Subgraph Optimizations feature to accelerate inference speed, we should install Chainer 5.y.z and CuPy 5.y.z, e.g. 5.0.0 or 5.1.0.
Prepare a high-performance USB camera that can capture more than 60 FPS.
Run high_speed.py instead of video.py.
Don't fall off your chair in surprise :D.
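To check whether your camera and model actually keep up with 60 FPS, you can time the capture-and-inference loop yourself. A minimal stdlib-only sketch; process_frame is a placeholder standing in for the real capture plus inference step, not a function from this repository:

```python
import time

def process_frame():
    # Placeholder workload; replace with camera capture + model inference.
    time.sleep(0.001)

n_frames = 100
start = time.perf_counter()
for _ in range(n_frames):
    process_frame()
elapsed = time.perf_counter() - start

fps = n_frames / elapsed
print(f"{fps:.1f} FPS over {n_frames} frames")
```

Averaging over many frames smooths out per-frame jitter, and perf_counter is preferred over time.time for interval measurements.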
Without training, you can try our software by downloading a pre-trained model from our release page.
Implementation of Pose Proposal Networks (NotePC with e-GPU)
It runs locally on a Raspberry Pi 3 using its GPU (VideoCore IV) at almost 10 FPS.
It also runs on a Raspberry Pi Zero at 6.6 FPS.
We have released an IoT platform service named Actcast.
You can reproduce our demo and feel how fast it is on YOUR Raspberry Pi using Actcast.
It is a free-of-charge alpha release. Please give it a try! For more information see:
The Japanese page is here
Please cite the paper in your publications if it helps your research:
@InProceedings{Sekii_2018_ECCV,
  author = {Sekii, Taiki},
  title = {Pose Proposal Networks},
  booktitle = {The European Conference on Computer Vision (ECCV)},
  month = {September},
  year = {2018}
}