For synchronous validation during training, run: CUDA_VISIBLE_DEVICES=<DEVICES_ID> python val_per_epoch.py
To validate a specific checkpoint, run: python val_prune_oneepoch.py --labels <COCO_HOME>/annotations/person_keypoints_val2017.json --images-folder <COCO_HOME>/val2017 --checkpoint-path <CHECKPOINT>
Demo
For a simple demo, run: python demo.py --checkpoint-path ./fine-tuned_models/<CHECKPOINT> --images <YOUR_IMAGE>
Pruned model
We provide two pruned models with different compression rates: ./pruned_models/0.3.pth.tar (15.92% FLOPs reduction) and ./pruned_models/0.8.pth.tar (25.6% FLOPs reduction).
Fine-tuned model
The model fine-tuned from the pruned checkpoint ./pruned_models/0.3.pth.tar is available in ./fine-tuned_models/.
Unpruned pre-trained model
The model expects a normalized image (mean=[128, 128, 128], scale=[1/256, 1/256, 1/256]) in planar BGR format. The model pre-trained on COCO is available at ./pre-trained_models/checkpoint_iter_370000.pth.tar; it achieves 40% AP on the COCO validation set (38.6% AP on the val subset).
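The normalization above can be sketched as follows. This is an illustrative helper (the function name `preprocess` is not from the repo), assuming the input is an HWC uint8 BGR image as loaded by OpenCV:

```python
import numpy as np

def preprocess(img_bgr):
    """Normalize an HWC uint8 BGR image for the network:
    subtract the per-channel mean of 128, scale by 1/256,
    then transpose to planar (channel-first) BGR layout."""
    x = (img_bgr.astype(np.float32) - 128.0) / 256.0
    return x.transpose(2, 0, 1)  # HWC -> CHW (planar BGR)

# Example: a white 4x4 image maps to (255 - 128) / 256 ~= 0.496 everywhere.
img = np.full((4, 4, 3), 255, dtype=np.uint8)
out = preprocess(img)
print(out.shape)  # (3, 4, 4)
```

The resulting array can then be fed to the model (after adding a batch dimension, e.g. with `np.expand_dims(out, 0)`).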