MNN-yolov3

Introduction

MNN demo of YOLOv3 (converted from Stronger-Yolo).

Quick Start (cpp)

  1. Install MNN following the corresponding guide.

  2. Set up an environment following Stronger-Yolo.

  3. Run v3/pb.py to convert the TensorFlow checkpoint into a portable .pb model.

  4. (Optional) Fold constants using the TF graph_transforms tool (recommended by MNN); a Python alternative is sketched after this list.

    bazel-bin/tensorflow/tools/graph_transforms/transform_graph --transforms='fold_constants(ignore_errors=true)'
  5. Convert the model (remember to build the converter tools first).

    cd {MNN dir}/tools/converter/build/
    ./MNNConvert -f TF --modelFile {MNN-yolov3 project dir}/v3/port/coco544.pb --MNNModel coco544.mnn --bizCode MNN
  6. Copy MNN-demo/yolo.cpp into {MNN dir}/demo/exec and modify {MNN dir}/demo/exec/CMakeLists.txt following MNN-demo/CMakeLists.txt.

  7. Run the C++ executable.
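
For step 4, the constant folding can also be done from Python instead of the bazel-built tool. A minimal sketch, assuming a TF 1.x install that ships the graph_transforms Python wrapper; the paths and node names below are placeholders, not the real graph names:

    import tensorflow as tf
    from tensorflow.tools.graph_transforms import TransformGraph

    # Load the frozen graph exported by v3/pb.py (example path).
    with tf.gfile.GFile('v3/port/coco544.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Replace the input/output names below with the real node names of your graph.
    folded = TransformGraph(graph_def,
                            inputs=['input/input_data'],
                            outputs=['output/boxes'],
                            transforms=['fold_constants(ignore_errors=true)'])

    with tf.gfile.GFile('v3/port/coco544_folded.pb', 'wb') as f:
        f.write(folded.SerializeToString())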

Quick Start (python) Update: 2019-9-28

  1. Install MNN-python following the corresponding guide.

  2. Set up an environment following Stronger-Yolo.

  3. Run v3/pb.py to convert the TensorFlow checkpoint into a portable .pb model.

  4. (Optional) Fold constants using the TF graph_transforms tool (recommended by MNN; see the Python sketch under the cpp Quick Start).

    bazel-bin/tensorflow/tools/graph_transforms/transform_graph --transforms='fold_constants(ignore_errors=true)'
  5. Convert the model (remember to build the converter tools first).

    mnnconvert -f TF --modelFile voc544.pb --MNNModel voc544_python.mnn
  6. A Python demo is provided in MNN-demo/demo.py; a minimal inference sketch follows.
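
This is not the actual MNN-demo/demo.py, just a minimal sketch of running the converted model with the MNN Python API. The input resolution (544), the [0, 1] normalization, and the output handling are assumptions that must match how the model was exported; OpenCV is used only for image loading and resizing:

    import numpy as np
    import cv2
    import MNN

    # Load the converted model from step 5.
    interpreter = MNN.Interpreter("voc544_python.mnn")
    session = interpreter.createSession()
    input_tensor = interpreter.getSessionInput(session)

    # Preprocess: RGB, resized to the export resolution, normalized to [0, 1] (assumed).
    image = cv2.cvtColor(cv2.imread("test.jpg"), cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (544, 544)).astype(np.float32) / 255.0
    image = image[np.newaxis, ...]  # NHWC, batch of 1

    # Copy into the session input (NHWC / Tensorflow dimension order) and run.
    tmp_input = MNN.Tensor((1, 544, 544, 3), MNN.Halide_Type_Float,
                           image, MNN.Tensor_DimensionType_Tensorflow)
    input_tensor.copyFrom(tmp_input)
    interpreter.runSession(session)

    # Raw predictions; box decoding, score filtering and NMS (as in demo.py) go here.
    output_tensor = interpreter.getSessionOutput(session)
    raw = np.array(output_tensor.getData())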

Quantitative Analysis

Note:
1. Inference time is measured with the MNN official test tool at a score threshold of 0.2; the values in parentheses (e.g. 0.7849) are the original TensorFlow results.
2. All mAP results are evaluated on the first 300 test images to save time.
3. The -quant model is quantized with the official MNN tool. The poor inference speed is due to ARM-specific optimization. Check this.

Model        | InputSize | Threads | Inference (ms) | Params | mAP (VOC)
Yolov3       | 544       | 2/4     | 112 / 75.1     | 26M    | 0.7803 (0.7849)
Yolov3       | 320       | 2/4     | 38.6 / 24.2    | 26M    | 0.7127 (0.7249)
Yolov3-quant | 320       | 2/4     | 316.2 / 225.2  | 6.7M   | 0.7082 (0.7249)

Important Notes during model converting

  1. Replace v3/model/head/build_nework with build_nework_MNN, which replaces tf.shape with a static input shape and replaces

    [:, tf.newaxis] -> tf.expand_dims // currently the strided_slice op is not well supported in MNN (see the sketch after this list).

  2. Follow this issue to remove/replace some ops.

  3. Remove the condition op related to BatchNormalization and the training flag; otherwise MNN conversion fails with "Identity's input node num. != 1".
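
A minimal illustration of the tf.expand_dims substitution from note 1, using a hypothetical 1-D tensor; the actual change lives in build_nework_MNN:

    import tensorflow as tf

    x = tf.constant([1.0, 2.0, 3.0])   # hypothetical 1-D tensor

    # Original indexing: lowered to a strided_slice op, which MNN converts poorly.
    a = x[:, tf.newaxis]               # shape (3, 1)

    # MNN-friendly equivalent: an explicit ExpandDims op with the same result.
    b = tf.expand_dims(x, axis=1)      # shape (3, 1)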

Update: 2019-9-24
There is no need to adjust the ops one by one. Just follow this to replace nn.batch_normalization with nn.fused_batch_norm; after this modification MNN can also merge BN and ReLU directly into the convolution.
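
A sketch of what that substitution looks like at inference time, assuming the usual per-channel BN variables (gamma, beta, moving_mean, moving_variance) already exist; the names are illustrative, not the ones in Stronger-Yolo:

    import tensorflow as tf

    def bn_inference(x, gamma, beta, moving_mean, moving_variance, eps=1e-3):
        # Original: tf.nn.batch_normalization(x, moving_mean, moving_variance, beta, gamma, eps)
        # emits separate mul/add ops that the MNN converter has to pattern-match.

        # Replacement: a single FusedBatchNorm op, which MNN recognizes and can then
        # fold, together with the ReLU, into the preceding convolution.
        y, _, _ = tf.nn.fused_batch_norm(x, gamma, beta,
                                         mean=moving_mean, variance=moving_variance,
                                         epsilon=eps, is_training=False)
        return y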

Qualitative Comparison

  • Testing results in TensorFlow (top), MNN (middle), and an Android phone (bottom).

TODO

  • Speed analysis.

  • Model Quantization.

  • Op integration (BN, ReLU -> Convolution).

  • Android Support.

  • Channel pruning / weight sparsification ... (Update: 2019-10-26, see stronger-yolo-pytorch for more detail)

Reference

stronger-yolo

MNN

NCNN

