
YOLO with Core ML and MPSNNGraph

This is the source code for my blog post YOLO: Core ML versus MPSNNGraph.

YOLO is an object detection network. It detects multiple objects in an image and puts bounding boxes around them. Read my other blog post about YOLO to learn more about how it works.

(Image: YOLO.jpg)

Previously, I implemented YOLO in Metal using the Forge library. Since then Apple released Core ML and MPSNNGraph as part of the iOS 11 beta. So I figured, why not try to get YOLO running on these two other technology stacks too?

In this repo you'll find:

  • TinyYOLO-CoreML: A demo app that runs the Tiny YOLO neural network on Core ML.

  • TinyYOLO-NNGraph: The same demo app but this time it uses the lower-level graph API from Metal Performance Shaders.

  • Convert: The scripts needed to convert the original DarkNet YOLO model to Core ML and MPS format.

To run the app, just open the xcodeproj file in Xcode 9 or later, and run it on a device running iOS 11 or later.

The reported "elapsed" time is how long it takes the YOLO neural net to process a single image. The FPS is the actual throughput achieved by the app.
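For a concrete sense of what that measurement means, here is a minimal sketch of timing a single inference. It assumes `TinyYOLO` is the Xcode-generated Core ML class with a CVPixelBuffer input named "image"; the actual measurement code in this repo's view controllers may differ in detail.

```swift
import CoreML
import QuartzCore

// A sketch of measuring one inference, assuming `TinyYOLO` is the
// Xcode-generated Core ML class with a CVPixelBuffer input named "image".
func timedPrediction(model: TinyYOLO, pixelBuffer: CVPixelBuffer) {
    let startTime = CACurrentMediaTime()
    _ = try? model.prediction(image: pixelBuffer)
    let elapsed = CACurrentMediaTime() - startTime
    // One inference took `elapsed` seconds; real throughput (FPS) also pays
    // for capture, queueing, and drawing, so it is lower than 1/elapsed.
    print(String(format: "elapsed %.5f s (%.1f inferences/s max)", elapsed, 1 / elapsed))
}
```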

NOTE: Running these kinds of neural networks eats up a lot of battery power. To measure the maximum speed of the model, the setUpCamera() method in ViewController.swift configures the camera to run at 240 FPS, if available. In a real app, you'd use at most 30 FPS and possibly limit the number of times per second it runs the neural net to 15 or fewer (i.e. only process every other frame).
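One simple way to implement that kind of throttling is to skip frames in the capture delegate. The sketch below assumes frames arrive through the usual AVCaptureVideoDataOutputSampleBufferDelegate callback; the helper `runNeuralNet(on:)` is hypothetical and stands in for the app's inference call.

```swift
import AVFoundation

// A sketch of processing every other camera frame; `runNeuralNet(on:)`
// is a hypothetical stand-in for the app's inference code.
class FrameThrottler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var frameCounter = 0

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        frameCounter += 1
        // With the camera at 30 FPS, skipping odd frames caps inference at ~15/s.
        guard frameCounter % 2 == 0,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        runNeuralNet(on: pixelBuffer)
    }

    private func runNeuralNet(on pixelBuffer: CVPixelBuffer) {
        // Hand the frame to Core ML / MPSNNGraph here.
    }
}
```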

Tip: Also check out this repo for YOLO v3. It works the same as this repo, but uses the full version of YOLO v3!

iOS 12 and VNRecognizedObjectObservation

The code in my blog post and this repo shows how to take the MLMultiArray output from TinyYOLO and interpret it in your app. That was the only way to do it on iOS 11, but as of iOS 12 there is an easier solution.
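The core of that decoding step looks roughly like the sketch below. It is a simplified version under the usual Tiny YOLO v2 assumptions: a 13×13 grid, 5 anchor boxes per cell, 20 classes, a (125, 13, 13) MLMultiArray of doubles, and a 416×416 input image. The 0.3 threshold is illustrative, and class scores and non-maximum suppression are left out; see the blog post for the full version.

```swift
import CoreML
import Foundation

// Simplified Tiny YOLO v2 decoding: 13x13 grid, 5 anchors, 20 classes,
// (125, 13, 13) output of doubles. Class scores and NMS omitted.
func sigmoid(_ x: Double) -> Double { 1 / (1 + exp(-x)) }

func decodeBoxes(features: MLMultiArray, anchors: [Double]) {
    let gridSize = 13, boxesPerCell = 5, numClasses = 20
    let blockSize = 32.0                     // 416-pixel input / 13 grid cells
    let channelStride = features.strides[0].intValue
    let rowStride = features.strides[1].intValue
    let colStride = features.strides[2].intValue
    let ptr = features.dataPointer.assumingMemoryBound(to: Double.self)

    for cy in 0..<gridSize {
        for cx in 0..<gridSize {
            for b in 0..<boxesPerCell {
                // Each box occupies (numClasses + 5) consecutive channels:
                // tx, ty, tw, th, confidence, then the 20 class scores.
                let channel = b * (numClasses + 5)
                func value(_ c: Int) -> Double {
                    ptr[(channel + c) * channelStride + cy * rowStride + cx * colStride]
                }
                // Center = cell offset plus sigmoid-squashed prediction, in pixels;
                // width/height scale the anchor dimensions exponentially.
                let x = (Double(cx) + sigmoid(value(0))) * blockSize
                let y = (Double(cy) + sigmoid(value(1))) * blockSize
                let w = exp(value(2)) * anchors[2*b]     * blockSize
                let h = exp(value(3)) * anchors[2*b + 1] * blockSize
                let confidence = sigmoid(value(4))
                if confidence > 0.3 {
                    print("box at (\(x), \(y)), size \(w) x \(h), confidence \(confidence)")
                }
            }
        }
    }
}
```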

The Vision framework in iOS 12 directly supports YOLO-like models. The big advantage is that these do the bounding box decoding and non-maximum suppression (NMS) inside the Core ML model. All you need to do is pass in the image and Vision will give you the results as one or more VNRecognizedObjectObservation objects. No more messing around with MLMultiArrays.
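A minimal sketch of that iOS 12 path is shown below. It assumes `model` is an MLModel whose pipeline already includes the decoding and NMS steps (for example, one produced by Turi Create); the function and variable names here are illustrative.

```swift
import Vision
import CoreML

// Run an object detection model through Vision and read back the
// already-decoded VNRecognizedObjectObservation results.
func detectObjects(in pixelBuffer: CVPixelBuffer, using model: MLModel) throws {
    let visionModel = try VNCoreMLModel(for: model)
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in observations {
            // Each observation carries a decoded bounding box (in normalized
            // coordinates) plus a ranked list of class labels.
            if let best = observation.labels.first {
                print("\(best.identifier) \(best.confidence): \(observation.boundingBox)")
            }
        }
    }
    request.imageCropAndScaleOption = .scaleFill
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
}
```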

It's also really easy to train such models using Turi Create. It combines TinyYOLO v2 and the new NonMaximumSuppression model type into a so-called pipeline model.

The good news is that this new Vision API also supports other object detection models!

I added a chapter to my book Core ML Survival Guide that shows exactly how this works. In the book you’ll see how to add this same functionality to MobileNetV2 + SSDLite, so that you get VNRecognizedObjectObservation predictions for that model too. The book has lots of other great tips on using Core ML, so check it out!
