BMW-YOLOv3-Inference-API-GPU
This is a repository for an object detection inference API using the YOLOv3 Darknet framework.
This repo is based on AlexeyAB's darknet repository.
The inference REST API works on GPU. It is supported only on Linux operating systems.
Models trained using our YOLOv3 training repository can be deployed in this API. Several object detection models can be loaded and used at the same time.
Prerequisites:
- Ubuntu 18.04
- NVIDIA Drivers (410.x or higher)
- Docker CE latest stable release
- NVIDIA Docker 2
To check if you have docker-ce installed:
```sh
docker --version
```
To check if you have nvidia-docker installed:
```sh
nvidia-docker --version
```
To check your NVIDIA driver version, open your terminal and run:
```sh
nvidia-smi
```
Use the following command to install Docker on Ubuntu:
```sh
chmod +x install_prerequisites.sh && source install_prerequisites.sh
```
Install NVIDIA Drivers (410.x or higher) and NVIDIA Docker 2 for GPU support by following the official documentation.
To build the project, run the following command from the project's root directory:
```sh
sudo docker build -t yolov3_inference_api_gpu -f ./docker/dockerfile .
```
If you are building behind a proxy, pass your proxy through the build arguments (fill your proxy address into the empty quotes):
```sh
sudo docker build --build-arg http_proxy='' --build-arg https_proxy='' -t yolov3_inference_api_gpu -f ./docker/dockerfile .
```
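As an optional sanity check (not part of the original steps), you can confirm the image was built:
```sh
sudo docker images | grep yolov3_inference_api_gpu
```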
To run the API, go to the project's root directory and run the following:
```sh
sudo NV_GPU=0 nvidia-docker run -itv $(pwd)/models:/models -p <docker_host_port>:1234 yolov3_inference_api_gpu
```
The <docker_host_port> can be any unique port of your choice.
The API will start automatically, and the service will listen for HTTP requests on the chosen port.
NV_GPU defines which GPU the API runs on. If you want the API to run on multiple GPUs, enter multiple numbers separated by commas (NV_GPU=0,1 for example).
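For example, to expose the API on host port 4343 (an arbitrary choice) and run on GPU 0:
```sh
sudo NV_GPU=0 nvidia-docker run -itv $(pwd)/models:/models -p 4343:1234 yolov3_inference_api_gpu
```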
To see all available endpoints, open your favorite browser and navigate to:
```
http://<machine_IP>:<docker_host_port>/docs
```
Note: the 'predict_batch' endpoint is not shown on Swagger, since a list-of-files input is not yet supported there.
P.S.: If you are using custom endpoints like /load, /detect, and /get_labels, you should always call the /load endpoint first, and only then use /detect or /get_labels.
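For example, loading all models first could look like this (host and port placeholders as above; the exact request bodies for /detect and /get_labels are shown on the /docs page):
```sh
curl http://<machine_IP>:<docker_host_port>/load
```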
- /load (GET): Loads all available models and returns every model with its hashed value. Loaded models are stored and aren't loaded again.
- /detect (POST): Performs inference on a specified model and image, and returns the bounding boxes.
- /get_labels (POST): Returns all of the specified model's labels with their hashed values.
- /models/{model_name}/predict_image (POST): Performs inference on a specified model and image, draws the bounding boxes on the image, and returns the resulting image as the response.
- /models (GET): Lists all available models.
- /models/{model_name}/load (GET): Loads the specified model. Loaded models are stored and aren't loaded again.
- /models/{model_name}/predict (POST): Performs inference on a specified model and image, and returns the bounding boxes.
- /models/{model_name}/labels (GET): Returns all of the specified model's labels.
- /models/{model_name}/config (GET): Returns the specified model's configuration.
- /models/{model_name}/predict_batch (POST): Performs inference on a specified model and a list of images, and returns the bounding boxes.
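As an illustration, a single-image request to the per-model predict endpoint could look like the following sketch (the model name, image path, and multipart field name input_data are assumptions; check the /docs page for the exact request schema):
```sh
curl -X POST "http://<machine_IP>:<docker_host_port>/models/my_model/predict" \
     -F "input_data=@/path/to/image.jpg"
```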
The folder "models" contains subfolders of all the models to be loaded. Inside each subfolder there should be a:
Cfg file (ends with .cfg): contains the configuration of the model
data file: contains number of classes and names file path
Weights file (ends with .weights)
Names file (ends with .names) : contains the names of the classes
Config.json (This is a json file containing information about the model)
{ "inference_engine_name": "yolov3_darknet_detection", "detection_threshold": 0.6, "nms_threshold": 0.45, "hier_threshold": 0.5, "framework": "yolo", "type": "detection", "network": "network_name" }
P.S.:
- detection_threshold, nms_threshold, and hier_threshold values should be between 0 and 1.
- You can change the detection_threshold, nms_threshold, and hier_threshold values while the API is running.
- The API will only return bounding boxes with a detection score higher than the detection_threshold value. A high detection_threshold restricts the output to the most confident predictions; for example, with detection_threshold set to 0.6, a box scored 0.45 is dropped from the response.
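Because the models folder is mounted into the container, editing a model's Config.json on the host is presumably how these values are changed at runtime (an assumption; the mechanism is not spelled out above). The active configuration can then be inspected through the config endpoint (model name hypothetical):
```sh
curl http://<machine_IP>:<docker_host_port>/models/my_model/config
```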
Benchmark results (inference time per image):

| Network \ Hardware | Intel Xeon CPU 2.3 GHz (Windows) | Intel Xeon CPU 2.3 GHz (Ubuntu) | Intel Core i9-7900 3.3 GHz (Ubuntu) | GeForce GTX 1080 (Ubuntu) |
|---|---|---|---|---|
| pascalvoc_dataset | 0.885 seconds/image | 0.793 seconds/image | 0.295 seconds/image | 0.0592 seconds/image |
Antoine Charbel, inmind.ai , Beirut, Lebanon
Charbel El Achkar, Beirut, Lebanon