YOLOV2-Tensorflow-2.0

YOLO V2 with TensorFlow 2.0

Here is a Jupyter notebook featuring a complete from-scratch implementation of YOLO V2 with TensorFlow 2.0:

  • Dataset pipeline with data augmentation (a minimal augmentation sketch follows this list)

  • Training from YOLO pretrained weights

  • Visualization of object detection
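
The first item, the dataset pipeline with data augmentation, can be sketched with imgaug. This is a minimal, hypothetical example (the augmenters and their parameters are assumptions, not necessarily the ones used in the notebook); it applies the same random transform to an image and its bounding boxes so the labels stay consistent:

	import imgaug.augmenters as iaa
	from imgaug.augmentables.bbs import BoundingBox, BoundingBoxesOnImage

	# Hypothetical augmentation sequence: horizontal flip plus a small translation.
	seq = iaa.Sequential([
	    iaa.Fliplr(0.5),                            # flip half of the images horizontally
	    iaa.Affine(translate_percent=(-0.1, 0.1)),  # shift by up to 10% of the image size
	])

	def augment(image, boxes):
	    """Augment an image and its ground-truth boxes, given as (x1, y1, x2, y2) tuples."""
	    bbs = BoundingBoxesOnImage(
	        [BoundingBox(x1=x1, y1=y1, x2=x2, y2=y2) for x1, y1, x2, y2 in boxes],
	        shape=image.shape)
	    image_aug, bbs_aug = seq(image=image, bounding_boxes=bbs)
	    # Drop boxes pushed outside the image and clip the rest to its borders.
	    bbs_aug = bbs_aug.remove_out_of_image().clip_out_of_image()
	    return image_aug, [(bb.x1, bb.y1, bb.x2, bb.y2) for bb in bbs_aug.bounding_boxes]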

I use this notebook to train a model to detect crops and weeds in a field. The goal is to detect crops in real time for tractor guidance and to detect weeds so they can be removed.

Original paper: YOLO9000: Better, Faster, Stronger by Joseph Redmon and Ali Farhadi.

Files

  • Yolo_V2_tf_2.ipynb : YOLO V2 implementation with TensorFlow 2.0

  • Yolo_V2_tf_eager.ipynb : older notebook, YOLO V2 implementation with TensorFlow 1.x using eager execution

Requirements

  • tensorflow 2.0

  • imgaug

  • cv2
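
To make sure the environment matches these requirements, a quick version check can be run in a notebook cell (`cv2` is the import name of the opencv-python package):

	import tensorflow as tf
	import imgaug
	import cv2

	# The main notebook expects TensorFlow 2.x.
	print('tensorflow:', tf.__version__)
	print('imgaug    :', imgaug.__version__)
	print('opencv    :', cv2.__version__)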

Before using the notebook

  • Download the pretrained weights here. Place the weights file in the notebook directory and name it yolo.weights.

  • The training requires four directories containing images and annotations (a quick check for these paths is sketched after this list):

train_image_folder/ : contains image files used during training (PNG format)

train_annot_folder/ : contains annotations in PASCAL VOC format (one XML file per image)

val_image_folder/ : contains image files used for validation

val_annot_folder/ : contains annotations in PASCAL VOC format
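
Before running the training cells, it is worth checking that the weights file and the four directories are in place. A small sketch (the paths below match the defaults used further down and are easy to adjust):

	from pathlib import Path

	# yolo.weights in the notebook directory plus the four dataset directories.
	required = [
	    Path('yolo.weights'),
	    Path('data/train/image/'),
	    Path('data/train/annotation/'),
	    Path('data/val/image/'),
	    Path('data/val/annotation/'),
	]

	for path in required:
	    print(f'{path}: {"OK" if path.exists() else "MISSING"}')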

Using the notebook

  • Define the object labels to detect (the same labels as defined in the PASCAL VOC XML files). Example:

	LABELS           = ('sugarbeet', 'weed')
  • Define the image size used in the dataset and the YOLO grid size. The image size must be the YOLO grid size * 32 (a sanity check is sketched after these settings).

	IMAGE_H, IMAGE_W = 512, 512
	GRID_H,  GRID_W  = 16, 16 # GRID size = IMAGE size / 32
  • Define the training batch size and validation batch size: they depend on the image size and the available GPU memory.

	TRAIN_BATCH_SIZE = 10
	VAL_BATCH_SIZE   = 10
  • Define the paths to the dataset directories.

	# Train and validation directories

	train_image_folder = 'data/train/image/'
	train_annot_folder = 'data/train/annotation/'
	val_image_folder = 'data/val/image/'
	val_annot_folder = 'data/val/annotation/'
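
With these settings in place, the relationship between image size and grid size can be checked, and the expected shape of the YOLO V2 output tensor derived. A small sketch (the anchor count is an assumption; the notebook defines its own anchors):

	# Darknet-19 downsamples by a factor of 32, which ties the image size to the grid size.
	assert IMAGE_H == GRID_H * 32 and IMAGE_W == GRID_W * 32

	N_ANCHORS = 5            # assumed, as in the original YOLO V2 paper
	N_CLASSES = len(LABELS)  # 2 for ('sugarbeet', 'weed')

	# Per grid cell and anchor, YOLO V2 predicts x, y, w, h, objectness and one score per class.
	output_shape = (GRID_H, GRID_W, N_ANCHORS, 5 + N_CLASSES)
	print(output_shape)      # (16, 16, 5, 7) with the values above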

That's it: just run the notebook cells to train YOLO on your own data!

Example of use

Detection results from a YOLO model trained on the sugarbeet and weed dataset (two labels).
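
As a rough illustration of the visualization step, predicted boxes can be drawn on an image with cv2. The box-list format below is hypothetical, not the notebook's actual output format:

	import cv2

	def draw_boxes(image, boxes, labels):
	    """Draw boxes given as (x1, y1, x2, y2, class_id, score) tuples on a BGR image."""
	    for x1, y1, x2, y2, class_id, score in boxes:
	        cv2.rectangle(image, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
	        caption = f'{labels[class_id]} {score:.2f}'
	        cv2.putText(image, caption, (int(x1), int(y1) - 5),
	                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
	    return image

	# Example call, reusing the LABELS tuple defined above:
	# image = draw_boxes(image, predictions, LABELS)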

Credits

Many thanks to these great repositories:

https://github.com/experiencor/keras-yolo2

https://github.com/allanzelener/YAD2K

and to this very good explanation of the YOLO V2 loss function:

https://fairyonice.github.io/Part_4_Object_Detection_with_Yolo_using_VOC_2012_data_loss.html

