kaggle-cifar10-torch7
Code for the Kaggle CIFAR-10 competition: http://www.kaggle.com/c/cifar-10
| Description | |
|---|---|
| Model | Very Deep Convolutional Networks with 3x3 kernels [1] |
| Data Augmentation | cropping, horizontal reflection [2], and scaling; see lib/data_augmentation.lua |
| Preprocessing | Global Contrast Normalization (GCN) and ZCA whitening; see lib/preprocessing.lua |
| Training Time | 20 hours on a GTX 760 |
| Prediction Time | 2.5 hours on a GTX 760 |
| Result | 0.93320 (single model), 0.94150 (average of 6 models) |
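The augmentation listed above (random 24x24 crops of the 32x32 CIFAR-10 images plus horizontal reflection) can be sketched in a few lines. This is an illustrative NumPy version, not the repository's Lua code in lib/data_augmentation.lua; the function name is made up for the example:

```python
import numpy as np

def augment(image, crop=24, rng=None):
    """Randomly crop a `crop` x `crop` patch and flip it horizontally with p=0.5.

    image: HxWxC array (CIFAR-10 images are 32x32x3).
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    top = rng.integers(0, h - crop + 1)   # random crop position
    left = rng.integers(0, w - crop + 1)
    patch = image[top:top + crop, left:left + crop]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]            # horizontal reflection
    return patch

# One augmented sample from a dummy 32x32x3 image
img = np.zeros((32, 32, 3))
out = augment(img, rng=np.random.default_rng(0))
print(out.shape)  # (24, 24, 3)
```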
Layer type | Parameters |
---|---|
input | size: 24x24, channel: 3 |
convolution | kernel: 3x3, channel: 64, padding: 1 |
relu | |
convolution | kernel: 3x3, channel: 64, padding: 1 |
relu | |
max pooling | kernel: 2x2, stride: 2 |
dropout | rate: 0.25 |
convolution | kernel: 3x3, channel: 128, padding: 1 |
relu | |
convolution | kernel: 3x3, channel: 128, padding: 1 |
relu | |
max pooling | kernel: 2x2, stride: 2 |
dropout | rate: 0.25 |
convolution | kernel: 3x3, channel: 256, padding: 1 |
relu | |
convolution | kernel: 3x3, channel: 256, padding: 1 |
relu | |
convolution | kernel: 3x3, channel: 256, padding: 1 |
relu | |
convolution | kernel: 3x3, channel: 256, padding: 1 |
relu | |
max pooling | kernel: 2x2, stride: 2 |
dropout | rate: 0.25 |
linear | channel: 1024 |
relu | |
dropout | rate: 0.5 |
linear | channel: 1024 |
relu | |
dropout | rate: 0.5 |
linear | channel: 10 |
softmax | |
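As a sanity check on the table: each 3x3 convolution with padding 1 preserves the spatial size, and each 2x2/stride-2 max pooling halves it, so a 24x24 input reaches the first linear layer as 256 channels of 3x3 maps, i.e. 256 * 3 * 3 = 2304 features. A few lines of Python trace this (purely illustrative, not code from the repository):

```python
# Trace the feature-map size through the conv/pool stack of the table above.
size, channels = 24, 3
stack = [
    ("conv3x3", 64), ("conv3x3", 64), ("pool2x2", None),
    ("conv3x3", 128), ("conv3x3", 128), ("pool2x2", None),
    ("conv3x3", 256), ("conv3x3", 256), ("conv3x3", 256), ("conv3x3", 256),
    ("pool2x2", None),
]
for op, ch in stack:
    if op == "conv3x3":
        channels = ch      # 3x3 kernel with padding 1 keeps the spatial size
    else:
        size //= 2         # 2x2 max pooling with stride 2 halves it
print(size, channels, channels * size * size)  # 3 256 2304
```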
- Ubuntu 14.04
- 15GB RAM (this codebase can run on g2.2xlarge!)
- CUDA (GTX 760 or higher GPU)
- Torch7 (latest)
(This document is outdated. See: Getting started with Torch)
Install CUDA (on Ubuntu 14.04):

```shell
apt-get install nvidia-331
apt-get install nvidia-cuda-toolkit
```
Install Torch7 (see Torch (easy) install):

```shell
curl -s https://raw.githubusercontent.com/torch/ezinstall/master/install-all | bash
```
Install (or upgrade) the dependency packages:

```shell
luarocks install torch
luarocks install nn
luarocks install cutorch
luarocks install cunn
luarocks install https://raw.githubusercontent.com/soumith/cuda-convnet2.torch/master/ccn2-scm-1.rockspec
```
```shell
th cuda_test.lua
```

If this command fails, check your Torch7/CUDA environment.
Place the data files into the ./data subdirectory:

```shell
$ ls ./data
test  train  trainLabels.csv
```
```shell
th convert_data.lua
```
```shell
th validate.lua
```

dataset:

| train | test |
|---|---|
| 1-40000 | 40001-50000 |
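The validation split above amounts to simple index slices over the 50,000 CIFAR-10 training images (illustrative Python; the actual split lives in the Lua code):

```python
# Images 1-40000 train the model; 40001-50000 measure local validation accuracy.
indices = list(range(1, 50001))
train_idx = indices[:40000]
val_idx = indices[40000:]
print(train_idx[0], train_idx[-1], val_idx[0], val_idx[-1])  # 1 40000 40001 50000
```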
```shell
th train.lua
th predict.lua
```
Train with a different seed parameter on each node (same model, same data, different initial weights, different training order):

```shell
th train.lua -seed 11
th train.lua -seed 12
...
th train.lua -seed 16
```
Mount the models directory from each node, e.g. ec2/node1, ec2/node2, ..., ec2/node6. Edit the model file paths in predict_averaging.lua, then run the prediction command:

```shell
th predict_averaging.lua
```
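Model averaging of this kind typically averages the per-class probabilities from each model and takes the argmax. A minimal NumPy sketch of the idea (illustrative, not the repository's Lua implementation; the function name is made up):

```python
import numpy as np

def average_predictions(prob_list):
    """Average per-class probabilities from several models.

    prob_list: list of (n_samples, n_classes) arrays, one per model.
    Returns the predicted class index for each sample.
    """
    avg = np.mean(prob_list, axis=0)
    return avg.argmax(axis=1)

# Two toy models that disagree on the first sample:
m1 = np.array([[0.6, 0.4], [0.2, 0.8]])
m2 = np.array([[0.3, 0.7], [0.1, 0.9]])
print(average_predictions([m1, m2]))  # [1 1]
```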
./nin_model.lua is an implementation of Network In Network [3]. This model gives a score of 0.92400. My NIN implementation is a 2-layer NIN and differs from mavenlin's implementation. I also tried to implement mavenlin's 3-layer NIN, but did not get good results. My implementation of the 3-layer NIN is here.
global_contrast_normalization in ./lib/preprocessing.lua is an incorrect implementation (the function is just a z-score), but I was using this implementation in the competition.
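The distinction matters: true GCN normalizes each image by its own mean and contrast, while a z-score normalizes each feature across the dataset. A NumPy sketch of both (illustrative Python with made-up function names, not the Lua code in lib/preprocessing.lua):

```python
import numpy as np

def global_contrast_normalize(X, eps=1e-8):
    """True GCN: subtract each image's mean, scale by the image's own norm.

    X: (n_samples, n_features) array, one flattened image per row.
    """
    X = X - X.mean(axis=1, keepdims=True)
    norms = np.sqrt((X ** 2).sum(axis=1, keepdims=True))
    return X / np.maximum(norms, eps)

def zscore(X, eps=1e-8):
    """Per-feature z-score across the dataset (what the note above says
    the repo's function effectively computes)."""
    return (X - X.mean(axis=0)) / np.maximum(X.std(axis=0), eps)

X = np.random.default_rng(0).normal(size=(8, 16))
gcn = global_contrast_normalize(X)
zs = zscore(X)
# After GCN every image (row) has zero mean; after the z-score every
# feature (column) has zero mean.
print(np.allclose(gcn.mean(axis=1), 0), np.allclose(zs.mean(axis=0), 0))
```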
Figure: data augmentation + preprocessing
[1] Karen Simonyan, Andrew Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition"
[2] Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks"
[3] Min Lin, Qiang Chen, Shuicheng Yan, "Network In Network"
[4] R. Collobert, K. Kavukcuoglu, C. Farabet, "Torch7: A Matlab-like Environment for Machine Learning"