
CIFAR10-Kaggle



Code for Kaggle competition: CIFAR-10

The dataset has 10 classes, and all images are 32x32.

Preprocessing: I extracted the RGB values from each image and turned them into a 32x32x3 numpy array. Then I linked each image's RGB array with its label (which is stored in a CSV file), as sketched below.
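A minimal sketch of this preprocessing step, assuming the usual Kaggle layout of a `train/` folder of PNG files plus a `trainLabels.csv` file; the paths and variable names are illustrative, not taken from the original code:

```python
import numpy as np
import pandas as pd
from PIL import Image

# Assumed layout: train/<id>.png images and trainLabels.csv with columns id, label
labels_df = pd.read_csv("trainLabels.csv")
class_names = sorted(labels_df["label"].unique())
class_to_idx = {name: i for i, name in enumerate(class_names)}

images, labels = [], []
for _, row in labels_df.iterrows():
    img = Image.open(f"train/{row['id']}.png")                 # 32x32 RGB image
    images.append(np.asarray(img, dtype=np.float32) / 255.0)   # -> (32, 32, 3) in [0, 1]
    labels.append(class_to_idx[row["label"]])

X = np.stack(images)   # shape (N, 32, 32, 3)
y = np.array(labels)   # shape (N,)
```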

Defining the model: For this project, I am using TensorFlow's Conv2D layers. Padding is used to slow down dimensionality reduction as more layers are added. Max pooling, batch normalization, and dropout layers are also included in the model. The model ends with two fully connected layers and a final layer with a softmax activation function.
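A minimal Keras sketch of this kind of architecture; the filter counts, dropout rates, and dense-layer sizes are assumptions for illustration, not the author's exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    # "same" padding keeps spatial size at 32x32 so depth, not padding loss, drives reduction
    layers.Conv2D(32, (3, 3), padding="same", activation="relu", input_shape=(32, 32, 3)),
    layers.BatchNormalization(),
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),            # 32x32 -> 16x16
    layers.Dropout(0.25),

    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),            # 16x16 -> 8x8
    layers.Dropout(0.25),

    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # two fully connected layers
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax")  # final softmax over the 10 classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```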

Result: The model reaches over 95% accuracy on the training data without the dropout layer and around 91% with one dropout layer. However, accuracy on the test data is much lower: 74%. Is this overfitting, or could it be caused by an undesirable train/test split (the training data may contain too few samples of some classes, so the model never learns to recognize them)? The next step is to find out the cause of this gap.
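One way to separate the two explanations is to compare class counts in the training split with per-class accuracy on the held-out data; a rough sketch, assuming `y_train`, `X_test`, and `y_test` come from a split not shown here:

```python
import numpy as np
from collections import Counter

# How many training samples per class? Large gaps would support the imbalance hypothesis.
print(Counter(y_train.tolist()))

# Per-class accuracy on held-out data: a few very weak classes point to imbalance,
# uniformly lower accuracy across all classes points to plain overfitting.
pred = np.argmax(model.predict(X_test), axis=1)
for c in range(10):
    mask = (y_test == c)
    if mask.any():
        print(c, (pred[mask] == c).mean())
```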


