Abstract. Although deep learning approaches have stood out in recent years due
to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes
added incrementally. This is due to current neural network architectures requiring the entire dataset, consisting of all the samples from the old as well as the
new classes, to update the model, a requirement that quickly becomes unsustainable as the number of classes grows. We address this issue with our approach
to train deep neural networks incrementally, using new data and only a small
exemplar set of samples from the old classes. Our approach is based on a
loss composed of a distillation measure to retain the knowledge acquired from
the old classes, and a cross-entropy loss to learn the new classes. Our incremental
training is achieved while keeping the entire framework end-to-end, i.e., learning
the data representation and the classifier jointly, unlike recent methods with no
such guarantees. We evaluate our method extensively on the CIFAR-100 and ImageNet (ILSVRC 2012) image classification datasets, and show state-of-the-art
performance.
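To make the loss composition concrete, here is a minimal PyTorch-style sketch of a combined distillation plus cross-entropy objective for class-incremental training. It is an illustration rather than the paper's exact formulation: the function name, the KL-divergence form of the distillation term, and the `T` and `alpha` parameters are assumptions chosen for clarity.

```python
import torch.nn.functional as F

def incremental_loss(logits, targets, prev_logits, num_old, T=2.0, alpha=1.0):
    """Illustrative combined loss for class-incremental training.

    logits:      current model outputs over old + new classes, shape (B, C)
    targets:     ground-truth labels for the batch, shape (B,)
    prev_logits: old-class outputs of the frozen previous model on the
                 same batch, shape (B, num_old)
    T, alpha:    distillation temperature and weight (hypothetical defaults)
    """
    # Cross-entropy over all classes learns the new classes (and refreshes
    # the old ones via the stored exemplars in the batch).
    ce = F.cross_entropy(logits, targets)

    # Distillation: the temperature-softened old-class predictions of the
    # current model should stay close to those of the previous model.
    log_p = F.log_softmax(logits[:, :num_old] / T, dim=1)
    q = F.softmax(prev_logits / T, dim=1)
    distill = F.kl_div(log_p, q, reduction="batchmean")

    return ce + alpha * distill
```

In such a setup, `prev_logits` would come from a frozen copy of the model saved before the current incremental step, so the distillation term penalizes drift away from the previously acquired knowledge while the cross-entropy term fits the new classes.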