Borrowing Treasures from the Wealthy: Deep Transfer Learning through
Selective Joint Fine-Tuning
Abstract
Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data can be infeasible in many cases. In this paper, we introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task that has abundant training data. However, the source learning task does not use all of its existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those of the target learning task, and to jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use these descriptors to search for a desired subset of training samples for the source learning task.
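The sample-selection step can be illustrated with a minimal sketch (not the authors' released code): each image is summarized by a histogram descriptor of simple filter responses, and the source images nearest to any target image in this descriptor space are kept for joint fine-tuning. The derivative filters, histogram size, and neighbor count k below are illustrative assumptions rather than the paper's exact settings.

import numpy as np

def filter_bank_descriptor(image, n_bins=32):
    # Histogram of simple derivative-filter responses as a low-level image descriptor.
    gray = image.mean(axis=2) if image.ndim == 3 else image
    dx = np.diff(gray, axis=1).ravel()   # horizontal derivative responses
    dy = np.diff(gray, axis=0).ravel()   # vertical derivative responses
    hist = np.concatenate([
        np.histogram(dx, bins=n_bins, range=(-1.0, 1.0), density=True)[0],
        np.histogram(dy, bins=n_bins, range=(-1.0, 1.0), density=True)[0],
    ])
    return hist / (np.linalg.norm(hist) + 1e-12)

def select_source_subset(target_images, source_images, k=5):
    # Return indices of source images that are among the k nearest neighbors
    # (in descriptor space) of at least one target image.
    t_desc = np.stack([filter_bank_descriptor(im) for im in target_images])
    s_desc = np.stack([filter_bank_descriptor(im) for im in source_images])
    dists = np.linalg.norm(t_desc[:, None, :] - s_desc[None, :, :], axis=2)
    nearest = np.argsort(dists, axis=1)[:, :k]
    return np.unique(nearest)

# Toy example with random arrays standing in for real training images.
rng = np.random.default_rng(0)
target = [rng.random((64, 64, 3)) for _ in range(10)]
source = [rng.random((64, 64, 3)) for _ in range(100)]
subset_indices = select_source_subset(target, source, k=5)

The retrieved source images are then used together with the target images to jointly fine-tune a network whose convolutional layers are shared between the source and target classification tasks, as described above.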
Experiments demonstrate that our deep transfer learning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120). In comparison to fine-tuning without a source domain, the proposed method improves classification accuracy by 2%-10% using a single model. Code and models are available at https://github.com/ZYYSzj/Selective-Joint-Fine-tuning