Abstract
Homotopy methods, also known as continuation methods, are a powerful mathematical tool for efficiently solving various problems in numerical analysis. In this work, we propose a novel homotopy-based numerical method that gradually transfers the optimized weights of a neural network across different data distributions. This method generalizes the widely used heuristic of pre-training weights on one dataset and then fine-tuning them on another dataset of interest. We conduct a theoretical analysis showing that, under some assumptions, the homotopy method combined with Stochastic Gradient Descent (SGD) is guaranteed to converge to an $\epsilon$-optimal solution for a target task when started from an $\epsilon$-optimal solution on a source task. Empirical evaluations on a toy regression dataset and on transferring optimized weights from MNIST to Fashion-MNIST and CIFAR-10 show a speedup of up to two orders of magnitude over random initialization and substantial improvements over pre-training.
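To make the idea concrete, the sketch below illustrates one common form of such a homotopy (not necessarily the exact algorithm of the paper): a blended objective $(1-\lambda)\,L_{\text{source}} + \lambda\,L_{\text{target}}$ whose parameter $\lambda$ is swept from 0 to 1 while a few SGD steps are taken at each stage, so the weights move gradually from the source optimum toward the target task. All function and parameter names (`homotopy_sgd`, `n_stages`, `inner_steps`) are illustrative assumptions.

```python
# A minimal sketch of a loss-blending homotopy combined with SGD,
# on a toy regression problem; hyperparameters are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Two toy regression tasks: same input space, different ground truths.
X = rng.normal(size=(256, 5))
w_src, w_tgt = rng.normal(size=5), rng.normal(size=5)
y_src = X @ w_src + 0.01 * rng.normal(size=256)
y_tgt = X @ w_tgt + 0.01 * rng.normal(size=256)

def grad(w, X, y):
    """Gradient of the mean squared error 0.5 * ||Xw - y||^2 / n."""
    return X.T @ (X @ w - y) / len(y)

def homotopy_sgd(w, n_stages=20, inner_steps=50, lr=0.05, batch=32):
    # lambda = 0 gives the source loss, lambda = 1 the target loss;
    # the blended objective is (1 - lam) * L_source + lam * L_target.
    for lam in np.linspace(0.0, 1.0, n_stages):
        for _ in range(inner_steps):
            idx = rng.choice(len(X), size=batch, replace=False)
            g = ((1 - lam) * grad(w, X[idx], y_src[idx])
                 + lam * grad(w, X[idx], y_tgt[idx]))
            w = w - lr * g
    return w

# Start from an (approximately) optimal source solution, then continue
# along the homotopy path toward the target task.
w0 = np.linalg.lstsq(X, y_src, rcond=None)[0]
w_final = homotopy_sgd(w0)
print("target MSE:", np.mean((X @ w_final - y_tgt) ** 2))
```

Taking $\lambda$ directly from 0 to 1 in a single step recovers ordinary fine-tuning; the gradual sweep is what distinguishes the continuation approach.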