Abstract
Transfer learning improves learning performance by transferring knowledge across domains. Since feature structures generally capture the knowledge common to different domains, they can be transferred successfully even when the labeling functions differ arbitrarily across domains. However, a theoretical justification for this success has remained elusive. In this paper, motivated by self-taught learning, we regard a set of bases as a feature structure of a domain if the bases can (approximately) reconstruct any observation in that domain.
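To make this concrete (the notation below is ours, for illustration only): a dictionary of bases $B = [b_1, \dots, b_k]$ is a feature structure of a domain if every observation $x$ from that domain admits a code $a$ with
\[
  \| x - B a \|_2^2 \le \epsilon
\]
for a small tolerance $\epsilon \ge 0$; in self-taught learning such a $B$ is typically obtained by sparse coding, i.e., by minimizing $\sum_i \| x_i - B a_i \|_2^2 + \beta \sum_i \| a_i \|_1$ over $B$ and the codes $a_i$.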
We propose a general analysis scheme to theoretically justify that if the source and target domains share similar feature structures, the feature structure of the source domain is transferable to the target domain, regardless of how the labeling functions change across domains. The transferred structure is interpreted as a regularization matrix that benefits the learning process of the target-domain task.
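Schematically (the precise construction is given in the body of the paper; the following shape is only an assumption for illustration), the target-domain learner then solves
\[
  \min_{w} \; \frac{1}{n} \sum_{i=1}^{n} \ell\bigl( \langle w, x_i \rangle, y_i \bigr) \;+\; \lambda\, w^{\top} M w ,
\]
where the positive semidefinite matrix $M$ is constructed from the transferred bases, so the transferred structure enters the objective only through the regularizer and does not depend on the target labeling function.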
We prove that such transfer enables the corresponding learning algorithms to be uniformly stable.
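Here uniform stability refers to the standard notion of Bousquet and Elisseeff (2002): an algorithm $A$ is $\beta$-uniformly stable if, for every training set $S$ and every index $i$,
\[
  \sup_{z} \bigl| \ell(A_S, z) - \ell(A_{S^{\setminus i}}, z) \bigr| \le \beta ,
\]
where $S^{\setminus i}$ denotes $S$ with its $i$-th example removed; uniform stability in turn yields generalization bounds.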
Specifically, we illustrate the existence of feature structure transfer in two well-known transfer learning settings: domain adaptation and learning to learn.