Abstract. Unsupervised domain adaptation has attracted considerable attention, as it facilitates learning on an unlabeled target domain by borrowing well-established knowledge from a source domain. Recent domain adaptation methods extract effective features by incorporating pseudo labels for the target domain to better mitigate cross-domain distribution divergence. However, existing approaches treat target label optimization and domain-invariant feature learning as separate steps. To address this issue, we develop a novel Graph Adaptive Knowledge Transfer (GAKT) model that jointly optimizes target labels and domain-free features in a unified framework. Specifically, semi-supervised knowledge adaptation and label propagation on target data are coupled so that they benefit each other, and hence the marginal and conditional disparities across domains are better alleviated. Experimental evaluation on two cross-domain visual datasets demonstrates the effectiveness of our approach in facilitating unlabeled target task learning, compared with state-of-the-art domain adaptation approaches.