Abstract
While representation learning aims to derive interpretable features for describing visual data, representation disentanglement goes further by factorizing such features so that particular image attributes can be identified and manipulated. However, this task cannot be easily addressed without ground-truth annotations for the training data. To address this problem, we propose a novel deep learning model, the Cross-Domain Representation Disentangler (CDRD). By
observing fully annotated source-domain data and unlabeled target-domain data of interest, our model bridges information across data domains and transfers attribute information accordingly. Thus, cross-domain feature disentanglement and adaptation can be jointly performed. In the
experiments, we provide qualitative results to verify our disentanglement capability. Moreover, we confirm that our model can be applied to classification tasks in unsupervised domain adaptation, and that it performs favorably against state-of-the-art image disentanglement and translation methods.