Abstract
Visual Domain Adaptation is a problem of immense importance in computer vision. Previous approaches show that even deep neural networks struggle to learn representations that remain informative under domain shift. This problem is especially severe for tasks where acquiring hand-labeled data is hard and tedious. In this work, we focus on adapting the representations learned by segmentation networks across synthetic and real domains. Contrary to previous approaches that use a simple adversarial objective or superpixel information to aid the process, we propose an approach based on Generative Adversarial Networks (GANs) that brings the embeddings closer in the learned feature space. To showcase the generality and scalability of our approach, we show that we achieve state-of-the-art results on two challenging scenarios of synthetic-to-real domain adaptation. Additional exploratory experiments show that our approach: (1) generalizes to unseen domains and (2) results in improved alignment of source and target distributions.