Penalizing Top Performers: Conservative Loss
for Semantic Segmentation Adaptation
Abstract. Due to the expensive and time-consuming annotations (e.g.,
segmentation) for real-world images, recent works in computer vision resort to synthetic data. However, performance on real images often drops significantly because of the domain shift between the synthetic
data and the real images. In this setting, domain adaptation offers an
appealing option. Effective domain adaptation approaches shape
representations that (1) are discriminative for the main task and (2)
generalize well under domain shift. To this end, we
propose a novel loss function, the Conservative Loss, which penalizes
extremely good and bad cases while encouraging moderate examples. More specifically, it enables the network to learn features that are
discriminative via gradient descent and invariant to changes of domain via gradient ascent. Extensive experiments on synthetic-to-real
segmentation adaptation show that our proposed method achieves state-of-the-art results. Ablation studies give further insight into the properties of the
Conservative Loss. Exploratory experiments and discussion demonstrate
that the Conservative Loss is flexible rather than restricted to an
exact form.
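To illustrate the idea (this is a minimal sketch, not the paper's exact formulation), one can build a loss that is large for both very low and very high prediction confidence and minimized at a moderate value. The weight `lam` below is a hypothetical parameter chosen for illustration:

```python
import math

def conservative_loss(p, lam=0.25):
    """Illustrative conservative loss on the predicted probability p
    of the true class (0 < p < 1).

    -log(p) penalizes bad cases (p near 0), while -lam*log(1 - p)
    penalizes extremely confident cases (p near 1). The sum is
    minimized at the moderate value p = 1 / (1 + lam), so gradient
    descent pushes moderate examples toward higher confidence while
    effectively performing gradient ascent on over-confident ones.
    """
    return -math.log(p) - lam * math.log(1.0 - p)

# With lam = 0.25 the minimum sits at p = 1 / (1 + lam) = 0.8, so the
# loss at p = 0.8 is lower than at both p = 0.5 and p = 0.99.
```

Under this sketch, any loss whose gradient changes sign at a moderate confidence level realizes the same penalize-the-extremes behavior, which is consistent with the flexibility claim above.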