Abstract
In this paper, we propose a deep multi-Task learning model based on Adversarial-and-COoperative
nets (TACO). The goal is to use an adversarial-and-cooperative strategy to decouple the task-common
and task-specific knowledge, facilitating fine-grained knowledge sharing among tasks. TACO
accommodates multiple game players, i.e., feature
extractors, a domain discriminator, and tri-classifiers.
They play min-max games adversarially and cooperatively to distill the task-common and task-specific features while respecting their discriminative structures. Moreover, TACO adopts a divide-and-combine strategy to leverage the decoupled multi-view information and further improve the generalization performance of the model. The experimental
results show that our proposed method significantly
outperforms the state-of-the-art algorithms on the
benchmark datasets in both multi-task learning and
semi-supervised domain adaptation scenarios.
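The adversarial-and-cooperative min-max game described above can be sketched in generic notation (all symbols here are illustrative assumptions, not the paper's exact formulation):

```latex
\min_{F_0,\,\{F_k\},\,\{C_k\}} \;\max_{D}\;
\sum_{k=1}^{K}
\mathcal{L}_{\mathrm{task}}\big(C_k(F_0(x),\,F_k(x)),\,y\big)
\;-\;\lambda\,\mathcal{L}_{\mathrm{adv}}\big(D(F_0(x)),\,k\big)
```

Here $F_0$ denotes an assumed task-common feature extractor, $F_k$ a task-specific extractor, $C_k$ a per-task classifier, and $D$ a domain discriminator that tries to predict the task identity $k$ from the shared features. Maximizing over $D$ while minimizing over $F_0$ pushes the shared features to be task-invariant (the adversarial game), while the extractors and classifiers jointly minimize the task losses (the cooperative game), decoupling task-common from task-specific knowledge in the spirit the abstract describes.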