
Deep Multi-Task Learning with Adversarial-and-Cooperative Nets

2019-10-09
Abstract: In this paper, we propose a deep multi-task learning model based on Adversarial-and-Cooperative nets (TACO). The goal is to use an adversarial-and-cooperative strategy to decouple the task-common and task-specific knowledge, facilitating fine-grained knowledge sharing among tasks. TACO accommodates multiple game players, i.e., feature extractors, a domain discriminator, and tri-classifiers. They play min-max games adversarially and cooperatively to distill the task-common and task-specific features while respecting their discriminative structures. Moreover, it adopts a divide-and-combine strategy to leverage the decoupled multi-view information to further improve the generalization performance of the model. The experimental results show that our proposed method significantly outperforms the state-of-the-art algorithms on the benchmark datasets in both multi-task learning and semi-supervised domain adaptation scenarios.
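The abstract describes an architecture with a shared (task-common) extractor, per-task (task-specific) extractors, a domain discriminator, and classifier heads playing a min-max game. The sketch below wires up this structure on toy data to show how the losses combine; it is our reading of the abstract only, not the authors' implementation, and all layer sizes, the single-layer "extractors", and the adversarial weight `lam` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract(x, w, b):
    # single affine + ReLU layer standing in for a deep feature extractor
    return np.maximum(x @ w + b, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y):
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

# toy dimensions (hypothetical)
n, d, h, n_tasks, n_classes = 32, 10, 8, 2, 3

# shared (task-common) extractor and per-task (task-specific) extractors
W_sh, b_sh = rng.normal(size=(d, h)), np.zeros(h)
W_sp = [rng.normal(size=(d, h)) for _ in range(n_tasks)]
b_sp = [np.zeros(h) for _ in range(n_tasks)]

# discriminator: guesses which task a shared feature came from; in training,
# the shared extractor would be updated adversarially to fool it (min-max)
W_dc, b_dc = rng.normal(size=(h, n_tasks)), np.zeros(n_tasks)

# per-task classifier head on the combined (divide-and-combine) view
W_cl = [rng.normal(size=(2 * h, n_classes)) for _ in range(n_tasks)]

losses = []
for t in range(n_tasks):
    x = rng.normal(size=(n, d))                 # toy inputs for task t
    y = rng.integers(0, n_classes, size=n)      # toy labels for task t

    f_shared = extract(x, W_sh, b_sh)           # task-common features
    f_spec = extract(x, W_sp[t], b_sp[t])       # task-specific features

    # task prediction from the concatenated two-view representation
    p_task = softmax(np.concatenate([f_shared, f_spec], axis=1) @ W_cl[t])
    task_loss = cross_entropy(p_task, y)

    # discriminator predicts the task id from shared features only
    p_dom = softmax(f_shared @ W_dc + b_dc)
    dom_loss = cross_entropy(p_dom, np.full(n, t))

    # extractor objective: accurate task predictions while making the
    # shared features task-indistinguishable (hence the minus sign)
    lam = 0.1  # adversarial trade-off weight (hypothetical)
    losses.append(task_loss - lam * dom_loss)

total_loss = float(np.mean(losses))
```

In an actual training loop, the discriminator would ascend on `dom_loss` while the shared extractor descends on `task_loss - lam * dom_loss`, which is the min-max decoupling the abstract refers to.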

