UNDERSTANDING AND IMPROVING INFORMATION TRANSFER IN MULTI-TASK LEARNING

2020-01-02

Abstract

We investigate multi-task learning approaches which use a shared feature representation for all tasks. To better understand the transfer of task information, we study an architecture with a shared module for all tasks and a separate output module for each task. We study the theory of this setting on linear and ReLU-activated models. Our key observation is that whether or not tasks’ data are well-aligned can significantly affect the performance of multi-task learning. We show that misalignment between task data can cause negative transfer (or hurt performance) and provide sufficient conditions for positive transfer. Inspired by the theoretical insights, we show that aligning tasks’ embedding layers leads to performance gains for multi-task training and transfer learning on the GLUE benchmark and sentiment analysis tasks; for example, we obtain a 2.35% GLUE score average improvement on 5 GLUE tasks over BERT-LARGE using our alignment method. We also design an SVD-based task reweighting scheme and show that it improves the robustness of multi-task training on a multi-label image dataset.
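The abstract centers on a hard-parameter-sharing architecture: one shared feature module for all tasks and a separate output module per task. Below is a minimal PyTorch sketch of that setup, not the authors' code; the class name, dimensions, and the single ReLU-activated shared layer are illustrative assumptions chosen to mirror the ReLU-activated setting the paper analyzes.

```python
import torch
import torch.nn as nn

class SharedMultiTaskModel(nn.Module):
    """Sketch of the studied architecture: a shared module plus per-task heads."""

    def __init__(self, input_dim, hidden_dim, task_output_dims):
        super().__init__()
        # Shared module: one ReLU-activated layer, matching the
        # linear/ReLU-activated models analyzed in the paper.
        self.shared = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
        )
        # A separate output module (head) for each task.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, d) for d in task_output_dims]
        )

    def forward(self, x, task_id):
        # All tasks pass through the same shared features;
        # only the output head differs per task.
        return self.heads[task_id](self.shared(x))

# Hypothetical joint training step over mixed task batches:
# for x, y, task_id in mixed_task_batches:
#     loss = criterion(model(x, task_id), y)
```

Under this sharing scheme, whether the tasks' data are well-aligned in the shared feature space determines whether the joint features help or hurt each head, which is the positive/negative transfer question the paper studies.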
