BAM! Born-Again Multi-Task Networks for Natural Language Understanding

2019-09-20

Abstract: It can be challenging to train multi-task neural networks that outperform or even match their single-task counterparts. To help address this, we propose using knowledge distillation, where single-task models teach a multi-task model. We enhance this training with teacher annealing, a novel method that gradually transitions the model from distillation to supervised learning, helping the multi-task model surpass its single-task teachers. We evaluate our approach by multi-task fine-tuning BERT on the GLUE benchmark. Our method consistently improves over standard single-task and multi-task training.
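The teacher annealing described in the abstract can be read as a training-target schedule: the student's target is a mixture of the gold label and the single-task teacher's prediction, with the mixing weight moving from pure distillation at the start of training to pure supervised learning at the end. The PyTorch sketch below illustrates that idea under simple assumptions; the function name, argument names, and the linear annealing schedule are illustrative, not taken from the authors' released code.

```python
import torch
import torch.nn.functional as F

def teacher_annealed_loss(student_logits, teacher_logits, gold_labels, step, total_steps):
    """Distillation loss with teacher annealing (minimal sketch).

    The target is a mixture of the gold label and the single-task teacher's
    prediction; the weight `lam` moves linearly from 0 (pure distillation)
    to 1 (pure supervised learning) over the course of training.
    """
    lam = step / float(total_steps)  # anneal from teacher predictions toward gold labels
    student_log_probs = F.log_softmax(student_logits, dim=-1)
    teacher_probs = F.softmax(teacher_logits, dim=-1).detach()  # teacher is frozen
    gold_one_hot = F.one_hot(gold_labels, num_classes=student_logits.size(-1)).float()
    # Mixed soft target: lam * gold label + (1 - lam) * teacher prediction
    target = lam * gold_one_hot + (1.0 - lam) * teacher_probs
    # Cross-entropy of the student against the mixed target
    return -(target * student_log_probs).sum(dim=-1).mean()
```

In a multi-task setup, one such loss would be computed per GLUE task, each using that task's own single-task teacher, and the per-task losses summed for the shared multi-task model.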

