MIXOUT: EFFECTIVE REGULARIZATION TO FINETUNE LARGE-SCALE PRETRAINED LANGUAGE MODELS

2019-12-31

Abstract

In natural language processing, it has been observed recently that generalization could be greatly improved by finetuning a large-scale language model pretrained on a large unlabeled corpus. Despite its recent success and wide adoption, finetuning a large pretrained language model on a downstream task is prone to degenerate performance when there are only a small number of training instances available. In this paper, we introduce a new regularization technique, to which we refer as “mixout”, motivated by dropout. Mixout stochastically mixes the parameters of two models. We show that our mixout technique regularizes learning to minimize the deviation from one of the two models and that the strength of regularization adapts along the optimization trajectory. We empirically evaluate the proposed mixout and its variants on finetuning a pretrained language model on downstream tasks. More specifically, we demonstrate that the stability of finetuning and the average accuracy greatly increase when we use the proposed approach to regularize finetuning of BERT on downstream tasks in GLUE.
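The abstract describes mixout as stochastically mixing the parameters of the model being fine-tuned with those of the pretrained model. Below is a minimal PyTorch-style sketch of that idea, not the authors' implementation: the `mixout` helper name, the mixing probability, and the layer dimensions are illustrative, and the rescaling step is an expectation-preserving correction analogous to inverted dropout.

```python
# Illustrative sketch of the mixout idea (hypothetical helper, not the paper's code):
# with probability p, each fine-tuned parameter element is swapped for its
# pretrained counterpart, then the result is rescaled so its expectation
# equals the fine-tuned parameter, mirroring inverted dropout.

import torch


def mixout(weight: torch.Tensor,
           pretrained: torch.Tensor,
           p: float = 0.5,
           training: bool = True) -> torch.Tensor:
    """Stochastically mix `weight` with `pretrained`, elementwise."""
    if not training or p == 0.0:
        return weight
    if not 0.0 <= p < 1.0:
        raise ValueError(f"mixout probability must be in [0, 1), got {p}")
    # mask == 1 marks elements that fall back to the pretrained value
    mask = torch.bernoulli(torch.full_like(weight, p))
    mixed = mask * pretrained + (1.0 - mask) * weight
    # subtract p * pretrained and rescale so E[output] == weight
    return (mixed - p * pretrained) / (1.0 - p)


# Usage sketch: apply mixout to one linear layer's weight during fine-tuning.
layer = torch.nn.Linear(768, 768)
pretrained_weight = layer.weight.detach().clone()  # snapshot of pretrained params
mixed_weight = mixout(layer.weight, pretrained_weight, p=0.9, training=True)
out = torch.nn.functional.linear(torch.randn(4, 768), mixed_weight, layer.bias)
```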
