LAMOL: LANGUAGE MODELING FOR LIFELONG LANGUAGE LEARNING

2019-12-31

Abstract

Most research on lifelong learning applies to images or games, but not language. We present LAMOL, a simple yet effective method for lifelong language learning (LLL) based on language modeling. LAMOL replays pseudo-samples of previous tasks while requiring no extra memory or model capacity. Specifically, LAMOL is a language model that simultaneously learns to solve the tasks and generate training samples. When the model is trained for a new task, it generates pseudo-samples of previous tasks for training alongside data for the new task. The results show that LAMOL prevents catastrophic forgetting without any sign of intransigence and can perform five very different language tasks sequentially with only one model. Overall, LAMOL outperforms previous methods by a considerable margin and is only 2–3% worse than multitasking, which is usually considered the LLL upper bound. The source code is available at https://github.com/xxx.
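The core idea in the abstract — generating pseudo-samples of previous tasks and mixing them into each new task's training data — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: `ToyLM` is a hypothetical stand-in for the GPT-2-style language model used in the paper, and it simply memorizes samples rather than learning weights.

```python
import random

class ToyLM:
    """Hypothetical stand-in for a language model that both solves tasks
    and generates training samples (the dual role LAMOL assigns to one LM)."""

    def __init__(self):
        self.memory = []  # stands in for knowledge stored in model weights

    def train(self, samples):
        # "Train" on a batch of (task_name, example) pairs.
        self.memory.extend(samples)

    def generate_pseudo_samples(self, n):
        # Mimic LM generation by sampling from previously learned examples.
        if not self.memory:
            return []
        return random.sample(self.memory, min(n, len(self.memory)))


def train_sequentially(model, tasks, replay_ratio=0.2):
    """Train on tasks one at a time; before each new task, generate
    pseudo-samples of earlier tasks and mix them into the batch."""
    for task_name, data in tasks:
        n_replay = int(replay_ratio * len(data))
        pseudo = model.generate_pseudo_samples(n_replay)  # replay of old tasks
        model.train(pseudo + [(task_name, x) for x in data])


# Three very different tasks trained sequentially with a single model.
tasks = [
    ("qa", ["q1", "q2"]),
    ("summarize", ["s1", "s2"]),
    ("translate", ["t1", "t2"]),
]
lm = ToyLM()
train_sequentially(lm, tasks, replay_ratio=0.5)
# Examples from all three tasks remain represented after sequential training.
print(sorted({t for t, _ in lm.memory}))
```

The key point the sketch illustrates is that no stored replay buffer of real old data is needed: the same model regenerates old-task samples on demand, which is why LAMOL requires no extra memory or model capacity.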
