Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction

2019-12-31

Abstract

With the recent success and popularity of pre-trained language models (LMs) in natural language processing, there has been a rise in efforts to understand their inner workings. In line with such interest, we propose a novel method that assists us in investigating the extent to which pre-trained LMs capture the syntactic notion of constituency. Our method provides an effective way of extracting constituency trees from the pre-trained LMs without training. In addition, we report intriguing findings in the induced trees, including the fact that pre-trained LMs outperform other approaches in correctly demarcating adverb phrases in sentences.
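The abstract describes extracting constituency trees from a pre-trained LM without any training. Below is a minimal, illustrative sketch of one way such training-free extraction can work: score each gap between adjacent words with a "syntactic distance" derived from the LM's hidden states, then split the sentence top-down at the largest distance. This is not the paper's exact procedure; the model name, layer choice, cosine-based distance, and the helper names `gap_distances` and `build_tree` are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's method): induce a binary constituency
# tree from per-gap "syntactic distances" computed from a pre-trained LM.
import torch
from transformers import AutoModel, AutoTokenizer


def gap_distances(words, model_name="bert-base-cased"):
    """Distance between adjacent words = 1 - cosine similarity of their
    word-level hidden states from one intermediate layer (an assumption)."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
    enc = tok(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).hidden_states[8][0]  # layer 8, arbitrary choice
    # Pool subword vectors back to word level.
    word_vecs = []
    for i in range(len(words)):
        idx = [j for j, w in enumerate(enc.word_ids()) if w == i]
        word_vecs.append(hidden[idx].mean(dim=0))
    return [1 - torch.cosine_similarity(word_vecs[i], word_vecs[i + 1], dim=0).item()
            for i in range(len(words) - 1)]


def build_tree(words, dists):
    """Top-down split at the largest gap distance, recursing on both halves."""
    if len(words) == 1:
        return words[0]
    k = max(range(len(dists)), key=lambda i: dists[i])
    return [build_tree(words[: k + 1], dists[:k]),
            build_tree(words[k + 1:], dists[k + 1:])]


if __name__ == "__main__":
    sentence = "the quick brown fox jumps over the lazy dog".split()
    print(build_tree(sentence, gap_distances(sentence)))
```

The output is a nested bracketing of the sentence that can be compared against gold treebank constituents; no parameters are updated, so the quality of the brackets reflects only what the pre-trained LM already encodes.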
