
Generalized Tuning of Distributional Word Vectors for Monolingual and Cross-Lingual Lexical Entailment

2019-09-20
Abstract: Lexical entailment (LE; also known as the hyponymy-hypernymy or is-a relation) is a core asymmetric lexical relation that supports tasks like taxonomy induction and text generation. In this work, we propose a simple and effective method for fine-tuning distributional word vectors for LE. Our Generalized Lexical ENtailment model (GLEN) is decoupled from the word embedding model and applicable to any distributional vector space. Yet, unlike existing retrofitting models, it captures a general specialization function, allowing for LE-tuning of the entire distributional space and not only the vectors of words seen in lexical constraints. Coupled with a multilingual embedding space, GLEN seamlessly enables cross-lingual LE detection. We demonstrate the effectiveness of GLEN on graded LE and report large improvements (over 20% in accuracy) over the state of the art in cross-lingual LE detection.
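The abstract's central idea is that LE specialization is learned as a function of the input vector rather than as a per-word retrofit, so it can be applied to any vector, including words never seen in the training constraints and, via a shared multilingual space, to words in other languages. The following is a minimal sketch of that idea, not the authors' released code or GLEN's actual architecture: the network name, the asymmetric distance, and the margin objective are all illustrative assumptions.

```python
# Minimal sketch (assumed, not GLEN's actual model) of a generalized
# specialization function for lexical entailment, trained from
# (hyponym, hypernym) constraint pairs with random negatives.

import torch
import torch.nn as nn

class SpecializationNet(nn.Module):
    """Feed-forward map applied to every vector; because it is a function of
    the input vector rather than a per-word lookup, it also specializes words
    that never appear in the training constraints."""
    def __init__(self, dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the specialized space close to the original.
        return x + self.net(x)

def asymmetric_le_distance(u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Toy asymmetric score: cosine distance plus a norm penalty pushing
    hyponyms to have smaller norms than their hypernyms (one common way to
    encode the is-a direction; GLEN's actual scoring may differ)."""
    cos = nn.functional.cosine_similarity(u, v, dim=-1)
    norm_gap = u.norm(dim=-1) - v.norm(dim=-1)  # want hyponym norm < hypernym norm
    return (1.0 - cos) + torch.relu(norm_gap)

def train_step(model, opt, hypo, hyper, neg, margin: float = 1.0):
    """One margin-based update: pull (hyponym, hypernym) pairs together under
    the asymmetric distance, push random negatives apart."""
    opt.zero_grad()
    d_pos = asymmetric_le_distance(model(hypo), model(hyper))
    d_neg = asymmetric_le_distance(model(hypo), model(neg))
    loss = torch.relu(margin + d_pos - d_neg).mean()
    loss.backward()
    opt.step()
    return loss.item()

# Usage with random stand-in vectors; in practice hypo/hyper/neg would be
# looked up from pretrained (possibly multilingual) embeddings for
# WordNet-style LE constraints.
dim = 300
model = SpecializationNet(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
hypo, hyper, neg = (torch.randn(32, dim) for _ in range(3))
print(train_step(model, opt, hypo, hyper, neg))
```

Because the trained specialization is a single function over vectors, applying it to a multilingual embedding space yields cross-lingual LE scores without any language-specific retraining, which is the property the abstract highlights.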
