Joint and Coupled Bilingual Topic Model Based Sentence Representations for Language Model Adaptation

2019-11-08
Abstract: This paper is concerned with data selection for language model (LM) adaptation in statistical machine translation (SMT), and aims to find the LM training sentences that are topically similar to the translation task. Although traditional approaches achieve strong performance, they ignore topic information and the distribution of words when selecting similar training sentences. In this paper, we present two bilingual topic model (BLTM) based sentence representations (joint and coupled BLTM) for cross-lingual data selection. We map the data selection task into cross-lingual semantic representations that are language independent, then rank and select sentences in the target-language LM training corpus for each sentence in the translation task by a semantics-based likelihood. The semantic representations are learned from the parallel corpus, under the assumption that the two sides of a bilingual pair share the same or similar distribution over semantic topics. Large-scale experimental results demonstrate that our approaches significantly outperform state-of-the-art approaches in both LM perplexity and translation performance.
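The ranking-and-selection step described in the abstract can be sketched as follows. This is a minimal illustration only: it assumes sentences have already been mapped to topic-proportion vectors by a trained bilingual topic model, and it uses cosine similarity as a stand-in scoring function for the paper's semantics-based likelihood; the vectors and function names below are hypothetical.

```python
import numpy as np

def rank_by_topic_similarity(task_vec, corpus_vecs, top_k=2):
    """Rank target-language LM training sentences by topic similarity to a
    translation-task sentence. Both inputs are topic-proportion vectors
    assumed to come from a trained bilingual topic model (hypothetical)."""
    task = np.asarray(task_vec, dtype=float)
    corpus = np.asarray(corpus_vecs, dtype=float)
    # Cosine similarity between the task sentence and each corpus sentence;
    # a small epsilon guards against division by zero.
    sims = corpus @ task / (
        np.linalg.norm(corpus, axis=1) * np.linalg.norm(task) + 1e-12
    )
    order = np.argsort(-sims)  # highest similarity first
    return order[:top_k], sims[order[:top_k]]

# Toy topic-proportion vectors over 3 topics; values are illustrative only.
task_sentence = [0.7, 0.2, 0.1]
lm_corpus = [
    [0.6, 0.3, 0.1],  # topically close to the task
    [0.1, 0.1, 0.8],  # topically distant
    [0.8, 0.1, 0.1],  # topically close
]
idx, scores = rank_by_topic_similarity(task_sentence, lm_corpus, top_k=2)
print(idx.tolist())  # indices of the most task-similar training sentences
```

Because the representations are learned jointly over the parallel corpus, the task sentence and the candidate sentences can live in different languages yet still be compared in the same topic space, which is what makes the data selection cross-lingual.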

