
LASER Language-Agnostic SEntence Representations

LASER is a library to calculate multilingual sentence embeddings.

Currently, we include an encoder which supports nine European languages:

  • Germanic languages: English, German, Dutch, Danish

  • Romance languages: French, Spanish, Italian, Portuguese

  • Uralic languages: Finnish

All these languages are encoded by the same BLSTM encoder, and there is no need to specify the input language (but tokenization is language specific). In our experience, the sentence encoder also supports code-switching, i.e. the same sentence can contain words in several different languages.
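
As a rough illustration of how such embeddings can be used, the sketch below loads embeddings produced by the encoder and compares sentences across languages with cosine similarity. It is not part of LASER itself; the file names, the raw float32 storage format and the 1024-dimensional output are assumptions to be checked against the repository documentation.

```python
# Minimal sketch (not part of the LASER code base): load raw sentence
# embeddings written by the encoder and compare sentences across languages.
# Assumptions: embeddings are stored as a flat float32 binary file and the
# encoder output dimension is 1024 -- check the repository docs for your setup.
import numpy as np

DIM = 1024  # assumed embedding dimension of the BLSTM encoder

def load_embeddings(path, dim=DIM):
    """Read a raw float32 embedding file into an (N, dim) matrix."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, dim)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative file names: one embedding file per language, same sentences.
en = load_embeddings("sentences.en.embed")
fr = load_embeddings("sentences.fr.embed")
print(cosine(en[0], fr[0]))  # mutual translations should score high
```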

We also have some evidence that the encoder generalizes to some extent to other languages of the Germanic and Romance language families (e.g. Swedish, Norwegian, Afrikaans, Catalan or Corsican), although no data from these languages was used during training.

A detailed description of how the multilingual sentence embeddings are trained can be found in [1,3].

Dependencies

  • Python 3 with NumPy

  • PyTorch 0.4

  • Faiss, for mining bitexts (see the sketch after this list)

  • tokenization scripts from Moses and byte-pair encoding (BPE)
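
Faiss appears in this list because bitext mining boils down to fast nearest-neighbour search over sentence embeddings. The following sketch shows that idea with standard Faiss calls; it uses plain cosine similarity over an exact index and is not LASER's actual mining criterion.

```python
# Minimal sketch of nearest-neighbour search over sentence embeddings with
# Faiss, to illustrate the idea behind bitext mining. This is plain cosine
# similarity over an exact index, not LASER's actual mining score.
import faiss
import numpy as np

def nearest_targets(src_emb, tgt_emb, k=4):
    """For each source vector, return scores and ids of the k most similar
    target vectors (inner product over L2-normalised embeddings)."""
    src = np.ascontiguousarray(src_emb, dtype=np.float32)
    tgt = np.ascontiguousarray(tgt_emb, dtype=np.float32)
    faiss.normalize_L2(src)
    faiss.normalize_L2(tgt)
    index = faiss.IndexFlatIP(tgt.shape[1])  # exact inner-product search
    index.add(tgt)
    scores, ids = index.search(src, k)       # both of shape (n_src, k)
    return scores, ids
```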

Installation

  • set the environment variable 'LASER' to the root of the installation, e.g. export LASER="${HOME}/projects/laser"

  • download the encoders from Amazon S3: ./install_models.sh

  • download third-party software: ./install_external_tools.sh

  • download the data used in the example tasks (see the description for each task)

Applications

We showcase several applications of multilingual sentence embeddings with code to reproduce our results (in the directory "tasks").

For all tasks, we use exactly the same multilingual encoder, without any task specific optimization.
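
One way to picture this: embed the task data with the frozen encoder, train a small classifier on one language, and evaluate it unchanged on another. The sketch below is hypothetical (the actual task code lives in the "tasks" directory) and assumes illustrative file names, a 1024-dimensional embedding and scikit-learn for the classifier.

```python
# Hypothetical sketch: train a lightweight classifier on frozen LASER
# embeddings in one language and evaluate it on another (zero-shot transfer).
# File names and the use of scikit-learn are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

DIM = 1024  # assumed embedding dimension

def load_embeddings(path, dim=DIM):
    return np.fromfile(path, dtype=np.float32).reshape(-1, dim)

# Train on English embeddings and labels (illustrative file names).
X_train = load_embeddings("train.en.embed")
y_train = np.loadtxt("train.en.labels", dtype=int)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate the very same classifier on German embeddings.
X_test = load_embeddings("test.de.embed")
y_test = np.loadtxt("test.de.labels", dtype=int)
print("zero-shot accuracy:", clf.score(X_test, y_test))
```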

License

This source code is licensed under the license found in the LICENSE file in the root directory of this source tree.

References

[1] Holger Schwenk and Matthijs Douze, Learning Joint Multilingual Sentence Representations with Neural Machine Translation, ACL workshop on Representation Learning for NLP, 2017.

@inproceedings{Schwenk:2017:repl4nlp,
  title={Learning Joint Multilingual Sentence Representations with Neural Machine Translation},
  author={Holger Schwenk and Matthijs Douze},
  booktitle={ACL workshop on Representation Learning for NLP},
  year={2017}
}

[2] Holger Schwenk and Xian Li, A Corpus for Multilingual Document Classification in Eight Languages, LREC, pages 3548-3551, 2018.

@InProceedings{Schwenk:2018:lrec_mldoc,
  author = {Holger Schwenk and Xian Li},
  title = {A Corpus for Multilingual Document Classification in Eight Languages},
  booktitle = {LREC},
  pages = {3548--3551},
  year = {2018}
}

[3] Holger Schwenk, Filtering and Mining Parallel Data in a Joint Multilingual Space, ACL, July 2018.
