translagent

Emergent Translation in Multi-Agent Communication

PyTorch implementation of the models described in the paper Emergent Translation in Multi-Agent Communication (Lee et al., ICLR 2018).

We present code for training and decoding both word- and sentence-level models and baselines, as well as preprocessed datasets.

Dependencies

Python

  • Python 2.7

  • PyTorch 0.2

  • Numpy

GPU

  • CUDA (we recommend the latest version; version 8.0 was used in all our experiments)

Related code

  • Moses

  • Subword-NMT

Downloading Datasets

The original corpora can be downloaded from their respective sources (Bergsma500, Multi30k, MS COCO). For the preprocessed corpora, see the table below.

| Dataset | Download |
| ------------- | ------------- |
| Bergsma500 | Data |
| Multi30k | Data |
| MS COCO | Data |

Before you run the code

  1. Download the datasets and place them in /data/word (Bergsma500) and /data/sentence (Multi30k and MS COCO)

  2. Set the correct paths in scr_path() in /src/word/util.py, and in scr_path(), multi30k_reorg_path(), and coco_path() in /src/sentence/util.py (a sketch follows this list)
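
As a rough illustration of step 2, the path helpers in /src/sentence/util.py might be edited as below. The concrete return values are assumptions based on the /data layout suggested in step 1, not the repository's actual defaults:

```python
# /src/sentence/util.py (illustrative sketch; the paths below are
# assumptions following the /data layout from step 1)

def scr_path():
    # Root of the preprocessed sentence-level data (Multi30k and MS COCO).
    return "/data/sentence"

def multi30k_reorg_path():
    # Directory holding the reorganised Multi30k data (assumed location).
    return "/data/sentence/multi30k_reorg"

def coco_path():
    # Directory holding the preprocessed MS COCO data (assumed location).
    return "/data/sentence/coco"
```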

Word-level Models

Running nearest neighbour baselines

$ python word/bergsma_bli.py

Running our models

$ python word/train_word_joint.py --l1 <L1> --l2 <L2>

where <L1> and <L2> are any of {en, de, es, fr, it, nl}
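
For example, to train a word-level model on the English-German pair:

$ python word/train_word_joint.py --l1 en --l2 de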

Sentence-level Models

Baseline 1 : Nearest neighbour

$ python sentence/baseline_nn.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG>

Baseline 2 : NMT with neighbouring sentence pairs

$ python sentence/nmt.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG> --nn_baseline

Baseline 3 : Nakayama and Nishida, 2017

$ python sentence/train_naka_encdec.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG> --train_enc_how <ENC_HOW> --train_dec_how <DEC_HOW>

where <ENC_HOW> is either two or three, and <DEC_HOW> is img, des, or both.
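
For instance, one possible invocation on Multi30k; the language pair and the two how options here are illustrative, not prescribed settings:

$ python sentence/train_naka_encdec.py --dataset multi30k --task 1 --src en --trg de --train_enc_how two --train_dec_how both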

Our models :

$ python sentence/train_seq_joint.py --dataset <DATASET> --task <TASK>

Aligned NMT :

$ python sentence/nmt.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG>

where <DATASET> is multi30k or coco, and <TASK> is either 1 or 2 (applicable only to Multi30k).
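
For example, to train our joint model and the aligned NMT baseline on Multi30k task 1 (the en-to-de direction for NMT is illustrative):

$ python sentence/train_seq_joint.py --dataset multi30k --task 1

$ python sentence/nmt.py --dataset multi30k --task 1 --src en --trg de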

Dataset & Related Code Attribution

  • Moses is licensed under the LGPL, and Subword-NMT is licensed under the MIT License.

  • MS COCO and Multi30k are licensed under Creative Commons.

Citation

If you find the resources in this repository useful, please consider citing:

@inproceedings{Lee:18,
  author    = {Jason Lee and Kyunghyun Cho and Jason Weston and Douwe Kiela},
  title     = {Emergent Translation in Multi-Agent Communication},
  year      = {2018},
  booktitle = {Proceedings of the International Conference on Learning Representations},
}

