
MemN2N

2019-09-19

End-To-End Memory Networks in MXNet

An MXNet implementation of End-To-End Memory Networks for language modelling. The original TensorFlow code by carpedm20 can be found here.


Known issue: SGD does not converge; Adam converges but does not reach a good result (details).
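For reference, one memory hop of the model (Sukhbaatar et al., 2015) attends over the stored memories with the current query state, reads out a weighted sum, and updates the state. A minimal NumPy sketch with made-up dimensions (all names here are illustrative, not taken from this repo):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_hop(u, m, c, H):
    """One MemN2N hop: attend over input memories m with query u,
    read from output memories c, and update the controller state."""
    p = softmax(m @ u)        # attention weights over memory slots
    o = c.T @ p               # weighted read-out from output memories
    return H @ u + o          # linear map of u plus read-out (LM variant)

rng = np.random.default_rng(0)
mem_size, edim = 100, 150     # memory slots and embedding size
u = rng.standard_normal(edim)
m = rng.standard_normal((mem_size, edim))   # input memory representations
c = rng.standard_normal((mem_size, edim))   # output memory representations
H = np.eye(edim)              # illustrative stand-in for the learned map

u_next = memory_hop(u, m, c, H)
```

Stacking `--nhop` of these hops, each feeding its output state into the next, gives the full network.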

Setup

This code requires MXNet, and uses CUDA to run on a GPU for faster training. A sample of the Penn Tree Bank (PTB) corpus, a popular benchmark for measuring the quality of these models, is included in the data directory. You can also use your own text data set, which should be formatted like this.
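The PTB files are plain whitespace-tokenized text (rare words already replaced by `<unk>`), so a custom data set only needs to follow that shape. A minimal stdlib-only vocabulary builder in that spirit (an illustrative sketch, not the repo's actual reader):

```python
from collections import Counter

def build_vocab(lines):
    """Map each whitespace-separated token to an integer id,
    most frequent tokens first (ties broken alphabetically)."""
    counts = Counter(tok for line in lines for tok in line.split())
    tokens = sorted(counts, key=lambda t: (-counts[t], t))
    return {tok: i for i, tok in enumerate(tokens)}

sample = ["the cat sat <eos>", "the dog sat <eos>"]
vocab = build_vocab(sample)
ids = [vocab[t] for t in sample[0].split()]  # ids == [2, 3, 1, 0]
```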

Usage

To train a model with 6 hops and memory size of 100, run the following command:

$ python train.py --nhop 6 --mem_size 100

To see all training options, run:

$ python train.py --help

To test a model, run test.py, for example:

$ python test.py --network checkpoint/memnn-symbol.json --params checkpoint/memnn-0100.params --gpus 0

