MemN2N

End-To-End Memory Networks in MemN2N

MXNet implementation of End-To-End Memory Networks for language modelling. The original TensorFlow code by carpedm20 can be found here.

Known issue: SGD does not converge; Adam converges but is not able to reach a good result (details).
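
The model follows Sukhbaatar et al.'s end-to-end memory network: each hop embeds the context into an input memory and an output memory, attends over the memory slots with the current query state, and passes the result on to the next hop. Below is a minimal sketch of one such hop in MXNet Gluon; it is not the repo's train.py code, and the layer names (A, C, H), shapes, and sizes are illustrative assumptions.

import mxnet as mx
from mxnet import nd
from mxnet.gluon import nn

class MemoryHop(nn.Block):
    def __init__(self, vocab_size, edim, **kwargs):
        super(MemoryHop, self).__init__(**kwargs)
        self.A = nn.Embedding(vocab_size, edim)  # input memory embedding
        self.C = nn.Embedding(vocab_size, edim)  # output memory embedding
        self.H = nn.Dense(edim, use_bias=False)  # linear map between hops

    def forward(self, context, u):
        # context: (batch, mem_size) word ids; u: (batch, edim) query state
        m = self.A(context)                      # (batch, mem_size, edim)
        c = self.C(context)                      # (batch, mem_size, edim)
        # attention over memory slots: p_i = softmax(u . m_i)
        p = nd.softmax(nd.batch_dot(m, u.expand_dims(axis=2)).squeeze(axis=2))
        # output: attention-weighted sum of the output embeddings
        o = nd.batch_dot(p.expand_dims(axis=1), c).squeeze(axis=1)
        return self.H(u) + o                     # state passed to the next hop

hop = MemoryHop(vocab_size=10000, edim=150)
hop.initialize()
state = hop(nd.ones((32, 100)), nd.ones((32, 150)))  # nhop of these are stacked
print(state.shape)                                   # (32, 150)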

Setup

This code requires MXNet. It can also use CUDA to run on a GPU for faster training. A sample Penn Treebank (PTB) corpus, a popular benchmark for measuring the quality of such models, is included in the data directory. You can also use your own text dataset, which should be formatted like this.
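
The formatting example linked above is not reproduced here, but a PTB-style corpus is plain text with whitespace-separated word tokens, one sentence per line. The sketch below shows a typical word-level reader for such files; the data/ptb.train.txt name and the <eos> handling are assumptions, not the repo's own loader.

def read_words(path):
    # whitespace-tokenized text, one sentence per line, <eos> appended per line
    words = []
    with open(path) as f:
        for line in f:
            words.extend(line.split() + ["<eos>"])
    return words

train_words = read_words("data/ptb.train.txt")
word2idx = {w: i for i, w in enumerate(sorted(set(train_words)))}
train_ids = [word2idx[w] for w in train_words]
print(len(word2idx), len(train_ids))   # vocabulary size, corpus length in tokens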

Usage

To train a model with 6 hops and a memory size of 100, run the following command:

$ python train.py --nhop 6 --mem_size 100

To see all training options, run:

$ python train.py --help

To test a model, run test.py, for example:

$ python test.py --network checkpoint/memnn-symbol.json --params checkpoint/memnn-0100.params --gpus 0
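
The --network/--params pair is a standard MXNet symbol/params checkpoint, so it can also be loaded and inspected directly with mx.model.load_checkpoint. The prefix and epoch below simply mirror the file names shown above; this is an inspection sketch, not part of test.py.

import mxnet as mx

# loads checkpoint/memnn-symbol.json and checkpoint/memnn-0100.params
sym, arg_params, aux_params = mx.model.load_checkpoint("checkpoint/memnn", 100)
print(sym.list_arguments())                          # input and parameter names
print({k: v.shape for k, v in arg_params.items()})   # learned parameter shapes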

