QANet_keras

QANet in Keras

QANet: https://arxiv.org/abs/1804.09541

This Keras model is based on the TensorFlow implementation of QANet (https://github.com/NLPLearn/QANet).

I find that the conv-based multi-head attention from tensor2tensor (https://github.com/NLPLearn/QANet/blob/master/layers.py) performs 3%~4% better than the matrix-multiplication-based one in https://github.com/bojone/attention/blob/master/attention_keras.py.
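
The idea is easy to see in a small sketch. Below is a minimal tf.keras illustration (the class name and details are mine, not the repo's exact layer): the query/key/value projections are kernel-size-1 Conv1D layers instead of Dense matmuls, followed by the usual scaled dot-product attention over split heads.

```python
import tensorflow as tf
from tensorflow.keras import layers

class ConvMultiHeadAttention(layers.Layer):
    """Multi-head self-attention whose Q/K/V projections are 1x1 convolutions."""

    def __init__(self, num_heads=8, filters=128, **kwargs):
        super().__init__(**kwargs)
        assert filters % num_heads == 0
        self.num_heads = num_heads
        self.filters = filters
        self.depth = filters // num_heads
        # Kernel-size-1 convolutions play the role of the usual dense projections.
        self.q_conv = layers.Conv1D(filters, 1, use_bias=False)
        self.k_conv = layers.Conv1D(filters, 1, use_bias=False)
        self.v_conv = layers.Conv1D(filters, 1, use_bias=False)
        self.out_conv = layers.Conv1D(filters, 1, use_bias=False)

    def _split_heads(self, x):
        # (batch, seq, filters) -> (batch, heads, seq, depth)
        b, s = tf.shape(x)[0], tf.shape(x)[1]
        x = tf.reshape(x, (b, s, self.num_heads, self.depth))
        return tf.transpose(x, (0, 2, 1, 3))

    def call(self, x, mask=None):
        q = self._split_heads(self.q_conv(x))
        k = self._split_heads(self.k_conv(x))
        v = self._split_heads(self.v_conv(x))
        # Scaled dot-product attention.
        logits = tf.matmul(q, k, transpose_b=True) / (self.depth ** 0.5)
        if mask is not None:
            # mask: (batch, seq) with 1 for real tokens, 0 for padding.
            logits += (1.0 - tf.cast(mask[:, None, None, :], logits.dtype)) * -1e9
        weights = tf.nn.softmax(logits, axis=-1)
        out = tf.matmul(weights, v)                     # (batch, heads, seq, depth)
        out = tf.transpose(out, (0, 2, 1, 3))           # (batch, seq, heads, depth)
        b, s = tf.shape(out)[0], tf.shape(out)[1]
        out = tf.reshape(out, (b, s, self.filters))
        return self.out_conv(out)
```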

Pipeline

  1. Download the SQuAD data dev-v1.1.json and train-v1.1.json from https://rajpurkar.github.io/SQuAD-explorer/ into the folder ./original_data.

  2. Download glove.840B.300d.txt from https://nlp.stanford.edu/projects/glove/ into the folder ./original_data.

  3. Run python preprocess.py to generate the WordPiece-based preprocessed data (a rough sketch of the GloVe-loading part appears after this list).

  4. Run python train_QANet.py to start training.
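
For reference, here is a rough, hypothetical sketch of one piece of step 3: reading glove.840B.300d.txt and filling an embedding matrix for a word-to-index vocabulary. The function name and details are illustrative only; the repo's preprocess.py additionally handles SQuAD parsing, WordPiece tokenization, and handcrafted features.

```python
import numpy as np

def build_embedding_matrix(glove_path, vocab, dim=300):
    """Return a (len(vocab), dim) float32 matrix aligned with the vocabulary.

    Words not found in the GloVe file keep a small random initialization.
    """
    matrix = np.random.normal(scale=0.1, size=(len(vocab), dim)).astype("float32")
    with open(glove_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, values = parts[0], parts[1:]
            if word in vocab and len(values) == dim:
                matrix[vocab[word]] = np.asarray(values, dtype="float32")
    return matrix

# Toy usage:
# vocab = {"<pad>": 0, "<unk>": 1, "what": 2, "is": 3}
# emb = build_embedding_matrix("./original_data/glove.840B.300d.txt", vocab)
```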

Updates

  •  Add EMA (with about 3% improvement)

  •  Add multi-GPU support (speed-up)

  •  Support adding handcrafted features

  •  Revise the MultiHeadAttention and PositionEmbedding layers in Keras

  •  Support parallel multi-gpu training and inference

  •  Add layer dropout and fix a dropout bug (with about 2% improvement)

  •  Update the experimental results and related hyper-parameters (in train_QANet.py)

  •  Revise the output layer QAoutputBlock.py (with about 1% improvement)

  •  Replace BatchNormalization with LayerNormalization in layer_norm.py (about 0.5% improvement); see the sketch after this list

  •  Add a slice operation to QANet (about 2x speed-up)

  •  Add CoVe (about 1.4% improvement)

  •  Implement EMA on the GPU in Keras (about 30% speed-up)

  •  Add WordPiece tokenization in Keras (from BERT) (0.5% improvement)

  •  Add data augmentation
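
As a reference for the LayerNormalization item above, here is a minimal sketch of a layer-norm layer (written against tf.keras for brevity; the repo's layer_norm.py may differ in detail): activations are normalized over the feature axis with a learned gain and bias, instead of the batch statistics used by BatchNormalization.

```python
import tensorflow as tf
from tensorflow.keras import layers

class LayerNormalization(layers.Layer):
    """Normalize over the last (feature) axis with a learned gain and bias."""

    def __init__(self, eps=1e-6, **kwargs):
        super().__init__(**kwargs)
        self.eps = eps

    def build(self, input_shape):
        dim = int(input_shape[-1])
        self.gamma = self.add_weight(name="gamma", shape=(dim,), initializer="ones")
        self.beta = self.add_weight(name="beta", shape=(dim,), initializer="zeros")

    def call(self, x):
        mean = tf.reduce_mean(x, axis=-1, keepdims=True)
        var = tf.reduce_mean(tf.square(x - mean), axis=-1, keepdims=True)
        return self.gamma * (x - mean) / tf.sqrt(var + self.eps) + self.beta
```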

I find that EMA in Keras is hard to implement efficiently on the GPU, and it slows training down considerably. The slice op is also hard to add in Keras, so training is slower still (it takes about twice as long as the optimized TensorFlow version).

Now the GPU version of EMA works properly in Keras.
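
As a rough illustration of the idea (not the repo's exact code, and written in tf.keras style for brevity): the shadow weights are ordinary tf.Variables updated with in-graph assign ops, so the averaging stays on the GPU instead of round-tripping through numpy every step.

```python
import tensorflow as tf

class WeightEMA:
    """Keep an exponential moving average of a model's weights on-device."""

    def __init__(self, model, decay=0.9999):
        self.model = model
        self.decay = decay
        # Shadow copies are tf.Variables, so they live on the same device
        # as the model's own weights.
        self.shadow = [tf.Variable(w, trainable=False) for w in model.weights]

    @tf.function
    def update(self):
        # shadow <- decay * shadow + (1 - decay) * weight, entirely in-graph.
        for s, w in zip(self.shadow, self.model.weights):
            s.assign(self.decay * s + (1.0 - self.decay) * w)

    def copy_to(self, eval_model):
        # Load the averaged weights into a separate model used for evaluation.
        for s, w in zip(self.shadow, eval_model.weights):
            w.assign(s)
```

Typical usage would be to call update() after each optimizer step and copy_to() on an evaluation copy of the model before running validation.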

Results

All models use 8 attention heads and 128 filters.

| setting | epoch | EM / F1 |
| --- | --- | --- |
| batch_size=24 | 11 | 66.24% / 76.75% |
| batch_size=24 + ema_decay=0.9999 | 14 | 69.51% / 79.13% |
| batch_size=24 + ema_decay=0.9999 + wordpiece | 17 | 70.07% / 79.52% |
| batch_size=24 + ema_decay=0.9999 + wordpiece + CoVe | 13 | 71.48% / 80.85% |

