BERT-chinese

Pretraining BERT (BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding) on Chinese text.

Requirements

python3

tensorflow >= 1.10

jieba

Usage

1. Prepare the data; see the data folder and the vocab folder for the expected layout. In data, a blank line marks the boundary between documents (see the sketch after this list).

2. Convert the data into tfrecord files with create_pretraining_data.py.

3. Run pretraining with run_pretraining.py.
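
The input format in step 1 follows the convention of the original google-research/bert pretraining scripts: one sentence per line, with a blank line between documents. The snippet below is a minimal sketch (not part of this repo) of producing such a file; raw_corpus.txt and data/pretrain.txt are placeholder paths, and the space-separated jieba segmentation is an assumption based on jieba appearing in the requirements, so adjust it to whatever tokenization this repo's create_pretraining_data.py expects.

```python
# Minimal data-preparation sketch (assumptions: raw_corpus.txt holds one
# document per line; the target format is one sentence per line with a blank
# line between documents, as in the data folder).
import re
import jieba

def split_sentences(document):
    """Crude sentence split on common Chinese end-of-sentence punctuation."""
    return [s.strip() for s in re.split(r"(?<=[。！？])", document) if s.strip()]

with open("raw_corpus.txt", encoding="utf-8") as fin, \
        open("data/pretrain.txt", "w", encoding="utf-8") as fout:
    for line in fin:
        document = line.strip()
        if not document:
            continue
        for sentence in split_sentences(document):
            # Space-separated jieba words; an assumption, not a requirement of the repo.
            fout.write(" ".join(jieba.cut(sentence)) + "\n")
        fout.write("\n")  # blank line marks the document boundary
```

Steps 2 and 3 are run from the command line. Assuming the scripts keep the flag names of the original google-research/bert versions, create_pretraining_data.py is pointed at the text file and vocabulary via --input_file, --vocab_file, and --output_file, and run_pretraining.py then reads the resulting tfrecord through --input_file together with --bert_config_file and --output_dir.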

TODO

Experimental results

TODO

TODO LIST

Multi-GPU parallel training

License

MIT.

