Knowledge-distillation-using-BERT
Using Google's pretrained language model, BERT, I aim to train a smaller bidirectional LSTM model with knowledge distillation. The Kaggle Toxic Comment Classification dataset is used for this task; a sketch of the distillation setup follows below.
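The sketch below is a minimal, hypothetical illustration of this setup, not the repository's actual code: it assumes teacher logits from a fine-tuned BERT model have already been precomputed for each comment, and trains a small bidirectional LSTM student with a blended soft-target/hard-label loss. All names (`BiLSTMStudent`, `distillation_loss`, `alpha`, `temperature`) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMStudent(nn.Module):
    """Small bidirectional LSTM classifier used as the distillation student."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_labels=6):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)
        _, (hidden, _) = self.lstm(embedded)
        # Concatenate the final forward and backward hidden states.
        pooled = torch.cat([hidden[-2], hidden[-1]], dim=-1)
        return self.classifier(pooled)

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft-target loss (teacher) with hard-label loss (ground truth).

    Toxic Comment classification is multi-label, so sigmoid/BCE is used
    here rather than softmax/KL; this is an assumption of the sketch.
    """
    soft_loss = F.binary_cross_entropy_with_logits(
        student_logits / temperature,
        torch.sigmoid(teacher_logits / temperature))
    hard_loss = F.binary_cross_entropy_with_logits(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Example training step with dummy tensors standing in for real batches.
student = BiLSTMStudent(vocab_size=30000)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

token_ids = torch.randint(1, 30000, (8, 64))      # batch of tokenised comments
teacher_logits = torch.randn(8, 6)                # precomputed BERT outputs
labels = torch.randint(0, 2, (8, 6)).float()      # six toxicity labels

loss = distillation_loss(student(token_ids), teacher_logits, labels)
loss.backward()
optimizer.step()
```

In this kind of setup, the temperature softens the teacher's predictions so the student can learn from the relative confidence across labels, while `alpha` balances imitation of the teacher against fitting the true labels.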