bert-embedding
BERT, published by Google, is a new way to obtain pre-trained language model word representations. Many NLP tasks benefit from BERT to achieve state-of-the-art (SOTA) results.
The goal of this project is to obtain token embeddings from BERT's pre-trained model. This way, instead of building and fine-tuning an end-to-end NLP model, you can build your model by simply utilizing the token embeddings.
This project is implemented with @MXNet. Special thanks to the @gluon-nlp team.
```bash
pip install bert-embedding
# If you want to run on a GPU machine, please install mxnet-cu92
pip install mxnet-cu92
```
```python
from bert_embedding import BertEmbedding

bert_abstract = """We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%."""
sentences = bert_abstract.split('\n')
bert_embedding = BertEmbedding()
result = bert_embedding(sentences)
```
If you want to use the GPU, please import mxnet and set the context:
```python
import mxnet as mx
from bert_embedding import BertEmbedding

...

ctx = mx.gpu(0)
bert = BertEmbedding(ctx=ctx)
```
The result is a list of tuples, each containing (tokens, token embeddings).
For example:
```python
first_sentence = result[0]

first_sentence[0]
# ['we', 'introduce', 'a', 'new', 'language', 'representation', 'model', 'called', 'bert', ',', 'which', 'stands', 'for', 'bidirectional', 'encoder', 'representations', 'from', 'transformers']
len(first_sentence[0])
# 18

len(first_sentence[1])
# 18
first_token_in_first_sentence = first_sentence[1]
first_token_in_first_sentence[1]
# array([ 0.4805648 ,  0.18369392, -0.28554988, ..., -0.01961522,
#         1.0207764 , -0.67167974], dtype=float32)
first_token_in_first_sentence[1].shape
# (768,)
```
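Because each sentence yields one 768-dimensional vector per token, you can build downstream features directly on top of the output. The snippet below is a minimal illustration (not part of bert-embedding itself, and the `mean_pool` helper is just a name chosen here) that mean-pools the token vectors of each sentence into a single fixed-size sentence vector with numpy:

```python
import numpy as np

# Illustrative helper (not part of the library): average the per-token
# vectors of each sentence into one fixed-size sentence vector.
def mean_pool(result):
    features = []
    for tokens, token_embeddings in result:
        # token_embeddings holds one 768-dim numpy array per token
        features.append(np.mean(token_embeddings, axis=0))
    return np.stack(features)

sentence_vectors = mean_pool(result)
# sentence_vectors.shape -> (number_of_sentences, 768)
```

Such pooled vectors can then be fed into any classifier or regressor of your choice instead of fine-tuning BERT end-to-end.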
There are three ways to handle OOV (out-of-vocabulary) tokens: avg (default), sum, and last. This can be specified when encoding:
```python
...
bert_embedding = BertEmbedding()
bert_embedding(sentences, 'sum')
...
```
The available pre-trained models and datasets are:

| model \ dataset_name | book_corpus_wiki_en_uncased | book_corpus_wiki_en_cased | wiki_multilingual | wiki_multilingual_cased | wiki_cn |
|---|---|---|---|---|---|
| bert_12_768_12 | ✓ | ✓ | ✓ | ✓ | ✓ |
| bert_24_1024_16 | x | ✓ | x | x | x |
Example of using the large pre-trained BERT model from Google:
```python
from bert_embedding import BertEmbedding

bert_embedding = BertEmbedding(model='bert_24_1024_16', dataset_name='book_corpus_wiki_en_cased')
```
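With the large model the hidden size is 1024 rather than 768, so (as a quick sanity check, assuming the same result structure as above) each token vector should have shape (1024,):

```python
# Hypothetical usage check, following the result structure shown earlier
result = bert_embedding(['BERT is conceptually simple and empirically powerful.'])
tokens, token_embeddings = result[0]
token_embeddings[0].shape
# (1024,)
```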
Source: gluonnlp