
BERTScore


Automatic Evaluation Metric described in the paper BERTScore: Evaluating Text Generation with BERT (ICLR 2020).

News:

  • Updated to version 0.3.1

    • A new BERTScorer object that caches the model to avoid re-loading it multiple times. Please see our jupyter notebook example for the usage.

    • Supporting multiple reference sentences for each example. The score function can now take a list of lists of strings as the references and returns the score between the candidate sentence and its closest reference sentence.

  • Updated to version 0.3.0

    • Supporting Baseline Rescaling: we apply a simple linear transformation to enhance the readability of BERTScore using pre-computed "baselines". It has been pointed out (e.g., in #20 and #23) that the numerical range of BERTScore is exceedingly small when computed with RoBERTa models. In other words, although BERTScore correctly distinguishes examples through ranking, the numerical scores of good and bad examples are very similar. We detail our approach in a separate post.

  • Updated to version 0.2.3

    • Supporting DistilBERT (Sanh et al.), ALBERT (Lan et al.), and XLM-R (Conneau et al.) models.

    • Including the version of huggingface's transformers in the hash code for reproducibility

  • BERTScore has been accepted at ICLR 2020. Please come to our poster in Addis Ababa, Ethiopia!

  • Updated to version 0.2.2

    • Bug fix: when using RoBERTaTokenizer, we now set add_prefix_space=True, which was the default setting in huggingface's pytorch_transformers (when we ran the experiments in the paper) before they migrated it to transformers. This breaking change in transformers leads to a lower correlation with human evaluation. To reproduce our RoBERTa results in the paper, please use version 0.2.2.

    • The best number of layers for DistilRoBERTa is included

    • Supporting loading a custom model

  • Updated to version 0.2.1

    • SciBERT (Beltagy et al.) models are now included. Thanks to AI2 for sharing the models. By default, we use the 9th layer (the same as BERT-base), but this is not tuned.

  • Our arXiv paper has been updated to v2 with more experiments and analysis.

  • Updated to version 0.2.0

    • Supporting BERT, XLM, XLNet, and RoBERTa models using huggingface's Transformers library

    • Automatically picking the best model for a given language

    • Automatically picking the best layer for a given model

    • IDF weighting is no longer enabled by default, as we show in the new version that the improvement brought by importance weighting is not consistent

Authors:

Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi

*: Equal Contribution

Overview

BERTScore leverages the pre-trained contextual embeddings from BERT and matches words in candidate and reference sentences by cosine similarity. It has been shown to correlate with human judgment on sentence-level and system-level evaluation. Moreover, BERTScore computes precision, recall, and F1 measure, which can be useful for evaluating different language generation tasks.

For illustration, given a reference sentence x = ⟨x_1, ..., x_m⟩ and a candidate sentence x̂ = ⟨x̂_1, ..., x̂_k⟩, both represented as sequences of (pre-normalized) contextual token embeddings, BERTScore precision averages each candidate token's maximum similarity with a reference token:

P_BERT = (1 / |x̂|) · Σ_{x̂_j ∈ x̂} max_{x_i ∈ x} x_iᵀ x̂_j

Recall is defined symmetrically over the reference tokens, and F1 is their harmonic mean.

If you find this repo useful, please cite:

@inproceedings{bert-score,
  title={BERTScore: Evaluating Text Generation with BERT},
  author={Tianyi Zhang* and Varsha Kishore* and Felix Wu* and Kilian Q. Weinberger and Yoav Artzi},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://openreview.net/forum?id=SkeHuCVFDr}
}

Installation

  • Python version >= 3.6

  • PyTorch version >= 1.0.0

Install from PyPI with pip:

pip install bert-score

Install the latest unstable version from the master branch on GitHub:

pip install git+https://github.com/Tiiiger/bert_score

Or install from source:

git clone https://github.com/Tiiiger/bert_score
cd bert_score
pip install .

and you may test your installation by:

python -m unittest discover

Usage

Command Line Interface (CLI)

We provide a command-line interface (CLI) for BERTScore as well as a python module. The CLI can be used as follows:

  1. To evaluate English text files:

We provide example inputs under ./example.

bert-score -r example/refs.txt -c example/hyps.txt --lang en

You will get the following output at the end:

roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0) P: 0.957378 R: 0.961325 F1: 0.959333

where "roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0)" is the hash code.

Starting from version 0.3.0, we support rescaling the scores with pre-computed baseline scores:

bert-score -r example/refs.txt -c example/hyps.txt --lang en --rescale-with-baseline

You will get:

roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0)-rescaled P: 0.747044 R: 0.770484 F1: 0.759045

This makes the range of the scores larger and more human-readable. Please see this post for details.

  2. To evaluate text files in other languages:

We currently support the 104 languages in multilingual BERT (full list).

Please specify the two-letter abbreviation of the language. For instance, use --lang zh for Chinese text.

See more options by bert-score -h.

  3. To load your own custom model: please specify the path to the model and the number of layers to use by --model and --num_layers.

bert-score -r example/refs.txt -c example/hyps.txt --model path_to_my_bert --num_layers 9

  4. To visualize matching scores:

bert-score-show --lang en -r "There are two bananas on the table." -c "On the table are two apples." -f out.png

The figure will be saved to out.png.

Python Function

For the python module, we provide a demo. Please refer to bert_score/score.py for more details.

Running BERTScore can be computationally intensive (because it uses BERT :p). Therefore, a GPU is usually necessary. If you don't have access to a GPU, you can try our demo on Google Colab.

Practical Tips

  • Report the hash code (e.g., roberta-large_L17_no-idf_version=0.2.1) in your paper so that people know what setting you use. This is inspired by sacreBLEU.

  • Unlike BERT, RoBERTa uses a GPT-2-style tokenizer that creates additional " " tokens when multiple spaces appear together. We recommend removing the extra spaces with sent = re.sub(r' +', ' ', sent) or sent = re.sub(r'\s+', ' ', sent).

  • Using inverse document frequency (idf) on the reference sentences to weigh word importance may correlate better with human judgment. However, when the set of reference sentences becomes too small, the idf scores become inaccurate/invalid. We therefore make idf weighting optional. To use idf, please set --idf when using the CLI tool or idf=True when calling the bert_score.score function.

  • When you are low on GPU memory, consider setting batch_size when calling bert_score.score function.

  • To use a particular model please set -m MODEL_TYPE when using the CLI tool or model_type=MODEL_TYPE when calling bert_score.score function.

  • We tune the layer to use based on the WMT16 metric evaluation dataset. You may use a different layer by setting -l LAYER or num_layers=LAYER.

  • Limitation: because BERT, RoBERTa, and XLM with learned positional embeddings are pre-trained on sentences with a max length of 512 tokens, BERTScore is undefined for sentences longer than 510 subword tokens (512 after adding the [CLS] and [SEP] tokens). Longer sentences will be truncated. Please consider using XLNet, which supports much longer inputs.
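The whitespace cleanup recommended above can be sketched as a small preprocessing helper (normalize_spaces is a hypothetical name, not part of bert_score):

```python
import re

def normalize_spaces(sent: str) -> str:
    """Collapse runs of whitespace into a single space before scoring."""
    return re.sub(r"\s+", " ", sent).strip()

print(normalize_spaces("two  spaces\tand a tab"))  # -> "two spaces and a tab"
```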

Default Behavior

Default Model

| Language | Model                         |
| -------- | ----------------------------- |
| en       | roberta-large                 |
| en-sci   | scibert-scivocab-uncased      |
| zh       | bert-base-chinese             |
| others   | bert-base-multilingual-cased  |

Default Layers

| Model                                   | Best Layer | Max Length    |
| --------------------------------------- | ---------- | ------------- |
| bert-base-uncased                       | 9          | 512           |
| bert-large-uncased                      | 18         | 512           |
| bert-base-cased-finetuned-mrpc          | 9          | 512           |
| bert-base-multilingual-cased            | 9          | 512           |
| bert-base-chinese                       | 8          | 512           |
| roberta-base                            | 10         | 512           |
| roberta-large                           | 17         | 512           |
| roberta-large-mnli                      | 19         | 512           |
| roberta-base-openai-detector            | 7          | 512           |
| roberta-large-openai-detector           | 19         | 512           |
| xlnet-base-cased                        | 5          | 1000000000000 |
| xlnet-large-cased                       | 7          | 1000000000000 |
| xlm-mlm-en-2048                         | 7          | 512           |
| xlm-mlm-100-1280                        | 11         | 512           |
| scibert-scivocab-uncased                | 9*         | 512           |
| scibert-scivocab-cased                  | 9*         | 512           |
| scibert-basevocab-uncased               | 9*         | 512           |
| scibert-basevocab-cased                 | 9*         | 512           |
| distilroberta-base                      | 5          | 512           |
| distilbert-base                         | 5          | 512           |
| distilbert-base-uncased                 | 5          | 512           |
| distilbert-base-uncased-distilled-squad | 4          | 512           |
| distilbert-base-multilingual-cased      | 5          | 512           |
| albert-base-v1                          | 10         | 512           |
| albert-large-v1                         | 17         | 512           |
| albert-xlarge-v1                        | 16         | 512           |
| albert-xxlarge-v1                       | 8          | 512           |
| albert-base-v2                          | 9          | 512           |
| albert-large-v2                         | 14         | 512           |
| albert-xlarge-v2                        | 13         | 512           |
| albert-xxlarge-v2                       | 8          | 512           |
| xlm-roberta-base                        | 9          | 512           |
| xlm-roberta-large                       | 17         | 512           |

*: Not tuned

Acknowledgement

This repo wouldn't be possible without the awesome bert, fairseq, and transformers.

