
flair-pos-tagging


Contextualized String Embeddings for PoS Tagging: A Multilingual Evaluation

Contextualized string embeddings as proposed by Akbik et al. (2018) are a recent type of contextualized word embedding based on character-level language models. They were shown to yield state-of-the-art results on many named entity recognition tasks. However, their multilingual performance on part-of-speech (PoS) tagging tasks has received little attention so far.

In this repository, we conduct an extensive evaluation for sequence tagging on the Universal Dependencies project. We show that contextualized string embeddings outperform state-of-the-art neural network approaches such as BiLSTMs and deep bidirectional encoder representations from transformers (BERT) for PoS tagging, resulting in a new state of the art.

Changelog

  • 14.09.2019: Initial version released. Training and evaluation scripts added.

Datasets

We train and evaluate PoS tagging models on 21 languages from the Universal Dependencies project. The following table shows an overview of training, development and test dataset sizes for each language:

| Language | # Train | # Dev | # Test |
| -------- | ------: | ----: | -----: |
| Bulgarian (bg) | 8,907 | 1,115 | 1,116 |
| Czech (cs) | 68,495 | 9,270 | 10,148 |
| Danish (da) | 4,868 | 322 | 322 |
| German (de) | 14,118 | 799 | 977 |
| English (en) | 12,543 | 2,002 | 2,077 |
| Spanish (es) | 14,187 | 1,552 | 274 |
| Basque (eu) | 5,396 | 1,798 | 1,799 |
| Persian (fa) | 4,798 | 599 | 600 |
| Finnish (fi) | 12,217 | 716 | 648 |
| French (fr) | 14,552 | 1,596 | 298 |
| Hebrew (he) | 5,241 | 484 | 491 |
| Hindi (hi) | 13,304 | 1,659 | 1,684 |
| Croatian (hr) | 3,557 | 200 | 200 |
| Indonesian (id) | 4,477 | 559 | 557 |
| Italian (it) | 11,699 | 489 | 489 |
| Dutch (nl) | 13,000 | 349 | 386 |
| Norwegian (no) | 15,696 | 2,410 | 1,939 |
| Polish (pl) | 6,800 | 700 | 727 |
| Portuguese (pt) | 8,800 | 271 | 288 |
| Slovenian (sl) | 6,471 | 735 | 790 |
| Swedish (sv) | 4,303 | 504 | 1,219 |

The next table gives an overview of the languages and language families used in our experiments; the grouping is reproduced from Plank et al. (2016):

| Language | Coarse | Fine |
| -------- | ------ | ---- |
| Bulgarian (bg) | Indoeuropean | Slavic |
| Czech (cs) | Indoeuropean | Slavic |
| Danish (da) | Indoeuropean | Germanic |
| German (de) | Indoeuropean | Germanic |
| English (en) | Indoeuropean | Germanic |
| Spanish (es) | Indoeuropean | Romance |
| Basque (eu) | Language isolate | - |
| Persian (fa) | Indoeuropean | Indo-Iranian |
| Finnish (fi) | non-IE | Uralic |
| French (fr) | Indoeuropean | Romance |
| Hebrew (he) | non-IE | Semitic |
| Hindi (hi) | Indoeuropean | Indo-Iranian |
| Croatian (hr) | Indoeuropean | Slavic |
| Indonesian (id) | non-IE | Austronesian |
| Italian (it) | Indoeuropean | Romance |
| Dutch (nl) | Indoeuropean | Germanic |
| Norwegian (no) | Indoeuropean | Germanic |
| Polish (pl) | Indoeuropean | Slavic |
| Portuguese (pt) | Indoeuropean | Romance |
| Slovenian (sl) | Indoeuropean | Slavic |
| Swedish (sv) | Indoeuropean | Germanic |

Model

We use the latest version of Flair in our experiments. The next figure shows a high-level overview of the architecture used for PoS tagging:

[Figure: high-level overview of the PoS tagging architecture]

For training our PoS tagging models we use a BiLSTM with a hidden size of 256, a mini-batch size of 8 and an initial learning rate of 0.1. The learning rate is reduced by a factor of 0.5 with a patience of 3, i.e. it is halved after 3 epochs without improvement. We train for a maximum of 500 epochs and use stochastic gradient descent as optimizer. For each language we train and evaluate 3 runs and report an averaged accuracy score.
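The following is a minimal sketch of this training setup using the Flair API (around version 0.4.x, which was current at the time). The dataset folder, output path and embedding identifiers are illustrative; the actual experiments are driven by run_experiment.py and the JSON configuration files.

```python
from flair.datasets import UniversalDependenciesCorpus
from flair.embeddings import FlairEmbeddings, StackedEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# load a UD v1.2 treebank (folder with train/dev/test .conllu files)
corpus = UniversalDependenciesCorpus('universal-dependencies-1.2/UD_Bulgarian')

# we predict universal PoS tags
tag_type = 'upos'
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)

# stack forward and backward contextualized string embeddings
# (embedding identifiers are illustrative and may differ per Flair version)
embeddings = StackedEmbeddings([
    FlairEmbeddings('bg-forward'),
    FlairEmbeddings('bg-backward'),
])

# BiLSTM tagger with a hidden size of 256; the CRF layer is Flair's default
# and an assumption here, it is not specified in this README
tagger = SequenceTagger(hidden_size=256,
                        embeddings=embeddings,
                        tag_dictionary=tag_dictionary,
                        tag_type=tag_type,
                        use_crf=True)

# SGD (Flair's default optimizer) with an initial learning rate of 0.1,
# halved (anneal_factor=0.5) after 3 epochs without improvement (patience=3),
# mini-batch size 8, at most 500 epochs
trainer = ModelTrainer(tagger, corpus)
trainer.train('resources/taggers/ud_bulgarian_pos',
              learning_rate=0.1,
              mini_batch_size=8,
              anneal_factor=0.5,
              patience=3,
              max_epochs=500)
```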

Pre-training

We trained contextualized string embeddings for 16 of the 21 languages in the Universal Dependencies project. For the remaining 5 languages (English, French, German, Portuguese and Spanish), the embeddings trained by Akbik et al. (2018) were used.

For each language, a recent Wikipedia dump and various texts from the OPUS corpora collection are used for training. A detailed overview can be found in the flair-lms repository.

Additionally, we trained one language model covering over 300 languages on the recently released JW300 corpus, which consists of 2,025,826,977 tokens. This model was trained for 5 epochs (both for the forward and the backward language model).
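For reference, character-level language model pre-training in Flair roughly follows the sketch below (see the flair-lms repository and the official Flair language model tutorial). The corpus path, hidden size and batch settings here are illustrative and are not the exact settings used for the JW300 or per-language models.

```python
from flair.data import Dictionary
from flair.models import LanguageModel
from flair.trainers.language_model_trainer import LanguageModelTrainer, TextCorpus

# train a forward LM; repeat with is_forward_lm=False for the backward LM
is_forward_lm = True

# default character dictionary shipped with Flair
dictionary = Dictionary.load('chars')

# corpus folder with a train/ split directory plus valid.txt and test.txt
corpus = TextCorpus('corpora/jw300', dictionary, is_forward_lm, character_level=True)

# character-level LSTM language model (hidden size illustrative)
language_model = LanguageModel(dictionary, is_forward_lm, hidden_size=2048, nlayers=1)

trainer = LanguageModelTrainer(language_model, corpus)
trainer.train('resources/lms/jw300-forward',
              sequence_length=250,
              mini_batch_size=100,
              max_epochs=5)
```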

Results

The next table shows the results for all trained contextualized string embeddings (Flair Embeddings):

| Lang. | BiLSTM | Adv. | MultiBPEmb | BERT + BPEmb | JW300 | Flair Embeddings |
| ----- | -----: | ---: | ---------: | -----------: | ----: | ---------------: |
| Avg. | 96.40 | 96.65 | 96.62 | 96.77 | 96.77 | 97.59 |
| Indoeuropean | 96.63 | 96.94 | 96.99 | 97.12 | 96.99 | 97.78 |
| non-Indo. | 95.41 | 95.62 | 94.87 | 95.03 | 95.75 | 96.63 |
| Germanic | 95.49 | 95.80 | 95.97 | 96.15 | 96.21 | 96.89 |
| Romance | 96.93 | 97.31 | 97.25 | 97.35 | 97.33 | 97.81 |
| Slavic | 97.50 | 97.87 | 97.98 | 98.00 | 98.19 | 98.71 |
| bg | 97.97 | 98.53 | 98.70 | 98.70 | 98.97 | 99.15 |
| cs | 98.24 | 98.81 | 98.90 | 99.00 | 98.83 | 99.14 |
| da | 96.35 | 96.74 | 97.00 | 97.20 | 97.72 | 98.51‡ |
| de | 93.38 | 94.35 | 94.00 | 94.40 | 94.12 | 94.88 |
| en | 95.16 | 95.82 | 95.60 | 96.10 | 96.08 | 96.89 |
| es | 95.74 | 96.44 | 96.50 | 96.80 | 96.67 | 97.39 |
| eu | 95.51 | 94.71 | 95.60 | 96.00 | 96.11 | 97.32‡ |
| fa | 97.49 | 97.51 | 97.10 | 97.30 | 94.06 | 98.17 |
| fi | 95.85 | 95.40 | 94.60 | 94.30 | 96.59 | 98.09‡ |
| fr | 96.11 | 96.63 | 96.20 | 96.50 | 96.36 | 96.65 |
| he | 96.96 | 97.43 | 96.60 | 97.30 | 96.71 | 97.81 |
| hi | 97.10 | 97.97 | 97.00 | 97.40 | 97.18 | 97.89 |
| hr | 96.82 | 96.32 | 96.80 | 96.80 | 96.93 | 97.55 |
| id | 93.41 | 94.03 | 93.40 | 93.50 | 93.96 | 93.99 |
| it | 97.95 | 98.08 | 98.10 | 98.00 | 98.15 | 98.50 |
| nl | 93.30 | 93.09 | 93.80 | 93.30 | 93.06 | 93.85 |
| no | 98.03 | 98.08 | 98.10 | 98.50 | 98.38 | 98.74 |
| pl | 97.62 | 97.57 | 97.50 | 97.60 | 97.99 | 98.68‡ |
| pt | 97.90 | 98.07 | 98.20 | 98.10 | 98.15 | 98.71 |
| sl | 96.84 | 98.11 | 98.00 | 97.90 | 98.24 | 99.01 |
| sv | 96.69 | 96.70 | 97.30 | 97.40 | 97.91 | 98.49 |

BiLSTM refers to the tagger proposed by Plank et al. (2016). Adv. refers to adversarial training as proposed by Yasunaga et al. (2017).

MultiBPEmb and BERT + BPEmb refer to the Heinzerling and Strube (2019) paper.

‡ indicates a performance boost of > 1% compared to previous state-of-the-art.

Experiments

This section shows how to re-run and reproduce the PoS tagging results for the various languages.

Universal Dependencies v1.2

The train, dev and test datasets are taken from:

https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-1548

This is the Universal Dependencies corpus in version 1.2.

This data can be downloaded with:

curl --remote-name-all https://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11234/1-1548{/ud-treebanks-v1.2.tgz}

Extract the downloaded archive with:

tar -xzf ud-treebanks-v1.2.tgz

Runner

The configuration for all experiments is stored in JSON-based configuration files located in the ./configs folder. There you will find two subfolders: flair and jw300. flair refers to experiments that use Flair Embeddings, jw300 refers to experiments with the JW300 Flair Embeddings.

You can easily adjust hyper-parameters or even experiment with stacking more embeddings: just have a look at the configuration files. A minimal sketch of what stacking embeddings means in Flair is shown below.
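In Flair, stacking embeddings simply concatenates the individual embeddings per token. This sketch assumes the Flair API around version 0.4.x; the embedding identifiers are illustrative, and the JSON configuration files express the same idea declaratively.

```python
from flair.data import Sentence
from flair.embeddings import FlairEmbeddings, StackedEmbeddings, WordEmbeddings

# concatenate classic word embeddings with forward/backward Flair embeddings
stacked = StackedEmbeddings([
    WordEmbeddings('bg'),           # fastText word embeddings for Bulgarian
    FlairEmbeddings('bg-forward'),  # contextualized string embeddings, forward LM
    FlairEmbeddings('bg-backward'), # contextualized string embeddings, backward LM
])

sentence = Sentence('Това е изречение .')
stacked.embed(sentence)

for token in sentence:
    print(token.text, token.embedding.shape)
```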

The so-called "experiment runner" script has two arguments:

  • --number - which is used as a kind of identifier for an experiment

  • --config - which defines the path to the configuration file

Example usage: if you want to reproduce the experiment for Bulgarian, just use:

$ python run_experiment.py --config configs/flair/bg-flair.json --number 1
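The following is only a hypothetical illustration of that command-line interface, not the actual run_experiment.py; it shows the two documented flags and how a JSON configuration file would be read.

```python
import argparse
import json

# hypothetical sketch of the runner's CLI, not the actual run_experiment.py
parser = argparse.ArgumentParser(description='Run a PoS tagging experiment.')
parser.add_argument('--config', required=True, help='path to the JSON configuration file')
parser.add_argument('--number', required=True, help='identifier for this experiment run')
args = parser.parse_args()

with open(args.config, 'r', encoding='utf-8') as f:
    config = json.load(f)

print(f'Running experiment {args.number} with configuration {args.config}: {config}')
```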

Evaluation

In order to evaluate a trained PoS tagging model, just use the predict.py script. This script expects two arguments:

  • Language (name), like Bulgarian

  • Model path, like resources/taggers/experiment_Bulgarian_UD_with_Flair_Embeddings_2/best-model.pt

Please make sure that you use the full path, including the best-model.pt part!

Example usage:

$ python predict.py Bulgarian resources/taggers/experiment_Bulgarian_UD_with_Flair_Embeddings_1/best-model.pt
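Roughly, this is what the evaluation script does under the hood (a sketch, assuming a Flair version where SequenceTagger.load accepts a local model path; the model path mirrors the example above):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the best model from a finished experiment run
tagger = SequenceTagger.load(
    'resources/taggers/experiment_Bulgarian_UD_with_Flair_Embeddings_1/best-model.pt')

# tag a single (pre-tokenized) sentence and print the predicted PoS tags
sentence = Sentence('Това е изречение .')
tagger.predict(sentence)
print(sentence.to_tagged_string())
```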

Caveats

If you want to train a model for Czech, then you first need to concatenate all training files:

$ cd universal-dependencies-1.2/UD_Czech/
$ cat cs-ud-train-*.conllu > cs-ud-train.conll

We use all available training files for training a Czech model.

If you want to train a model on the JW300 corpus, you currently need to download these Flair Embeddings manually:

$ wget https://schweter.eu/cloud/flair-lms/lm-jw300-forward-v0.1.pt
$ wget https://schweter.eu/cloud/flair-lms/lm-jw300-backward-v0.1.pt

Then you can e.g. launch an experiment for Bulgarian using the JW300 Flair Embeddings:

$ python run_experiment.py --config configs/jw300/bg-jw300.json --number 2

More information about the JW300 model can be found here.
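If you want to use the downloaded JW300 language models directly in Python (outside the JSON configs), they can be wrapped as FlairEmbeddings by passing the local model path. This is a sketch and assumes the two files were downloaded into the current directory as shown above:

```python
from flair.embeddings import FlairEmbeddings, StackedEmbeddings

# wrap the locally downloaded JW300 language models as Flair embeddings
jw300_embeddings = StackedEmbeddings([
    FlairEmbeddings('lm-jw300-forward-v0.1.pt'),
    FlairEmbeddings('lm-jw300-backward-v0.1.pt'),
])
```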

ToDo

Citing

Originally, I wrote a short paper about the multilingual evaluation presented in this repository. The paper was accepted at the KONVENS 2019 conference. However, I withdrew the paper because I felt uncomfortable with the analysis section. For example, it is still unclear why the performance of the Finnish model is over 2% better than previous state-of-the-art models, even though the corpus used for pre-training is relatively small compared to other languages.

However, if you use the pre-trained language models in your work, please cite this GitHub repository.
