
torch-baidu-ctc


PyTorch bindings for Baidu's Warp-CTC. These bindings were inspired by SeanNaren's, but this version includes some bug fixes and offers some additional features.

import torch
from torch_baidu_ctc import ctc_loss, CTCLoss

# Activations. Shape T x N x D.
# T -> max number of frames/timesteps
# N -> minibatch size
# D -> number of output labels (including the CTC blank)
x = torch.rand(10, 3, 6)
# Target labels
y = torch.tensor(
    [
        # 1st sample
        1, 1, 2, 5, 2,
        # 2nd
        1, 5, 2,
        # 3rd
        4, 4, 2, 3,
    ],
    dtype=torch.int,
)
# Activations lengths
xs = torch.tensor([10, 6, 9], dtype=torch.int)
# Target lengths
ys = torch.tensor([5, 3, 4], dtype=torch.int)

# By default, the costs (negative log-likelihood) of all samples are summed.
# This is equivalent to:
#   ctc_loss(x, y, xs, ys, average_frames=False, reduction="sum")
loss1 = ctc_loss(x, y, xs, ys)

# You can also average the cost of each sample over its number of frames.
# The averaged costs are then summed.
loss2 = ctc_loss(x, y, xs, ys, average_frames=True)

# Instead of summing the costs of each sample, you can perform
# other `reductions`: "none", "sum", or "mean"
#
# Return an array with the loss of each individual sample
losses = ctc_loss(x, y, xs, ys, reduction="none")
#
# Compute the mean of the individual losses
loss3 = ctc_loss(x, y, xs, ys, reduction="mean")
#
# First normalize each loss by its number of frames, then average the losses
loss4 = ctc_loss(x, y, xs, ys, average_frames=True, reduction="mean")

# Finally, there's also a nn.Module wrapping this loss.
ctc = CTCLoss(average_frames=True, reduction="mean", blank=0)
loss4_2 = ctc(x, y, xs, ys)
# Note: the `blank` option is also available for `ctc_loss`.
# By default it is 0.
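The interaction between `average_frames` and `reduction` can be illustrated with a small pure-Python sketch. This only mirrors the semantics described in the comments above (normalize per sample first, then reduce); the per-sample loss values are made-up illustrative numbers, not the library's internals:

```python
def reduce_losses(losses, frames, average_frames=False, reduction="sum"):
    # Optionally normalize each sample's loss by its number of frames.
    if average_frames:
        losses = [l / f for l, f in zip(losses, frames)]
    if reduction == "none":
        return losses  # one value per sample
    if reduction == "sum":
        return sum(losses)  # the default behavior
    if reduction == "mean":
        return sum(losses) / len(losses)
    raise ValueError("unknown reduction: %r" % reduction)

# Hypothetical per-sample CTC costs and frame counts for a batch of 3.
sample_losses = [12.0, 6.0, 9.0]
frames = [10, 6, 9]

print(reduce_losses(sample_losses, frames))                       # 27.0
print(reduce_losses(sample_losses, frames, reduction="mean"))     # 9.0
print(reduce_losses(sample_losses, frames, average_frames=True))  # 3.2
```

Note that `average_frames` changes the relative weight of each sample: without it, long sequences dominate the total loss; with it, every sample contributes on a per-frame scale.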

Requirements

  • C++11 compiler (tested with GCC 4.9).

  • Python: 2.7, 3.5, 3.6, 3.7 (tested with versions 2.7, 3.5 and 3.6).

  • PyTorch >= 1.1.0 (tested with version 1.1.0).

  • For GPU support: CUDA Toolkit.

Installation

The installation process should be pretty straightforward assuming that you have correctly installed the required libraries and tools.

The setup process compiles the package from source, and will build with CUDA support if your PyTorch installation was built with CUDA.

From Pypi (recommended)

pip install torch-baidu-ctc

From GitHub

git clone --recursive https://github.com/jpuigcerver/pytorch-baidu-ctc.git
cd pytorch-baidu-ctc
python setup.py build
python setup.py install

AVX512 related issues

Some compilation problems may arise when using CUDA with newer host compilers that emit AVX512 instructions. In that case, install GCC 4.9 and use it as the host compiler for NVCC. You can do so by setting the CC and CXX environment variables before the build/install commands:

CC=gcc-4.9 CXX=g++-4.9 pip install torch-baidu-ctc

or (if you are using the GitHub source code):

CC=gcc-4.9 CXX=g++-4.9 python setup.py build

Testing

Once installed, you can test the library with unittest by running the following command:

python -m unittest torch_baidu_ctc.test

All tests should pass (CUDA tests are only executed if CUDA is available).
