pydrobert-kaldi
This is student-driven code, so don't expect a stable API. I'll try to use semantic versioning, but the best way to keep functionality stable is by forking.
Some Kaldi SWIG bindings for Python. I started this project because I wanted to seamlessly incorporate Kaldi's I/O mechanism into the gamut of Python-based data science packages (e.g. Theano, TensorFlow, CNTK, PyTorch). The code base is expanding to wrap more of Kaldi's feature processing and mathematical functions, and I eventually plan on adding hooks for Kaldi audio features and pre-/post-processing. However, I have no plans on porting any code involving modelling or decoding.
Most I/O can be performed with the `pydrobert.kaldi.io.open` function:

```python
>>> from pydrobert.kaldi import io
>>> with io.open('scp:foo.scp', 'bm') as f:
...     for matrix in f:
...         pass  # do something
```
`open` is a factory function that determines the appropriate underlying stream to open, much like Python's built-in `open`. The data types we can read (here, a `BaseMatrix`) are listed in `pydrobert.kaldi.io.enums.KaldiDataType`. Big data types, like matrices and vectors, are piped into Numpy arrays. Passing an extended filename (e.g. a path to a file on disk, `'-'` for stdin/stdout, `'gzip -c a.ark.gz |'`, etc.) opens a stream from which data types can be read one-by-one, in the order they were written. Alternatively, prepending the extended filename with `'ark[,[option_a[,option_b...]]:'` or `'scp[,...]:'` and specifying a data type allows one to open a Kaldi table for iterator-like sequential reading (`mode='r'`), dict-like random access reading (`mode='r+'`), or writing (`mode='w'`). For more information on the `open` function, consult its docstring. Information on Kaldi I/O can be found on the Kaldi website.
The submodule `pydrobert.kaldi.io.corpus` contains useful wrappers around Kaldi I/O to serve up batches of data to, say, a neural network:

```python
>>> from pydrobert.kaldi.io.corpus import ShuffledData
>>> train = ShuffledData('scp:feats.scp', 'scp:labels.scp', batch_size=10)
>>> for feat_batch, label_batch in train:
...     pass  # do something
```
By default, Kaldi error, warning, and critical messages are piped to standard error. The `pydrobert.kaldi.logging` submodule provides hooks into Python's native logging interface: the `logging` module. The `KaldiLogger` can handle stack traces from Kaldi C++ code, and there are a variety of decorators to adapt Kaldi logging patterns to Python logging patterns, or vice versa.
You'd likely want to explicitly handle logging when creating new Kaldi-style commands for the command line. `pydrobert.kaldi.io.argparse` provides `KaldiParser`, an `ArgumentParser` tailored to Kaldi inputs/outputs. It is used by a few command-line entry points added by this package. They are:
- `write-table-to-pickle`: Write the contents of a Kaldi table to pickle file(s). Good for late-night attempts at reaching a paper deadline.
- `write-pickle-to-table`: Write the contents of pickle file(s) to a Kaldi table.
- `write-table-to-torch-dir`: Write the contents of a Kaldi table into a directory as a series of PyTorch tensor files. Suitable for PyTorch data loaders with multiprocessing. Requires PyTorch.
- `write-torch-dir-to-table`: Write the contents of a directory of tensor files back to a Kaldi table. Requires PyTorch.
- `normalize-feat-lens`: Ensure that features have the same length as some reference, either by truncating or padding frames at the end.
- `compute-error-rate`: Compute an error rate between reference and hypothesis texts, such as a WER or PER.
Check the following compatibility table to see if you can get a PyPI or Conda install going:
| Platform | Arch | Python | Conda? | PyPI? |
|----------|------|--------|--------|-------|
| Windows  | 32   | 2.7    | No     | No    |
| Windows  | 32   | 3.5    | Yes    | No    |
| Windows  | 32   | 3.6    | Yes    | No    |
| Windows  | 32   | 3.7    | Yes    | No    |
| Windows  | 64   | 2.7    | No     | No    |
| Windows  | 64   | 3.5    | Yes    | No    |
| Windows  | 64   | 3.6    | Yes    | No    |
| Windows  | 64   | 3.7    | Yes    | No    |
| OSX      | 32   | -      | No     | No    |
| OSX      | 64   | 2.7    | Yes    | Yes   |
| OSX      | 64   | 3.5    | Yes    | Yes   |
| OSX      | 64   | 3.6    | Yes    | Yes   |
| OSX      | 64   | 3.7    | Yes    | Yes   |
| Linux    | 32   | -      | No     | No    |
| Linux    | 64   | 2.7    | Yes    | Yes   |
| Linux    | 64   | 3.5    | Yes    | Yes   |
| Linux    | 64   | 3.6    | Yes    | Yes   |
| Linux    | 64   | 3.7    | Yes    | Yes   |
To install via `conda`:

```shell
conda install -c sdrobert pydrobert-kaldi
```

To install via `pip`:

```shell
pip install pydrobert-kaldi
```
You can also try building from source, but you'll have to tell `setup.py` where your BLAS install is:

```shell
# for OpenBLAS
OPENBLASROOT=/path/to/openblas/install pip install git+https://github.com/sdrobert/pydrobert-kaldi.git

# for MKL
MKLROOT=/path/to/mkl/install pip install git+https://github.com/sdrobert/pydrobert-kaldi.git

# see setup.py for more options
```
This code is licensed under Apache 2.0. Code found under the `src/` directory has been primarily copied from Kaldi. `setup.py` is also strongly influenced by Kaldi's build configuration. Kaldi is likewise covered by the Apache 2.0 license; its specific license file was copied into `src/COPYING_Kaldi_Project` to live among its fellows.
Please see the pydrobert page for more details.