
TTS (Work in Progress...)

TTS aims to be a Text2Speech engine that is lightweight in computation while producing high-quality speech.

As a starting point, we provide a PyTorch implementation of Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model. We plan to improve the model over time with new architectural changes.

You can also find here a brief note outlining possible TTS architectures and comparing them.

Requirements

We highly recommend using miniconda for easier installation.

  • python 3.6

  • pytorch 0.4

  • librosa

  • tensorboard

  • tensorboardX

  • matplotlib

  • unidecode

Checkpoints and Audio Samples

Check out here to compare the audio samples of the models (except the first) listed below.

| Models          | Commit  | Audio Sample |
| --------------- |:-------:|:-------------|
| iter-62410      | 99d56f7 | link         |
| Best: iter-170K | e00bc66 | link         |

Data

Currently TTS provides data loaders for:

  • LJ Speech
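LJ Speech ships its transcripts as a pipe-separated metadata.csv (id|raw transcript|normalized transcript). As a rough sketch of what a loader has to parse (the repo's actual loader may differ), assuming that format:

```python
import csv
import io

# LJ Speech metadata.csv format: id|raw transcript|normalized transcript.
# The sample line below is illustrative, not copied from the real dataset.
sample_metadata = "LJ001-0001|Printing, in the only sense|Printing, in the only sense\n"

def parse_ljspeech_metadata(fileobj):
    """Yield (wav_id, normalized_text) pairs from an LJ Speech metadata file."""
    reader = csv.reader(fileobj, delimiter="|", quoting=csv.QUOTE_NONE)
    for row in reader:
        wav_id, _raw, normalized = row[0], row[1], row[2]
        yield wav_id, normalized

items = list(parse_ljspeech_metadata(io.StringIO(sample_metadata)))
```

Each yielded `wav_id` maps to `wavs/<wav_id>.wav` under the dataset root.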

Training the network

To run your own training, you need to define a config.json file (simple template below) and run the following command:

train.py --config_path config.json

If you like to use a specific set of GPUs:

CUDA_VISIBLE_DEVICES="0,1,4" train.py --config_path config.json
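Note that with CUDA_VISIBLE_DEVICES set, the selected GPUs are renumbered from zero inside the process; a quick sketch of what the training script would see:

```python
import os

# Mask GPUs before any CUDA context is created; physical GPUs 0, 1 and 4
# then show up inside the process as devices 0, 1 and 2.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,4"

visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
num_visible = len(visible)  # torch.cuda.device_count() would report this value
```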

Each run creates an experiment folder named with the corresponding date and time, under the folder you set in config.json. If there is no checkpoint yet under that folder, it is removed when you press Ctrl+C.

You can also enjoy Tensorboard with a couple of useful training logs, if you point --logdir to the experiment folder.

Example config.json:

{
  "num_mels": 80,
  "num_freq": 1025,
  "sample_rate": 22050,
  "frame_length_ms": 50,
  "frame_shift_ms": 12.5,
  "preemphasis": 0.97,
  "min_level_db": -100,
  "ref_level_db": 20,
  "embedding_size": 256,
  "text_cleaner": "english_cleaners",

  "epochs": 200,
  "lr": 0.002,
  "warmup_steps": 4000,
  "batch_size": 32,
  "eval_batch_size":32,
  "r": 5,
  "mk": 0.0,  // guidede attention loss weight. if 0 no use
  "priority_freq": true,  // freq range emphasis

  "griffin_lim_iters": 60,
  "power": 1.2,

  "dataset": "TWEB",
  "meta_file_train": "transcript_train.txt",
  "meta_file_val": "transcript_val.txt",
  "data_path": "/data/shared/BibleSpeech/",
  "min_seq_len": 0, 
  "num_loader_workers": 8,

  "checkpoint": true,  // if save checkpoint per save_step
  "save_step": 200,
  "output_path": "/path/to/my_experiment",
}
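Note that the template above uses //-style comments, which a strict JSON parser rejects. Below is a minimal sketch of loading such a file and deriving the usual STFT parameters from the audio settings; the repo's own config loader may handle this differently:

```python
import json
import re

# A trimmed config snippet in the same commented style as the template above.
config_text = """{
  "num_freq": 1025,
  "sample_rate": 22050,
  "frame_length_ms": 50,
  "frame_shift_ms": 12.5  // hop between frames
}"""

# Strip //-comments so the text parses as strict JSON.
cleaned = re.sub(r"//.*", "", config_text)
config = json.loads(cleaned)

# Common derivations for STFT-based audio pipelines:
n_fft = (config["num_freq"] - 1) * 2                                        # 2048
hop_length = int(config["frame_shift_ms"] / 1000 * config["sample_rate"])   # 275
win_length = int(config["frame_length_ms"] / 1000 * config["sample_rate"])  # 1102
```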

Testing

The best way to test your pretrained network is to use the notebook under the notebooks folder.

Contribution

Any kind of contribution is highly welcome, as we are propelled by the open-source spirit. If you like to add or edit things in the code, please also consider writing tests to verify your changes, so that we can be sure things stay on track as this repo gets bigger.
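As a sketch of what such a test might look like, using plain unittest (the helper below is hypothetical, not part of this repo):

```python
import unittest

def collapse_whitespace(text):
    """Hypothetical text-cleaning helper: squeeze whitespace runs to one space."""
    return " ".join(text.split())

class TestCollapseWhitespace(unittest.TestCase):
    def test_squeezes_runs(self):
        self.assertEqual(collapse_whitespace("hello   world\n"), "hello world")

    def test_empty_input(self):
        self.assertEqual(collapse_whitespace(""), "")

if __name__ == "__main__":
    unittest.main()
```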

TODO

Check out the issues and the Projects page.

References

Precursor implementations

  • https://github.com/keithito/tacotron (Dataset and text processing)

  • https://github.com/r9y9/tacotron_pytorch (Initial Tacotron architecture)
