
GPT-2-Speech-Output


OUTPUT ONLY!!!

See my GPT-2-Speech repo for a full Siri or Alexa-like interface :-)

To get speech up and running:

sudo pip3 install espeak pyttsx3 pydub gTTS

brew install ffmpeg (use apt-get on Linux)
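
Once everything is installed, a quick sanity check (a minimal sketch, not part of this repo) is to speak a test sentence through pyttsx3:

import pyttsx3

# pyttsx3 picks the platform driver automatically:
# NSSpeechSynthesizer on Mac, espeak on Linux.
engine = pyttsx3.init()
engine.say("GPT-2 speech output is working.")
engine.runAndWait()  # blocks until the sentence has been spoken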

Voices & Linux Support

The code is set up to run out of the box on Mac, but on Linux you will need to add a reference to the voice model (I find the Linux voices sound far more robotic than Apple's, though). You can also use the script below to list the other voices available on Mac.

import pyttsx3

engine = pyttsx3.init()
voices = engine.getProperty('voices')
for voice in voices:
    print("Voice:")
    print(" - ID: %s" % voice.id)
    print(" - Name: %s" % voice.name)
    print(" - Languages: %s" % voice.languages)
    print(" - Gender: %s" % voice.gender)
    print(" - Age: %s" % voice.age)

Next, open the "interactive_conditional_samples.py" file and edit line 18 to include the path to the voice model.
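
For illustration only (the actual contents of line 18 may differ in this repo), setting a voice in pyttsx3 looks like the sketch below; the IDs shown are typical of what the listing script above prints, not guaranteed names:

import pyttsx3

engine = pyttsx3.init()
# Substitute an ID printed by the listing script. Mac IDs look like
# 'com.apple.speech.synthesis.voice.Alex'; Linux (espeak) IDs are
# short names such as 'english'. Both here are examples only.
engine.setProperty('voice', 'com.apple.speech.synthesis.voice.Alex')
engine.say("Voice check")
engine.runAndWait()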

Remember to download the models as instructed below; setup is the same as for the upstream gpt-2 repo.
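
For reference, the upstream gpt-2 repo ships a download_model.py script, so fetching the released 124M model typically looks like this (see DEVELOPERS.md for the authoritative steps):

python3 download_model.py 124M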

gpt-2

Code from the paper "Language Models are Unsupervised Multitask Learners".

We have currently released small (124M parameter), medium (355M parameter), and large (774M parameter) versions of GPT-2*, with only the full model as of yet unreleased. We have also released a dataset for researchers to study their behaviors.

You can read about GPT-2 and release decisions in our original blog post and 6 month follow-up post.

* Note that our original parameter counts were wrong due to an error (in our previous blog posts and paper). Thus you may have seen small referred to as 117M and medium referred to as 345M.

Usage

This repository is meant to be a starting point for researchers and engineers to experiment with GPT-2.

For basic information, see our model card.
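
As a rough sketch of a first run (flag names follow the upstream gpt-2 sample scripts and may vary between versions):

python3 src/interactive_conditional_samples.py --model_name 124M --top_k 40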

Some caveats

  • GPT-2 models' robustness and worst case behaviors are not well-understood. As with any machine-learned model, carefully evaluate GPT-2 for your use case, especially if used without fine-tuning or in safety-critical applications where reliability is important.

  • The dataset our GPT-2 models were trained on contains many texts with biases and factual inaccuracies, and thus GPT-2 models are likely to be biased and inaccurate as well.

  • To avoid having samples mistaken as human-written, we recommend clearly labeling samples as synthetic before wide dissemination. Our models are often incoherent or inaccurate in subtle ways, which takes more than a quick read for a human to notice.

Work with us

Please let us know if you’re doing interesting research with or working on applications of GPT-2! We’re especially interested in hearing from and potentially working with those who are studying:

  • Potential malicious use cases and defenses against them (e.g. the detectability of synthetic text)

  • The extent of problematic content (e.g. bias) being baked into the models and effective mitigations

Development

See DEVELOPERS.md

Contributors

See CONTRIBUTORS.md

Citation

Please use the following bibtex entry:

@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}

Future work

We may release code for evaluating the models on various benchmarks.

We are still considering release of the larger models.

License

MIT

