
catalyst

2019-10-10


Reproducible and fast DL & RL


High-level utilities for PyTorch DL & RL research. Catalyst was developed with a focus on reproducibility, fast experimentation and reuse of code and ideas, so you can research and develop something new rather than write yet another regular train loop.

Break the cycle - use the Catalyst!


Installation

Common installation:

pip install -U catalyst

More specific with additional requirements:

pip install catalyst[dl]      # installs DL-based Catalyst with Weights & Biases support
pip install catalyst[rl]      # installs DL+RL based Catalyst
pip install catalyst[drl]     # installs DL+RL based Catalyst with Weights & Biases support
pip install catalyst[contrib] # installs DL+contrib based Catalyst
pip install catalyst[all]     # installs everything; convenient for deploying on a new server

Catalyst is compatible with Python 3.6+ and PyTorch 0.4.1+.

Docs and examples

API documentation and an overview of the library can be found in the Docs.

In the examples folder of the repository, you can find advanced tutorials and Catalyst best practices.

To learn more about Catalyst internals and stay aware of its most important features, you can read Catalyst-info, our blog where we regularly post notes about the framework.

Overview

Catalyst helps you write compact but full-featured DL & RL pipelines in a few lines of code. You get a training loop with metrics, early-stopping, model checkpointing and other features without the boilerplate.

Features

  • Universal train/inference loop.

  • Configuration files for model/data hyperparameters.

  • Reproducibility – all source code and environment variables will be saved.

  • Callbacks – reusable train/inference pipeline parts.

  • Training stages support.

  • Easy customization.

  • PyTorch best practices (SWA, AdamW, 1Cycle, FP16 and more).
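As a minimal sketch of two of the PyTorch best practices named above (AdamW and the 1Cycle schedule), the snippet below uses plain PyTorch only, with no Catalyst-specific API; the toy model, learning rates and step count are invented for illustration:

```python
import torch

# toy model; in practice this would be your network
model = torch.nn.Linear(10, 2)

# AdamW: Adam with decoupled weight decay
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

# 1Cycle policy: lr warms up towards max_lr, then anneals back down
total_steps = 100
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-2, total_steps=total_steps
)

lrs = []
for step in range(total_steps):
    # ... forward pass, loss.backward() would go here ...
    optimizer.step()   # no gradients in this sketch, so this is a no-op
    scheduler.step()
    lrs.append(optimizer.param_groups[0]["lr"])
```

After the loop, `lrs` traces the characteristic 1Cycle shape: it starts below `max_lr`, rises during the warm-up fraction of training, and decays well below the peak by the final step.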

Structure

  • DL – runner for training and inference, all of the classic machine learning and computer vision metrics and a variety of callbacks for training, validation and inference of neural networks.

  • RL – scalable Reinforcement Learning, on-policy & off-policy algorithms and their improvements with distributed training support.

  • contrib – additional modules contributed by Catalyst users.

  • data – useful tools and scripts for data processing.

Getting started: 30 seconds with Catalyst

import torch
from catalyst.dl import SupervisedRunner

# experiment setup
logdir = "./logdir"
num_epochs = 42

# dataloaders = {"train": ..., "valid": ...}

# model, criterion, optimizer
model = Net()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)

# model runner
runner = SupervisedRunner()

# model training
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    scheduler=scheduler,
    loaders=loaders,
    logdir=logdir,
    num_epochs=num_epochs,
    verbose=True,
)
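The loaders dict in the snippet above is left as a placeholder. As a hedged sketch, it can be filled with ordinary `torch.utils.data.DataLoader` objects; the synthetic dataset shapes below are invented for illustration:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# synthetic classification data: 128 samples, 10 features, 2 classes
X = torch.randn(128, 10)
y = torch.randint(0, 2, (128,))

train_ds = TensorDataset(X[:96], y[:96])
valid_ds = TensorDataset(X[96:], y[96:])

# a dict mapping loader names to DataLoaders;
# "train" and "valid" are the conventional keys
loaders = {
    "train": DataLoader(train_ds, batch_size=32, shuffle=True),
    "valid": DataLoader(valid_ds, batch_size=32),
}
```

Any mapping from names to DataLoaders works the same way; the runner iterates the "train" loader with gradients enabled and the "valid" loader in evaluation mode.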

For Catalyst.RL introduction, please follow OpenAI Gym example.

Docker

Please see the docker folder for more information and examples.

Contribution guide

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us.

Please see the contribution guide for more information.


Citation

Please use this bibtex if you want to cite this repository in your publications:

@misc{catalyst,
    author = {Kolesnikov, Sergey},
    title = {Reproducible and fast DL & RL.},
    year = {2018},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/catalyst-team/catalyst}},
}

