BoTorch
BoTorch is a library for Bayesian Optimization built on PyTorch.
BoTorch is currently in beta and under active development!
BoTorch:
Provides a modular and easily extensible interface for composing Bayesian optimization primitives, including probabilistic models, acquisition functions, and optimizers.
Harnesses the power of PyTorch, including auto-differentiation, native support for highly parallelized modern hardware (e.g. GPUs) using device-agnostic code, and a dynamic computation graph.
Supports Monte Carlo-based acquisition functions via the reparameterization trick, which makes it straightforward to implement new ideas without having to impose restrictive assumptions about the underlying model.
Enables seamless integration with deep and/or convolutional architectures in PyTorch.
Has first-class support for state-of-the-art probabilistic models in GPyTorch, including support for multi-task Gaussian Processes (GPs), deep kernel learning, deep GPs, and approximate inference.
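The Monte Carlo acquisition idea above can be illustrated in plain PyTorch (a minimal sketch, not the BoTorch API): posterior samples are written as mu + L z with z ~ N(0, I), so the base samples are fixed and gradients flow through the posterior mean and covariance factor.

```python
import torch

torch.manual_seed(0)
# Toy posterior over q=2 candidate points (values are illustrative only).
mu = torch.tensor([0.2, 0.5], requires_grad=True)  # posterior mean
L = torch.eye(2) * 0.1                             # Cholesky factor of posterior covariance
best_f = 0.4                                       # incumbent best observed value

# Reparameterization trick: fixed base samples, differentiable transform.
z = torch.randn(256, 2)
samples = mu + z @ L.t()

# Monte Carlo q-Expected Improvement: improvement of the best point in each
# joint sample over the incumbent, averaged over samples.
qEI = (samples - best_f).clamp_min(0).max(dim=-1).values.mean()
qEI.backward()  # gradients w.r.t. mu are now available for the optimizer
```

Because the acquisition value is an average of differentiable functions of the base samples, any model whose posterior can be sampled this way can be plugged in without closed-form assumptions.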
The primary audience for hands-on use of BoTorch is researchers and sophisticated practitioners in Bayesian Optimization and AI. We recommend using BoTorch as a low-level API for implementing new algorithms for Ax. Ax has been designed to be an easy-to-use platform for end-users, which is at the same time flexible enough for Bayesian Optimization researchers to plug into for handling of feature transformations, (meta-)data management, storage, etc. We recommend that end-users who are not actively doing research on Bayesian Optimization simply use Ax.
Installation Requirements
Python >= 3.6
PyTorch >= 1.2
gpytorch >= 0.3.5
scipy
The latest release of BoTorch is easily installed either via Anaconda (recommended):
conda install botorch -c pytorch -c gpytorch
or via pip:
pip install botorch
You can customize your PyTorch installation (i.e. CUDA version, CPU only option) by following the PyTorch installation instructions.
Important note for macOS users:
You will want to make sure your PyTorch build is linked against MKL (without MKL, BoTorch can be up to an order of magnitude slower in some settings). Setting this up manually on macOS can be tricky; to ensure it works properly, please follow the PyTorch installation instructions.
If you need CUDA on macOS, you will need to build PyTorch from source. Please consult the PyTorch installation instructions above.
If you'd like to try our bleeding edge features (and don't mind potentially running into the occasional bug here or there), you can install the latest master directly from GitHub (this will also require installing the current GPyTorch master):
pip install git+https://github.com/cornellius-gp/gpytorch.git
pip install git+https://github.com/pytorch/botorch.git
Manual / Dev install
Alternatively, you can do a manual install. For a basic install, run:
git clone https://github.com/pytorch/botorch.git
cd botorch
pip install -e .
To customize the installation, you can also run the following variants of the above:
pip install -e .[dev]: Also installs all tools necessary for development (testing, linting, docs building; see Contributing below).
pip install -e .[tutorials]: Also installs all packages necessary for running the tutorial notebooks.
Here's a quick run down of the main components of a Bayesian optimization loop. For more details see our Documentation and the Tutorials.
Fit a Gaussian Process model to data
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_model
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(10, 2)
Y = 1 - (train_X - 0.5).norm(dim=-1, keepdim=True)  # explicit output dimension
Y += 0.1 * torch.rand_like(Y)
train_Y = (Y - Y.mean()) / Y.std()

gp = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
fit_gpytorch_model(mll)
Construct an acquisition function
from botorch.acquisition import UpperConfidenceBound

UCB = UpperConfidenceBound(gp, beta=0.1)
Optimize the acquisition function
from botorch.optim import optimize_acqf

bounds = torch.stack([torch.zeros(2), torch.ones(2)])
candidate, acq_value = optimize_acqf(
    UCB, bounds=bounds, q=1, num_restarts=5, raw_samples=20,
)
See the CONTRIBUTING file for how to help out.
BoTorch is MIT licensed, as found in the LICENSE file.