
torch-nccl

2019-12-24

nccl.torch

Torch7 FFI bindings for the NVIDIA NCCL library.

Installation
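This section is empty in the source. Assuming the package is published as a LuaRocks rock named nccl (the rock name is an assumption; the NCCL C library and CUDA must already be installed on the system), installation would typically be:

```shell
# Assumes the NCCL shared library is already installed and on the
# dynamic loader path; the rock name "nccl" is an assumption.
luarocks install nccl
```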

Collective operations supported

  • allReduce

  • reduce

  • broadcast

  • allGather

Example usage

The argument to a collective call should be a table of contiguous tensors located on different devices. For example, to perform an in-place allReduce on a table of tensors:

require 'nccl'
nccl.allReduce(inputs)

where inputs is a table of contiguous tensors of the same size, each located on a different device.
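For context, here is a minimal sketch of how such an input table might be built with cutorch before the collective call. The device loop, tensor size, and the commented root argument for rooted collectives are assumptions for illustration, not details stated in this README:

```lua
require 'cutorch'
require 'nccl'

-- Build one contiguous CUDA tensor per visible GPU
-- (the tensor size 1024 is purely illustrative).
local inputs = {}
for dev = 1, cutorch.getDeviceCount() do
   cutorch.setDevice(dev)
   inputs[dev] = torch.CudaTensor(1024):fill(dev)
end

-- In-place sum across devices: afterwards every tensor in the
-- table holds the elementwise sum over all devices.
nccl.allReduce(inputs)

-- Rooted collectives such as broadcast presumably take a root
-- index as well (an assumption based on NCCL's C API, not
-- confirmed by this README):
-- nccl.broadcast(inputs, 1)
```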

