
modulated-deform-conv

2020-04-01


This project is a PyTorch C++ and CUDA extension. It implements the forward and backward functions of deformable-conv2d, modulated-deformable-conv2d, deformable-conv3d, and modulated-deformable-conv3d in C++ and CUDA, and wraps them as a Python package.

Install

  • run pip install modulated-deform-conv

  • or git clone https://github.com/CHONSPQX/modulated-deform-conv.git, then cd modulated-deform-conv and run python setup.py install

Requirements

  • Python 3

  • Pytorch>=1.3

  • For Linux, gcc version >= 4.9

  • For Windows, the CUDA version must be compatible with the Visual Studio version

Because of limited resources, only the following environments have been tested:

  • Ubuntu 18.04, gcc 7.4, CUDA 10.2, Python 3.7.4, PyTorch 1.3.1

  • Ubuntu 18.04, gcc 7.4, CUDA 10.2, Python 3.7.4, PyTorch 1.4.0

  • Windows 10, Visual Studio 2017, CUDA 10.1, Python 3.7.6, PyTorch 1.4.0

Speed Optimization

  • Run pip download modulated-deform-conv and unzip the downloaded archive, cd modulated-deform-conv, then open src/config.h. Users can set the following two variables according to their NVIDIA GPU to get a faster run speed, then run python setup.py install:

    • const int CUDA_NUM_THREADS

    • const int MAX_GRID_NUM

  • Alternatively, users can pass a different in_step value at run time; this variable controls the batch size processed in parallel at each step.
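The batching that in_step implies can be sketched in plain Python: the batch dimension is split into chunks of at most in_step samples, and each chunk is handled as one parallel launch, so a smaller in_step trades parallelism for lower peak GPU memory. The helper below is a hypothetical illustration, not part of the package's API.

```python
def split_batch(batch_size, in_step):
    """Yield (start, end) index pairs covering the batch in chunks of
    at most `in_step` samples, mirroring how in_step controls how many
    samples are processed in parallel at each step."""
    for start in range(0, batch_size, in_step):
        yield start, min(start + in_step, batch_size)

# A batch of 10 with in_step=4 is processed in 3 launches:
print(list(split_batch(10, 4)))  # [(0, 4), (4, 8), (8, 10)]
```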

Usage

To use the C++ functions directly, import MDCONV_CUDA. To use the wrapped Python classes, import modulated_deform_conv.

Documentation

1. C++ and CUDA Code

  • Files

| Filename | Content |
| --- | --- |
| config.h | macros & global variables & inline functions |
| deformable_conv.cu | MDCONV_CUDA.deform_conv2d_forward_cuda, MDCONV_CUDA.deform_conv2d_backward_cuda |
| mdeformable_conv.cu | MDCONV_CUDA.modulated_deform_conv2d_forward_cuda, MDCONV_CUDA.modulated_deform_conv2d_backward_cuda |
| deformable_conv3d.cu | MDCONV_CUDA.deform_conv3d_forward_cuda, MDCONV_CUDA.deform_conv3d_backward_cuda |
| mdeformable_conv3d.cu | MDCONV_CUDA.modulated_deform_conv3d_forward_cuda, MDCONV_CUDA.modulated_deform_conv3d_backward_cuda |
| utils.cu | some code for displaying debug output |
| warp.cpp | glue code between C++ and Python |
  • Variables

| Variable Name | Type | Introduction |
| --- | --- | --- |
| kernel_h | const int | first dimension size of the convolution kernel |
| kernel_w | const int | second dimension size of the convolution kernel |
| kernel_l | const int | third dimension size of the convolution kernel |
| stride_h | const int | stride for the first dimension |
| stride_w | const int | stride for the second dimension |
| stride_l | const int | stride for the third dimension |
| pad_h | const int | zero padding for the first dimension |
| pad_w | const int | zero padding for the second dimension |
| pad_l | const int | zero padding for the third dimension |
| dilation_h | const int | dilation rate for the first dimension |
| dilation_w | const int | dilation rate for the second dimension |
| dilation_l | const int | dilation rate for the third dimension |
| group | const int | number of convolution groups |
| deformable_group | const int | number of offset and mask groups |
| in_step | const int | batch size of each parallel processing step |
| with_bias | const bool | whether a bias is used |
| input | at::Tensor | shape (B, I, H, W[, L]); I must be divisible by group and deformable_group |
| grad_input | at::Tensor | must be the same size as input |
| weight | at::Tensor | shape (O, I/group, H, W[, L]); O must be divisible by group |
| grad_weight | at::Tensor | must be the same size as weight |
| bias | at::Tensor | shape (O); if with_bias=true, bias must be non-null |
| grad_bias | at::Tensor | must be the same size as bias |
| offset | at::Tensor | shape (B, deformable_group\*2\*kernel_h\*kernel_w, H, W) in 2D, or (B, deformable_group\*3\*kernel_h\*kernel_w\*kernel_l, H, W, L) in 3D |
| grad_offset | at::Tensor | must be the same size as offset |
| mask | at::Tensor | shape (B, deformable_group\*kernel_h\*kernel_w, H, W) in 2D, or (B, deformable_group\*kernel_h\*kernel_w\*kernel_l, H, W, L) in 3D |
| grad_mask | at::Tensor | must be the same size as mask |
| output | at::Tensor | shape (B, O, OH, OW[, OL]) |
| grad_output | at::Tensor | must be the same size as output |
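As a sanity check on the shapes above, the output spatial sizes (OH, OW, OL) follow the standard dilated-convolution formula, and the offset/mask channel counts follow directly from the table entries (2 offset coordinates per sampling point in 2D, 3 in 3D, and one mask scalar per point). A minimal sketch in plain Python, with no dependency on the extension itself:

```python
def conv_out_size(size, kernel, stride, pad, dilation):
    """Output size of one spatial dimension for a (dilated) convolution."""
    return (size + 2 * pad - dilation * (kernel - 1) - 1) // stride + 1

def offset_mask_channels(deformable_group, kernel_h, kernel_w, kernel_l=None):
    """Channel counts expected for the offset and mask tensors.

    2D: 2 offset coordinates (h, w) per kernel sampling point;
    3D: 3 offset coordinates (h, w, l) per kernel sampling point;
    the mask has one scalar per sampling point in both cases.
    """
    points = kernel_h * kernel_w * (kernel_l if kernel_l else 1)
    coords = 3 if kernel_l else 2
    return deformable_group * coords * points, deformable_group * points

# A 32x32 input with a 3x3 kernel, stride 1, pad 1, dilation 1
# keeps its spatial size:
print(conv_out_size(32, 3, 1, 1, 1))    # 32
# With deformable_group=1 and a 3x3 kernel, offset has 18 channels
# and mask has 9:
print(offset_mask_channels(1, 3, 3))    # (18, 9)
```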

2. Python Code

| Class Name | Type |
| --- | --- |
| DeformConv2dFunction | torch.autograd.Function |
| ModulatedDeformConv2dFunction | torch.autograd.Function |
| DeformConv3dFunction | torch.autograd.Function |
| ModulatedDeformConv3dFunction | torch.autograd.Function |
| DeformConv2d | torch.nn.Module |
| ModulatedDeformConv2d | torch.nn.Module |
| DeformConv3d | torch.nn.Module |
| ModulatedDeformConv3d | torch.nn.Module |

Author

Xin Qiao qiaoxin182@gmail.com

License

Copyright (c) 2020 Xin Qiao. Released under the MIT license.

