Dense Transformer Networks for Brain Electron Microscopy Image Segmentation

2019-10-08
Abstract

The key idea of current deep learning methods for dense prediction is to apply a model to a regular patch centered on each pixel to make pixel-wise predictions. These methods are limited in the sense that the patches are determined by the network architecture rather than learned from data. In this work, we propose the dense transformer networks, which can learn the shapes and sizes of patches from data. The dense transformer networks employ an encoder-decoder architecture, and a pair of dense transformer modules is inserted into the encoder and decoder paths. The novelty of this work is that we provide technical solutions for learning the shapes and sizes of patches from data and for efficiently restoring the spatial correspondence required for dense prediction. The proposed dense transformer modules are differentiable, thus the entire network can be trained end-to-end. We apply the proposed networks to biological image segmentation tasks and show that superior performance is achieved in comparison to baseline methods.
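
The sketch below illustrates the general idea in PyTorch, under simplifying assumptions: it is not the authors' implementation. A plain affine spatial transformer stands in for the paper's dense transformer module (the paper learns a richer, thin-plate-spline-style deformation), one transformer is placed in the encoder path, and its inverse is applied in the decoder path to restore the spatial correspondence needed for dense prediction. All class and variable names here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AffineTransformer(nn.Module):
    """Predicts an affine transform from a feature map and resamples it."""
    def __init__(self, channels):
        super().__init__()
        self.loc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, 6),
        )
        # Start from the identity transform so training is stable early on.
        nn.init.zeros_(self.loc[-1].weight)
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)            # learned transform parameters
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False), theta

    @staticmethod
    def inverse(x, theta):
        # Invert the affine transform and resample, restoring correspondence.
        bottom = theta.new_tensor([0., 0., 1.]).expand(theta.size(0), 1, 3)
        inv = torch.inverse(torch.cat([theta, bottom], dim=1))[:, :2, :]
        grid = F.affine_grid(inv, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)


class TinyDTN(nn.Module):
    """Toy encoder-decoder with a transformer in the encoder path and its
    inverse in the decoder path (a stand-in for the dense transformer modules)."""
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.transformer = AffineTransformer(32)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, num_classes, 1))

    def forward(self, x):
        feat = self.enc(x)
        warped, theta = self.transformer(feat)        # encoder-path transform
        out = self.dec(warped)
        return AffineTransformer.inverse(out, theta)  # decoder-path inverse


if __name__ == "__main__":
    logits = TinyDTN()(torch.randn(1, 1, 64, 64))
    print(logits.shape)  # torch.Size([1, 2, 64, 64])
```

Because the sampling in both modules is differentiable, the transform parameters are learned jointly with the segmentation weights by ordinary backpropagation, which is what allows the effective patch shapes and sizes to be learned from data rather than fixed by the architecture.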
