
H-Net: Neural Network for Cross-domain Image Patch Matching

2019-11-07
Abstract: Describing the same scene with different imaging styles, or rendering images from its 3D model, yields images from different domains. Such images tend to exhibit a domain gap and differing local appearances, which raises the main challenge in cross-domain image patch matching. In this paper, we propose to incorporate an AutoEncoder into a Siamese network, named H-Net, whose structural shape resembles the letter H. H-Net achieves state-of-the-art performance on cross-domain image patch matching. Furthermore, we improve H-Net to H-Net++, which extracts invariant feature descriptors from cross-domain image patches and achieves state-of-the-art performance via feature retrieval in Euclidean space. As no benchmark dataset of cross-domain images exists, we built a cross-domain image dataset consisting of camera images, images rendered from a UAV 3D model, and images generated by the CycleGAN algorithm. Experiments show that the proposed H-Net and H-Net++ outperform existing algorithms. Our code and cross-domain image dataset are available at https://github.com/Xylon-Sean/H-Net.
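The abstract's claim that H-Net++ enables "feature retrieval in Euclidean space" can be illustrated with a minimal sketch (not the authors' code): once a network has mapped patches from both domains into a shared descriptor space, matching reduces to nearest-neighbour search under the L2 distance. The descriptor dimension (128) and the toy data below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical descriptors: 5 camera-image patches and their 5 rendered
# counterparts, each a 128-D vector (dimension is an assumption).
camera_desc = rng.standard_normal((5, 128))
render_desc = camera_desc + 0.05 * rng.standard_normal((5, 128))  # noisy copies

# Pairwise Euclidean distances between the two domains (5 x 5 matrix).
dists = np.linalg.norm(camera_desc[:, None, :] - render_desc[None, :, :], axis=-1)

# Each camera patch is matched to its nearest rendered patch.
matches = dists.argmin(axis=1)
print(matches)  # with this toy data, camera patch i matches rendered patch i
```

If the learned descriptors are truly domain-invariant, the within-pair distance stays small relative to cross-pair distances, so this simple argmin retrieval recovers the correct correspondences.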
