
Texture Representations for Image and Video Synthesis

2019-12-19

Abstract

In texture synthesis and classification, algorithms require a small texture to be provided as input, which is assumed to be representative of a larger region to be resynthesized or categorized. We focus on how to characterize such textures and retrieve them automatically. Most prior work generates these small input textures manually by cropping, which guarantees neither maximal compression nor that the selection best represents the original. We construct a new representation that compactly summarizes a texture using less storage, and that can be used for texture compression and synthesis. We also demonstrate how the representation can be integrated into our proposed video texture synthesis algorithm to generate novel texture instances and to perform video hole-filling. Finally, we propose a novel criterion that measures structural and statistical dissimilarity between textures.
