Abstract
We present a principled approach to uncovering the structure of visual data by solving a novel deep learning task, which we coin visual permutation learning. The goal of this task is
to find the permutation that recovers the structure of data
from shuffled versions of it. In the case of natural images,
this task boils down to recovering the original image from
patches shuffled by an unknown permutation matrix. Unfortunately, permutation matrices are discrete, which poses difficulties for gradient-based methods. To overcome this, we relax these matrices to continuous doubly-stochastic matrices, which we generate from standard CNN predictions using Sinkhorn iterations. Unrolling these iterations into a Sinkhorn network layer, we propose DeepPermNet, an end-to-end CNN model for this task.
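To make the relaxation concrete, the Sinkhorn step can be sketched as alternating row and column normalization of a positive matrix, which converges toward a doubly-stochastic matrix. The snippet below is an illustrative NumPy sketch, not the paper's implementation; the function name `sinkhorn`, the exponentiation of raw scores, and the fixed iteration count are assumptions for the example.

```python
import numpy as np

def sinkhorn(scores, n_iters=20):
    """Map raw (real-valued) scores to an approximately doubly-stochastic
    matrix by alternating row and column normalization (Sinkhorn iterations)."""
    P = np.exp(scores)  # exponentiate so all entries are strictly positive
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)  # normalize rows to sum to 1
        P = P / P.sum(axis=0, keepdims=True)  # normalize columns to sum to 1
    return P

# Example: relax a 4x4 score matrix (e.g., CNN outputs) into a
# doubly-stochastic matrix whose rows and columns each sum to ~1.
rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(4, 4)))
```

Because each normalization is differentiable, unrolling a fixed number of these iterations yields a layer through which gradients can flow, which is what allows the permutation-prediction task to be trained end to end.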
The utility of DeepPermNet is demonstrated on two
challenging computer vision problems, namely, (i) relative
attributes learning and (ii) self-supervised representation
learning. Our results show state-of-the-art performance on
the Public Figures and OSR benchmarks for (i) and on the
classification and segmentation tasks on the PASCAL VOC
dataset for (ii).