Diverse feature visualizations reveal invariances
in early layers of deep neural networks
Abstract. Visualizing features in deep neural networks (DNNs) can
help us understand their computations. Many previous studies have aimed to
visualize the selectivity of individual units by finding meaningful images
that maximize their activation. However, comparably little attention has
been paid to visualizing to what image transformations units in DNNs
are invariant. Here we propose a method to discover invariances in the
responses of hidden layer units of deep neural networks. Our approach
is based on simultaneously searching for a batch of images that strongly
activate a unit while at the same time being as distinct from each other
as possible. We find that even early convolutional layers in VGG-19 exhibit various forms of response invariance: near-perfect phase invariance
in some units and invariance to local diffeomorphic transformations in
others. At the same time, we uncover representational differences with
ResNet-50 in its corresponding layers. We conclude that invariance transformations are a major computational component learned by DNNs and
we provide a systematic method to study them.
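The core idea of the abstract — optimizing a batch of images to strongly activate a unit while penalizing their mutual similarity — can be sketched as a simple objective. The following is a minimal NumPy illustration, not the paper's exact formulation: the function names, the cosine-similarity diversity penalty, and the weight `lam` are all assumptions chosen for clarity.

```python
import numpy as np

def diverse_objective(batch, unit_response, lam=0.1):
    """Score a batch of images: high mean unit activation, low mutual similarity.

    batch         -- list of equally-shaped NumPy arrays (the candidate images)
    unit_response -- callable mapping one image to a scalar activation
                     (a stand-in for a hidden unit of a real network)
    lam           -- illustrative weight trading activation against diversity
    """
    # Activation term: mean response of the unit across the batch.
    act = np.mean([unit_response(x) for x in batch])

    # Diversity term: mean pairwise cosine similarity between flattened images
    # (off-diagonal entries only); lower similarity means a more diverse batch.
    flat = np.stack([x.ravel() for x in batch])
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)
    sim = flat @ flat.T
    n = len(batch)
    mean_off_diag = (sim.sum() - np.trace(sim)) / (n * (n - 1))

    # Maximizing this value favors images that all drive the unit strongly
    # yet differ from one another.
    return act - lam * mean_off_diag
```

In the paper's setting this objective would be maximized by gradient ascent on the pixels of the batch, with `unit_response` given by a hidden unit of VGG-19 or ResNet-50; here a toy linear "unit" suffices to show that, at equal activation, a more diverse batch scores higher.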