Abstract
Bilinear pooling has been recently proposed as a feature
encoding layer, which can be used after the convolutional
layers of a deep network, to improve performance in multiple vision tasks. Unlike conventional global average pooling or fully connected layers, bilinear pooling gathers 2nd order information in a translation-invariant fashion. However, a serious drawback of this family of pooling layers is their dimensionality explosion. Compact approximate pooling methods have been explored to resolve this weakness. Additionally, recent results have shown that significant performance gains can be achieved by adding 1st order information and applying matrix normalization to regularize unstable higher-order information. However, combining compact pooling with matrix normalization and 1st order information has not been explored until now.
explored until now. In this paper, we unify bilinear pooling and the global Gaussian embedding layers through the
empirical moment matrix. In addition, we propose a novel
sub-matrix square-root layer, which can be used to normalize the output of the convolution layer directly and mitigate
the dimensionality problem with off-the-shelf compact pooling methods. Our experiments on three widely used fine-grained classification datasets demonstrate that our proposed architecture, MoNet, achieves similar or better performance than the state-of-the-art G2DeNet. Furthermore, when combined with a compact pooling technique, MoNet achieves comparable performance with encoded features with 96% fewer dimensions.
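As a minimal sketch of the unification claimed above (function names and the NumPy formulation are illustrative, not the paper's implementation): bilinear pooling of N local convolutional features X in R^{N x C} is the averaged outer product X^T X / N; prepending a constant 1 to each feature vector and forming the same averaged outer product yields the empirical moment matrix, whose first row/column holds the 1st order statistics (the mean) and whose remaining block is exactly the bilinear-pooled 2nd order statistics.

```python
import numpy as np

def bilinear_pool(X):
    """Bilinear pooling: averaged outer product of local features.

    X: (N, C) array of N local conv features with C channels.
    Returns a (C, C) matrix of 2nd order statistics.
    """
    N = X.shape[0]
    return X.T @ X / N

def moment_matrix(X):
    """Empirical moment matrix: append 1 to each feature, then pool.

    Returns a (C+1, C+1) matrix whose [0, 1:] row is the feature mean
    (1st order) and whose [1:, 1:] block equals bilinear_pool(X).
    """
    N = X.shape[0]
    X1 = np.hstack([np.ones((N, 1)), X])  # (N, C+1) homogeneous features
    return X1.T @ X1 / N
```

Under this view, bilinear pooling and a global Gaussian embedding differ only in how the blocks of the same moment matrix are used, which is the unification the abstract refers to.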