Abstract
Semantic embeddings for images and sentences have been widely studied recently. The ability of deep neural networks to learn rich and robust visual and textual representations offers the opportunity to develop effective semantic embedding models. Current state-of-the-art approaches first employ deep neural networks to encode images and sentences into a common semantic space; the learning objective then ensures that matching image-sentence pairs receive a higher similarity than randomly sampled pairs. Typically, Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are employed to learn image and sentence representations, respectively. On one hand, CNNs are known to produce robust visual features at different levels, and RNNs are known for capturing dependencies in sequential data, so this simple framework can be effective for learning visual and textual semantics. On the other hand, unlike CNNs, RNNs cannot produce middle-level (e.g., phrase-level in text) representations, so only global representations are available for semantic learning. This can limit model performance, given the hierarchical structure of images and sentences. In this work, we apply Convolutional Neural Networks to process both images and sentences. Consequently, we can employ mid-level representations to assist global semantic learning by introducing a new learning objective on the convolutional layers. Experimental results show that our proposed textual CNN models with the new learning objective outperform state-of-the-art approaches.
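For concreteness, below is a minimal sketch of the pairwise ranking objective described above, in which matching image-sentence pairs must score higher than randomly sampled (mismatched) pairs. This is an illustrative implementation, not the paper's code: PyTorch, cosine similarity as the scoring function, and the margin value are all assumptions.

```python
# Hedged sketch of a bidirectional max-margin ranking loss in a common
# semantic space. Not the authors' implementation; PyTorch, cosine
# similarity, and margin=0.2 are illustrative assumptions.
import torch
import torch.nn.functional as F

def ranking_loss(image_emb, sent_emb, margin=0.2):
    """image_emb, sent_emb: (batch, dim) embeddings in the common space;
    row i of each tensor forms a matching image-sentence pair."""
    # L2-normalize so the dot product is cosine similarity.
    image_emb = F.normalize(image_emb, dim=1)
    sent_emb = F.normalize(sent_emb, dim=1)

    # Similarity matrix; the diagonal holds matching-pair scores.
    scores = image_emb @ sent_emb.t()
    pos = scores.diag().view(-1, 1)

    # Hinge cost: each mismatched pair should trail its matching pair
    # by at least `margin`, in both retrieval directions.
    cost_s = F.relu(margin + scores - pos)      # image -> sentence
    cost_i = F.relu(margin + scores - pos.t())  # sentence -> image

    # Matching pairs (the diagonal) incur no cost.
    mask = torch.eye(scores.size(0), dtype=torch.bool,
                     device=scores.device)
    cost_s = cost_s.masked_fill(mask, 0)
    cost_i = cost_i.masked_fill(mask, 0)
    return (cost_s + cost_i).sum()
```

In this sketch, the in-batch mismatched pairs play the role of the randomly sampled pairs mentioned above; the same loss form could be applied at the convolutional layers to mid-level representations, as the abstract proposes.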