Learning semantic relationships for better action retrieval in images

2019-12-19

Abstract

Human actions capture a wide variety of interactions between people and objects. As a result, the set of possible actions is extremely large and it is difficult to obtain sufficient training examples for all actions. However, we could compensate for this sparsity in supervision by leveraging the rich semantic relationships between different actions. A single action is often composed of other smaller actions and is exclusive of certain others. We need a method that can reason about such relationships and extrapolate unobserved actions from known actions. Hence, we propose a novel neural network framework which jointly extracts the relationships between actions and uses them for training better action retrieval models. Our model incorporates linguistic, visual, and logical-consistency-based cues to effectively identify these relationships. We train and test our model on a large-scale image dataset of human actions. We show a significant improvement in mean AP compared to different baseline methods, including the HEX-graph approach from Deng et al. [8].
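To make the idea concrete, here is a minimal sketch of how action scoring and pairwise relationship prediction could be trained jointly with a soft logical-consistency penalty. This is an illustrative assumption, not the paper's actual model: the class names, dimensions, the two-relation taxonomy (implies/excludes), and the penalty form below are all hypothetical, and the paper additionally draws on linguistic and visual cues and compares against the HEX-graph formalism.

```python
# A minimal sketch (not the authors' implementation) of jointly learning
# action scores and pairwise action relationships, then penalizing scores
# that violate the predicted relationships. All names and dimensions are
# illustrative assumptions.
import torch
import torch.nn as nn

class RelationAwareActionModel(nn.Module):
    def __init__(self, feat_dim=2048, num_actions=100, emb_dim=128):
        super().__init__()
        # Per-action scores from a precomputed image feature.
        self.scorer = nn.Linear(feat_dim, num_actions)
        # Action embeddings from which pairwise relationships are predicted.
        self.action_emb = nn.Embedding(num_actions, emb_dim)
        # Classify each ordered action pair as {none, implies, excludes}.
        self.rel_head = nn.Linear(2 * emb_dim, 3)

    def forward(self, feats):
        return self.scorer(feats)  # (batch, num_actions) action logits

    def relation_logits(self):
        n = self.action_emb.num_embeddings
        e = self.action_emb.weight                      # (n, emb_dim)
        pairs = torch.cat(
            [e.unsqueeze(1).expand(n, n, -1),
             e.unsqueeze(0).expand(n, n, -1)], dim=-1)  # (n, n, 2*emb_dim)
        return self.rel_head(pairs)                     # (n, n, 3)

def consistency_penalty(logits, rel_logits):
    """Soft logical-consistency loss over action probabilities.

    If a 'implies' b, p(b) should be at least p(a); if a 'excludes' b,
    p(a) + p(b) should not exceed 1. Predicted relationship probabilities
    weight the violations so the penalty is differentiable end to end.
    """
    p = torch.sigmoid(logits)                  # (batch, n)
    rel = torch.softmax(rel_logits, dim=-1)    # (n, n, 3)
    pa, pb = p.unsqueeze(2), p.unsqueeze(1)    # broadcast to (batch, n, n)
    implies_viol = torch.relu(pa - pb)         # p(a) > p(b) violates a -> b
    excludes_viol = torch.relu(pa + pb - 1.0)  # joint mass > 1 violates a x b
    return (rel[:, :, 1] * implies_viol +
            rel[:, :, 2] * excludes_viol).mean()

# Usage: combine a standard multi-label retrieval loss with the penalty.
model = RelationAwareActionModel()
feats = torch.randn(4, 2048)                    # placeholder image features
labels = torch.randint(0, 2, (4, 100)).float()  # multi-label action targets
logits = model(feats)
bce = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss = bce + 0.1 * consistency_penalty(logits, model.relation_logits())
loss.backward()
```

One design point this sketch mirrors from the abstract: because the relationship predictor and the action scorer share a loss, sparsely observed actions can still be constrained through their predicted relationships to well-observed ones.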


