Abstract
Cross-modal hashing aims to project data from
two modalities into a common Hamming space so that
cross-modal retrieval can be performed efficiently. Despite
the satisfactory performance achieved in real applications, existing methods cannot simultaneously
preserve semantic structure, which maintains inter-class relationships, and improve discriminability,
which aggregates intra-class samples; this limits retrieval performance. To address this problem, we propose
Equally-Guided Discriminative Hashing (EGDH),
which jointly takes semantic structure and
discriminability into consideration. Specifically, we uncover the connection between semantic-structure-preserving
and discriminative methods. Based on
this connection, we directly encode multi-label annotations, which
act as high-level semantic features, to build a common semantic-structure-preserving classifier. With
this common classifier guiding the learning of the hash functions of different
modalities equally, the hash codes of samples become intra-class
aggregated and inter-class relationship preserving. Experimental results on
two benchmark datasets demonstrate the superiority of EGDH over state-of-the-art methods.
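At a high level, the guidance scheme the abstract describes can be sketched as follows. Everything in this snippet is an illustrative assumption rather than the paper's implementation: the random stand-in features, the least-squares classifier fit, and the names `W` and `hash_codes` are all hypothetical; it only shows the shape of the idea that one shared classifier scores the binary codes of both modalities equally.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, k = 8, 4, 16                                 # samples, classes, hash bits

# Multi-label annotations act as high-level semantic features.
labels = (rng.random((n, c)) > 0.5).astype(float)

# A single "common" classifier W (hash bits -> classes), here fit by
# least squares from some target codes; it is shared across modalities.
target_codes = np.sign(rng.standard_normal((n, k)))
W, *_ = np.linalg.lstsq(target_codes, labels, rcond=None)   # shape (k, c)

def hash_codes(features):
    """Binarize real-valued modality features into {-1, +1} codes."""
    return np.sign(features)

# Stand-ins for the outputs of two modality-specific hash functions.
img_feat = rng.standard_normal((n, k))
txt_feat = rng.standard_normal((n, k))

# The SAME classifier scores both modalities' codes, so both hash
# functions receive equal semantic guidance during learning.
img_logits = hash_codes(img_feat) @ W
txt_logits = hash_codes(txt_feat) @ W
```

Because both modalities are scored by one classifier built from the label annotations, intra-class codes are pulled together while inter-class relationships encoded in the labels are preserved.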