Efficient end-to-end learning for quantizable representations


Abstract

Embedding representation learning via neural networks is at the core of modern similarity-based search. While much effort has been put into developing algorithms for learning binary Hamming code representations for search efficiency, these still require a linear scan of the entire dataset for each query and trade off search accuracy through binarization. To this end, we consider the problem of directly learning, end-to-end, a quantizable embedding representation and the sparse binary hash code, which can be used to construct an efficient hash table that not only provides a significant reduction in the number of data points searched but also achieves state-of-the-art search accuracy, outperforming previous state-of-the-art deep metric learning methods. We also show that finding the optimal sparse binary hash code in a mini-batch can be computed exactly in polynomial time by solving a minimum cost flow problem. Our results on the CIFAR-100 and ImageNet datasets show state-of-the-art search accuracy in precision@k and NMI metrics while providing up to 98× and 478× search speedup, respectively, over exhaustive linear search. The source code is available at https://github.com/maestrojeong/Deep-HashTable-ICML18.
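To make the minimum-cost-flow claim concrete, here is a minimal sketch (not the authors' implementation) of how the per-mini-batch assignment of k active bits per example can be cast as a min cost flow and solved exactly with networkx. The cost matrix, the per-dimension capacity `cap`, and the helper name `assign_sparse_codes` are illustrative assumptions; in the paper the costs would be derived from the learned embedding activations.

```python
# Hypothetical sketch: assign exactly k active bits per example by
# solving a min cost flow on a bipartite source -> examples -> bits -> sink
# graph. Capacities balance bit usage across the mini-batch.
import math
import numpy as np
import networkx as nx

def assign_sparse_codes(costs, k):
    """costs: (n, d) integer cost of activating bit j for example i.
    Returns an (n, d) 0/1 matrix with exactly k active bits per row."""
    n, d = costs.shape
    cap = math.ceil(n * k / d)           # balance load across dimensions (assumption)
    G = nx.DiGraph()
    G.add_node("s", demand=-n * k)       # source supplies n*k units of flow
    G.add_node("t", demand=n * k)        # sink absorbs them
    for i in range(n):
        # each example must route exactly k units (edge capacity k)
        G.add_edge("s", ("x", i), capacity=k, weight=0)
        for j in range(d):
            # selecting bit j for example i costs costs[i, j]
            G.add_edge(("x", i), ("h", j), capacity=1,
                       weight=int(costs[i, j]))
    for j in range(d):
        # each bit can be used by at most `cap` examples
        G.add_edge(("h", j), "t", capacity=cap, weight=0)
    flow = nx.min_cost_flow(G)           # exact, polynomial time
    codes = np.zeros((n, d), dtype=np.int8)
    for i in range(n):
        for node, f in flow[("x", i)].items():
            if f:                        # f == 1 when bit node[1] is selected
                codes[i, node[1]] = 1
    return codes

# toy usage: 8 examples, 16-dimensional codes, 4 active bits each
rng = np.random.default_rng(0)
codes = assign_sparse_codes(rng.integers(0, 100, size=(8, 16)), k=4)
assert (codes.sum(axis=1) == 4).all()
```

Because every supply, demand, and capacity is integral, the min cost flow solution is integral as well, so each example ends up with exactly k selected bits while the capacity on each bit-to-sink edge spreads usage evenly across hash buckets.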

