
Fully Understanding The Hashing Trick

2020-02-13

Abstract

Feature hashing, also known as the hashing trick, introduced by Weinberger et al. (2009), is one of the key techniques used in scaling up machine learning algorithms. Loosely speaking, feature hashing uses a random sparse projection matrix $A : \mathbb{R}^n \to \mathbb{R}^m$ (where $m \ll n$) in order to reduce the dimension of the data from $n$ to $m$ while approximately preserving the Euclidean norm. Every column of $A$ contains exactly one non-zero entry, equal to either $-1$ or $1$. Weinberger et al. showed tail bounds on $\|Ax\|_2^2$. Specifically, they showed that for every $\varepsilon, \delta$, if $\|x\|_\infty / \|x\|_2$ is sufficiently small and $m$ is sufficiently large, then $\Pr\big[\,\big|\|Ax\|_2^2 - \|x\|_2^2\big| < \varepsilon \|x\|_2^2\,\big] \ge 1 - \delta$. These bounds were later extended by Dasgupta et al. (2010) and most recently refined by Dahlgaard et al. (2017); however, the true nature of the performance of this key technique, and specifically the correct tradeoff between the pivotal parameters $\|x\|_\infty / \|x\|_2$, $m$, $\varepsilon$, and $\delta$, remained an open question. We settle this question by giving tight asymptotic bounds on the exact tradeoff between the central parameters, thus providing a complete understanding of the performance of feature hashing. We complement the asymptotic bound with empirical data, which shows that the constants “hiding” in the asymptotic notation are, in fact, very close to 1, thus further illustrating the tightness of the presented bounds in practice.
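To make the construction described in the abstract concrete, here is a minimal illustrative sketch of feature hashing in Python (not the authors' implementation). It assumes fully random bucket and sign assignments for simplicity, whereas the papers cited above analyze specific hash families; the function name `feature_hash` and all parameters are ours.

```python
import numpy as np

def feature_hash(x, m, seed=0):
    """Project x from R^n to R^m with the hashing trick.

    Each input coordinate i is sent to one random bucket h(i) with a
    random sign s(i), i.e. the implicit projection matrix A has exactly
    one non-zero entry (+1 or -1) per column, as in the abstract.
    """
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    h = rng.integers(0, m, size=n)         # bucket index for each coordinate
    s = rng.choice([-1.0, 1.0], size=n)    # random sign for each coordinate
    y = np.zeros(m)
    np.add.at(y, h, s * x)                 # y[h[i]] += s[i] * x[i], with collisions summed
    return y

# Norm-preservation demo: for a "spread out" x (small ||x||_inf / ||x||_2),
# ||Ax||_2^2 should concentrate around ||x||_2^2.
n, m = 10_000, 256
x = np.random.default_rng(1).standard_normal(n)
y = feature_hash(x, m)
print(np.linalg.norm(y)**2 / np.linalg.norm(x)**2)  # typically close to 1
```

The ratio printed at the end is exactly the quantity the tail bound controls: the paper's result characterizes how small $\|x\|_\infty / \|x\|_2$ and how large $m$ must be for $\|Ax\|_2^2 / \|x\|_2^2$ to lie in $[1 - \varepsilon, 1 + \varepsilon]$ with probability $1 - \delta$.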

