The Nearest Neighbor Information Estimator is Adaptively Near Minimax Rate-Optimal

2020-02-14

Abstract

We analyze the Kozachenko–Leonenko (KL) fixed k-nearest neighbor estimator for differential entropy. We obtain the first uniform upper bound on its performance for any fixed k over Hölder balls on a torus without assuming any conditions on how close the density can be to zero. Combined with a recent minimax lower bound over the Hölder ball, we show that the KL estimator, for any fixed k, achieves the minimax rates up to logarithmic factors without knowledge of the smoothness parameter s of the Hölder ball for s ∈ (0, 2] and arbitrary dimension d, making it the first estimator that provably satisfies this property.
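The KL estimator analyzed in the paper has a standard closed form: with ρ_{k,i} the distance from sample i to its k-th nearest neighbor, it estimates the entropy as ψ(n) − ψ(k) + log V_d + (d/n) Σ_i log ρ_{k,i}, where V_d is the volume of the unit d-ball and ψ is the digamma function. Below is a minimal sketch of this classical formula (the function name `kl_entropy` is illustrative, not from the paper; it ignores the torus geometry used in the analysis and works in Euclidean space):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(x, k=1):
    """Kozachenko-Leonenko k-NN differential entropy estimate (in nats).

    x : (n, d) array of i.i.d. samples; assumes no duplicate points,
        so all k-NN distances are strictly positive.
    """
    n, d = x.shape
    tree = cKDTree(x)
    # query k+1 neighbors: the nearest "neighbor" of each point is itself
    dist, _ = tree.query(x, k=k + 1)
    rho = dist[:, k]  # distance to the k-th genuine nearest neighbor
    # log volume of the unit d-ball: pi^{d/2} / Gamma(d/2 + 1)
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(rho))

# Example: for Uniform[0,1]^1 the true differential entropy is 0,
# so the estimate should be close to zero for moderate n.
rng = np.random.default_rng(0)
x = rng.uniform(size=(5000, 1))
estimate = kl_entropy(x, k=1)
```

Note that the estimator is used with a fixed, small k (often k = 1); the paper's point is that this fixed-k version already adapts to the unknown smoothness s, up to logarithmic factors.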

