
Linear-Time Outlier Detection via Sensitivity

2019-11-22

Abstract

Outliers are ubiquitous in modern data sets. Distance-based techniques are a popular nonparametric approach to outlier detection, as they require no prior assumptions about the data-generating distribution and are simple to implement. Scaling these techniques to massive data sets without sacrificing accuracy is a challenging task. We propose a novel algorithm based on the intuition that outliers have a significant influence on the quality of divergence-based clustering solutions. We propose sensitivity – the worst-case impact of a data point on the clustering objective – as a measure of outlierness. We then prove that influence – a (non-trivial) upper bound on the sensitivity – can be computed by a simple linear-time algorithm. To scale beyond a single machine, we propose a communication-efficient distributed algorithm. In an extensive experimental evaluation, we demonstrate the effectiveness and establish the statistical significance of the proposed approach. In particular, it outperforms the most popular distance-based approaches while being several orders of magnitude faster.
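To illustrate the idea of sensitivity-based scoring, here is a minimal sketch (not the paper's exact algorithm). It uses the simple 1-means case, where the sensitivity of a point works out to `1/n + ||x - mu||^2 / sum_j ||x_j - mu||^2`: points that contribute disproportionately to the clustering cost receive large scores and are flagged as outlier candidates. The function name `influence_scores` is illustrative; both passes over the data are linear time.

```python
import numpy as np

def influence_scores(X):
    """Illustrative linear-time outlier score (sketch, not the paper's method):
    the 1-means sensitivity s(x) = 1/n + ||x - mu||^2 / sum_j ||x_j - mu||^2.
    A point with a large score dominates the clustering objective, which is
    exactly the intuition behind using sensitivity as an outlierness measure."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    mu = X.mean(axis=0)                 # first linear pass: global mean
    cost = ((X - mu) ** 2).sum(axis=1)  # second pass: per-point cost
    total = cost.sum()
    if total == 0.0:                    # all points identical: uniform scores
        return np.full(n, 1.0 / n)
    return 1.0 / n + cost / total

# Usage: the far-away point receives the largest score.
X = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [10.0, 10.0]]
scores = influence_scores(X)
print(int(np.argmax(scores)))  # -> 3
```

With k > 1 centers the exact sensitivity is harder to evaluate, which is where the paper's linear-time upper bound (influence) comes in; the distributed variant would aggregate the local sums needed for `mu` and `total` across machines.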

