Towards K-means-friendly Spaces: Simultaneous Deep Learning and Clustering

Abstract

Most learning approaches treat dimensionality reduction (DR) and clustering separately (i.e., sequentially), but recent research has shown that optimizing the two tasks jointly can substantially improve the performance of both. The premise behind the latter genre is that the data samples are obtained via linear transformation of latent representations that are easy to cluster; but in practice, the transformation from the latent space to the data can be more complicated. In this work, we assume that this transformation is an unknown and possibly nonlinear function. To recover the 'clustering-friendly' latent representations and to better cluster the data, we propose a joint DR and K-means clustering approach in which DR is accomplished via learning a deep neural network (DNN). The motivation is to keep the advantages of jointly optimizing the two tasks, while exploiting the deep neural network's ability to approximate any nonlinear function. This way, the proposed approach can work well for a broad class of generative models. Towards this end, we carefully design the DNN structure and the associated joint optimization criterion, and propose an effective and scalable algorithm to handle the formulated optimization problem. Experiments using different real datasets are employed to showcase the effectiveness of the proposed approach.
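
To make the joint criterion concrete, here is a minimal sketch of the kind of objective the abstract describes: an autoencoder supplies the DNN-based DR, and a K-means penalty on the latent codes pushes them toward cluster centroids. The layer sizes, penalty weight `lam`, and alternating-update schedule below are illustrative assumptions, not the paper's exact design.

```python
# A minimal sketch (not the authors' reference implementation) of a joint
# reconstruction + K-means objective: the encoder provides nonlinear DR,
# and the K-means term encourages clustering-friendly latent codes.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def joint_loss(x, z, x_hat, centroids, assignments, lam=0.1):
    # Reconstruction term keeps the latent space faithful to the data;
    # the K-means term pulls each latent code toward its assigned centroid.
    recon = ((x_hat - x) ** 2).sum(dim=1).mean()
    kmeans = ((z - centroids[assignments]) ** 2).sum(dim=1).mean()
    return recon + lam * kmeans

def train_step(model, opt, x, centroids):
    # Alternating optimization: with assignments and centroids fixed, the
    # network is updated by SGD on the joint loss; assignments are refreshed
    # in closed form from the current latent codes.
    z, x_hat = model(x)
    with torch.no_grad():
        dists = torch.cdist(z, centroids)   # (batch, K) pairwise distances
        assignments = dists.argmin(dim=1)   # hard cluster assignments
    loss = joint_loss(x, z, x_hat, centroids, assignments)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item(), assignments
```

In the alternating scheme the abstract alludes to, the centroids are also updated between network updates; a simple variant is to periodically recompute each centroid as the mean of the latent codes currently assigned to it.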
