Abstract
We propose a simple and intuitive approximation to conventional spectral clustering methods. It effectively alleviates the computational burden of spectral clustering, reducing the time complexity from O(n³) to O(n²), while achieving better performance in our experiments. Specifically, by employing a more realistic and effective distance measure together with the "k-means duality" property, our algorithm can handle datasets with complex cluster shapes, multi-scale clusters, and noise. We also demonstrate its effectiveness in a range of real applications, including digit clustering and image segmentation.