Abstract
Multi-view subspace clustering aims to partition a set of
multi-source data into their underlying groups. To boost the
performance of multi-view clustering, numerous subspace
learning algorithms have been developed in recent years.
However, few of them exploit the complementarity of the representations across different views or the consistency of the cluster indicators derived from those representations, let alone both simultaneously. In this paper, we propose a novel
multi-view subspace clustering model that harnesses the complementary information among different representations by introducing a novel position-aware exclusivity term. Meanwhile, a consistency term is employed to encourage these
complementary representations to share
a common cluster indicator. We formulate both concerns into a unified optimization framework. Experiments
on several benchmark datasets demonstrate the
effectiveness of our algorithm over other state-of-the-art methods.