Abstract
Robustness has recently become a major concern in the machine learning community, since
learning algorithms are usually vulnerable to outliers and corruptions. Motivated by this trend,
we pursue robustness in semi-definite programming (SDP) in this paper. Specifically, this is
done by replacing the commonly used squared loss
with the more robust $\ell_1$-loss in the low-rank SDP.
However, the resulting objective is neither
convex nor smooth. As no existing algorithm can
be applied, we design an efficient algorithm, based
on majorization-minimization, to optimize the objective. The proposed algorithm not only has cheap
iterations and low space complexity, but also provably converges to critical points. Finally,
an empirical study shows that the new objective, armed
with the proposed algorithm, outperforms the state of the art in terms of both speed and accuracy.
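For concreteness, a factorized low-rank SDP with an $\ell_1$ data-fitting term is often written in a form such as (a sketch only; the symbols $U$, $\Omega$, and $O$ are illustrative and not necessarily the paper's notation)
$$\min_{U \in \mathbb{R}^{n \times k}} \; \sum_{(i,j) \in \Omega} \bigl| [UU^{\top}]_{ij} - O_{ij} \bigr|,$$
where the factorization $X = UU^{\top} \succeq 0$ keeps the variable positive semi-definite, $\Omega$ indexes the observed entries, and $O$ holds the (possibly corrupted) observations; the standard formulation would instead use the squared loss $\sum_{(i,j) \in \Omega} \bigl([UU^{\top}]_{ij} - O_{ij}\bigr)^{2}$, which is more sensitive to outliers.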