Abstract
Low-rank methods for semi-definite programming (SDP) have attracted considerable interest recently, especially in machine learning applications. Their analysis often involves determinant-based or Schatten-norm penalties, which are difficult to use in practice due to their high computational cost. In this paper, we propose Entropy-Penalized Semi-Definite Programming (EP-SDP), which provides a unified framework for a broad class of penalty functions used in practice to promote low-rank solutions. We show that EP-SDP problems admit an efficient numerical algorithm whose gradient computation has (almost) linear time complexity; this makes the approach useful for many machine learning and optimization problems. We illustrate the practical efficiency of our approach on several combinatorial optimization and machine learning problems.
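To make the entropy penalty concrete, the following is a minimal sketch (not the paper's implementation) assuming the penalty is the von Neumann entropy of the trace-normalized matrix: a flat spectrum (high rank) maximizes the entropy, while a rank-one matrix drives it to zero, so penalizing entropy promotes low-rank solutions. The function name `entropy_penalty` and the `eps` smoothing constant are illustrative choices.

```python
import numpy as np

def entropy_penalty(X, eps=1e-12):
    """Von Neumann entropy of a PSD matrix X, computed on its
    trace-normalized eigenvalues p_i = lambda_i / tr(X).

    Returns -sum_i p_i log p_i. The value is 0 for a rank-one matrix
    and log(n) for a full-rank matrix with a flat spectrum, so adding
    this term to an SDP objective discourages high-rank solutions.
    """
    p = np.linalg.eigvalsh(X)        # eigenvalues of a symmetric matrix
    p = np.clip(p, 0.0, None)        # guard against tiny negative values
    p = p / p.sum()                  # normalize to a probability vector
    return float(-np.sum(p * np.log(p + eps)))  # eps avoids log(0)

# Rank-one matrix: entropy is (numerically) zero.
rank_one = np.outer([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])
# Scaled identity: maximal entropy log(3) for a 3x3 matrix.
flat = np.eye(3) / 3.0
```

Note that this direct eigendecomposition costs O(n^3); the abstract's (almost) linear-time gradient claim refers to the paper's algorithm, not this naive sketch.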