Abstract
Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. Regression methods typically map image features (X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing regression methods is that samples are directly projected onto a subspace and hence fail to account for outliers, which are common in realistic training sets due to occlusion, specular reflections or noise. It is important to notice that in existing regression methods, and discriminative methods in general, the regressor variables X are assumed to be noise free. Due to this assumption, discriminative methods suffer significant degradation in performance when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of Robust Regression (RR) and presents an effective convex approach that uses recent advances in rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, multi-label classification and head pose estimation from images. Several synthetic and real world examples are used to illustrate the benefits of RR.
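The sensitivity of standard regression to gross outliers, and the benefit of a robust alternative, can be illustrated with a minimal sketch. Note this is a generic Huber-weighted IRLS estimator for illustration only, not the paper's rank-minimization-based RR method; all data and parameters below are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression problem: y = 2x + 1 plus small noise,
# with one gross outlier injected (e.g., an occluded training sample).
n = 50
x = np.linspace(0.0, 1.0, n)
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(n)
y[10] += 20.0  # gross outlier

A = np.column_stack([x, np.ones(n)])  # design matrix [x, 1]

# Ordinary least squares: the single outlier pulls the fit far off.
w_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

# Huber-weighted iteratively reweighted least squares (IRLS):
# large residuals get weight delta/|r| < 1, shrinking their influence.
delta = 1.0
w_rob = w_ls.copy()
for _ in range(50):
    r = y - A @ w_rob
    s = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))
    AtS = A.T * s  # apply per-sample weights to the normal equations
    w_rob = np.linalg.solve(AtS @ A, AtS @ y)

print("LS slope:    ", w_ls[0])   # badly biased by the outlier
print("robust slope:", w_rob[0])  # close to the true slope 2.0
```

The robust fit recovers a slope near the true value of 2, while ordinary least squares is pulled substantially away by the single corrupted sample.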