Abstract
Multi-view multi-task learning has recently attracted increasing attention due to its dual heterogeneity: each task has heterogeneous features from multiple views, and may correlate with other tasks via common views. Existing methods usually suffer from three problems: 1) they lack the ability to eliminate noisy features, 2) they hold a strict assumption of view consistency, and 3) they ignore the possible existence of task-view outliers.
To overcome these limitations, we propose a robust method with joint group sparsity that decomposes the feature parameters into a sum of two components: one preserves relevant features (addressing Problem 1) and allows flexible view consistency (addressing Problem 2), while the other detects task-view outliers (addressing Problem 3). We develop a fast algorithm with a global convergence guarantee that solves the optimization problem with time complexity linear in the number of features and labeled samples. Extensive experiments on various synthetic and real-world datasets demonstrate its effectiveness.