Abstract
Late fusion addresses the problem of combining the prediction scores of multiple classifiers, where each score is produced by a classifier trained with a specific feature. However, existing methods generally use a fixed fusion weight for all the scores of a classifier, and thus fail to optimally determine the fusion weights for individual samples. In this paper, we propose a sample-specific late fusion method to address this issue. Specifically, we cast the problem as an information propagation process that propagates the fusion weights learned on the labeled samples to individual unlabeled samples, while enforcing that positive samples receive higher fusion scores than negative samples. In this process, we identify the optimal fusion weights for each sample and push positive samples to top positions in the fusion score rank list. We formulate the problem as an L∞ norm constrained optimization problem and apply the Alternating Direction Method of Multipliers for the optimization. Extensive experimental results on various visual categorization tasks show that the proposed method consistently and significantly outperforms state-of-the-art late fusion methods. To the best of our knowledge, this is the first method supporting sample-specific fusion weight learning.
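To make the distinction concrete, the sketch below contrasts conventional late fusion (one fixed weight per classifier) with sample-specific late fusion (a separate weight vector per sample). The score matrix and weights are illustrative toy values, not the paper's learned solution; the paper obtains the per-sample weights by propagating weights from labeled to unlabeled samples.

```python
import numpy as np

# Hypothetical scores from M=3 classifiers for N=3 samples, shape (N, M).
# In practice each column would come from a classifier trained on a
# different feature.
scores = np.array([
    [0.9, 0.2, 0.7],
    [0.1, 0.8, 0.3],
    [0.6, 0.5, 0.4],
])

# Conventional late fusion: one fixed weight per classifier,
# shared by every sample.
fixed_weights = np.array([0.5, 0.3, 0.2])
fused_fixed = scores @ fixed_weights            # shape (N,)

# Sample-specific late fusion: each sample gets its own weight vector
# (arbitrary illustrative values here).
sample_weights = np.array([
    [0.7, 0.1, 0.2],
    [0.2, 0.6, 0.2],
    [0.4, 0.4, 0.2],
])
fused_sample = np.sum(scores * sample_weights, axis=1)  # shape (N,)
```

With fixed weights, a classifier that is unreliable for a particular sample still contributes with full weight; per-sample weights let the fusion down-weight it for that sample only.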