Abstract
Recognition problems in computer vision often benefit from a fusion of different algorithms and/or sensors, with score-level fusion being among the most widely used fusion approaches. Choosing an appropriate score normalization technique before fusion is a fundamentally difficult problem because of the disparate nature of the underlying distributions of scores for different sources of data. Further complications are introduced when one or more fusion inputs outright fail or receive adversarial inputs, which we find in the fields of biometrics and forgery detection. Ideally, a score normalization should be robust to model assumptions, modeling errors, and parameter estimation errors, as well as robust to algorithm failure. In this paper, we introduce the w-score, a new technique for robust recognition score normalization. We do not assume a match or non-match distribution, but instead suggest that the top scores among a recognition system's non-match scores follow statistical Extreme Value Theory, and show how to use this theory to provide consistent, robust normalization with a strong statistical basis.
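The abstract's final sentence summarizes the mechanism: model the top non-match scores with an extreme-value (Weibull) distribution and use its CDF as the normalized score. A minimal sketch of that idea, assuming a NumPy environment; the function names (`fit_weibull_tail`, `w_score`), the coarse grid-search fit, and all parameter ranges are illustrative, not the paper's implementation:

```python
import numpy as np

def weibull_cdf(x, shape, scale):
    """CDF of a two-parameter Weibull distribution (support x >= 0)."""
    x = np.maximum(np.asarray(x, dtype=float), 0.0)
    return 1.0 - np.exp(-(x / scale) ** shape)

def fit_weibull_tail(nonmatch_scores, tail_size=10):
    """Fit Weibull shape/scale to the top `tail_size` non-match scores
    by a coarse grid search over the log-likelihood (illustrative only;
    a real implementation would use a proper MLE routine)."""
    tail = np.sort(np.asarray(nonmatch_scores, dtype=float))[-tail_size:]
    assert np.all(tail > 0), "tail scores must be positive in this sketch"
    best = (-np.inf, None, None)
    for shape in np.linspace(0.5, 10.0, 96):
        for scale in np.linspace(0.5 * tail.mean(), 2.0 * tail.mean(), 96):
            ll = np.sum(np.log(shape / scale)
                        + (shape - 1.0) * np.log(tail / scale)
                        - (tail / scale) ** shape)
            if ll > best[0]:
                best = (ll, shape, scale)
    return best[1], best[2]

def w_score(raw_score, params):
    """Normalize a raw score to [0, 1]: the fitted CDF value estimates how
    extreme the score is relative to the modeled non-match tail."""
    shape, scale = params
    return weibull_cdf(raw_score, shape, scale)
```

Because each algorithm's scores are normalized against its own non-match tail, the resulting w-scores share a common [0, 1] scale and can then be combined for score-level fusion.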