Abstract
We propose a computational visual saliency modeling technique. The proposed technique makes use of a color co-occurrence histogram (CCH) that captures not only "how many" pixels of each color an image contains but also "where and how" those pixels are composed into a visually perceivable image. The CCH therefore encodes image saliency information that is usually perceived as the discontinuity between an image region or object and its surroundings. The proposed technique has a number of distinctive characteristics: it is fast, discriminative, tolerant to image scale variation, and involves minimal parameter tuning. Experiments over benchmark datasets show that it predicts fixational eye-tracking points accurately, achieving a superior AUC of 71.25.
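To make the central data structure concrete, the following is a minimal sketch of a color co-occurrence histogram. It is an illustrative reconstruction, not the paper's exact formulation: the uniform per-channel quantization into `bins` levels and the single fixed spatial `offset` are assumptions made for the example.

```python
import numpy as np

def color_cooccurrence_histogram(image, offset=(0, 1), bins=4):
    """Illustrative color co-occurrence histogram (CCH).

    Counts how often pairs of quantized colors occur at a fixed
    non-negative spatial offset, so the histogram reflects not only
    "how many" pixels of each color exist but also "where and how"
    they are arranged relative to each other.

    NOTE: the quantization scheme and single-offset design are
    assumptions for this sketch, not the paper's definition.
    """
    dy, dx = offset
    if dy < 0 or dx < 0:
        raise ValueError("this sketch handles non-negative offsets only")

    # Quantize each RGB channel into `bins` levels, then fold the
    # three channel indices into one integer color code per pixel.
    q = (image.astype(np.int64) * bins) // 256
    codes = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]

    # Pair every pixel with its neighbor at the given offset.
    h, w = codes.shape
    a = codes[:h - dy, :w - dx]   # reference pixels
    b = codes[dy:, dx:]           # offset neighbors

    # Accumulate pair counts into an n x n co-occurrence matrix.
    n = bins ** 3
    hist = np.zeros((n, n), dtype=np.int64)
    np.add.at(hist, (a.ravel(), b.ravel()), 1)
    return hist

# Usage: a uniformly black 2x2 image has both horizontal pixel
# pairs falling into the same (color, color) histogram cell.
img = np.zeros((2, 2, 3), dtype=np.uint8)
cch = color_cooccurrence_histogram(img, offset=(0, 1), bins=4)
```

Because the histogram is indexed by pairs of colors at a spatial displacement, two images with identical color frequencies but different spatial layouts produce different CCHs, which is what lets such a representation expose region/surround discontinuities.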