Abstract
Since human observers are the ultimate receivers of digital images, image quality metrics should be designed from a human-oriented perspective. Conventionally, many full-reference image quality assessment (FR-IQA) methods have adopted computational models of the human visual system (HVS) from psychological vision science research. In this paper, we propose a novel convolutional neural network (CNN)-based FR-IQA model, named Deep Image Quality Assessment (DeepQA), in which the behavior of the HVS is learned from the underlying data distribution of IQA databases. Unlike previous studies, our model seeks the optimal visual weight based on the database itself, without any prior knowledge of the HVS. Through experiments, we show that the predicted visual sensitivity maps agree with human subjective opinions. In addition, DeepQA achieves state-of-the-art prediction accuracy among FR-IQA models.