A low-complexity wavelet-based visual saliency model that predicts the regions of human eye fixations in images using low-level features is proposed. Unlike existing wavelet-based saliency detection models, the proposed model requires only two channels for saliency computation: luminance (Y) and chrominance (Cr) in the YCbCr colour space. These two channels are decomposed to their lowest resolution using the Discrete Wavelet Transform (DWT) to extract local contrast features at multiple scales. The features are integrated across levels using a 2D entropy-based combination scheme to derive a combined map, which is then normalised and enhanced with a natural logarithm transformation to produce the final saliency map. Experimental results on two large public image datasets show that the proposed model achieves better prediction accuracy than existing benchmark models, with a significant reduction in computational complexity.
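The pipeline described above can be summarised in a short sketch. This is a minimal illustration only, assuming PyWavelets for the DWT and OpenCV for colour conversion and resizing; the local-contrast definition, the histogram-based entropy weighting, and the `db1` wavelet choice are hypothetical stand-ins, not the paper's exact formulations.

```python
import cv2
import numpy as np
import pywt


def entropy_weight(fmap, bins=256):
    """Shannon entropy of a normalised feature map, used here as a fusion weight (assumed scheme)."""
    hist, _ = np.histogram(fmap, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))


def contrast_maps(channel, wavelet="db1"):
    """Multi-scale local contrast maps from the DWT detail sub-bands of one channel."""
    coeffs = pywt.wavedec2(channel, wavelet)          # level=None: decompose to the lowest resolution
    h, w = channel.shape
    maps = []
    for cH, cV, cD in coeffs[1:]:                     # detail coefficients at each level
        contrast = np.sqrt(cH ** 2 + cV ** 2 + cD ** 2).astype(np.float32)
        contrast = cv2.resize(contrast, (w, h), interpolation=cv2.INTER_LINEAR)
        cmin, cmax = float(contrast.min()), float(contrast.max())
        if cmax > cmin:
            contrast = (contrast - cmin) / (cmax - cmin)
        maps.append(contrast)
    return maps


def saliency_map(bgr_image):
    """Entropy-weighted fusion of Y- and Cr-channel contrast maps, log-enhanced."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float32) / 255.0
    combined = np.zeros(bgr_image.shape[:2], dtype=np.float32)
    for idx in (0, 1):                                # OpenCV channel order: 0 = Y, 1 = Cr
        for fmap in contrast_maps(ycrcb[:, :, idx]):
            combined += entropy_weight(fmap) * fmap
    combined /= max(float(combined.max()), 1e-12)     # normalise to [0, 1]
    return np.log1p(combined) / np.log(2.0)           # natural-log enhancement, rescaled to [0, 1]
```

Usage would be a single call such as `saliency = saliency_map(cv2.imread("test.jpg"))`. Restricting the computation to the Y and Cr channels, rather than a larger bank of feature channels, is consistent with the complexity reduction the abstract claims.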
Presented at
2020 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS 2020)
Conference paper published in the conference proceedings.
Publisher
IEEE
Version
AM (Accepted Manuscript)
Citation
M. Narayanaswamy, Y. Zhao, W. K. Fung and N. Fough, "A Low-complexity Wavelet-based Visual Saliency Model to Predict Fixations," 2020 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS), Glasgow, Scotland, UK, 2020, pp. 1-4, doi: 10.1109/ICECS49266.2020.9294905.