A Low-complexity Wavelet-based Visual Saliency Model to Predict Fixations
Conference contribution, posted on 2022-03-22, 13:38. Authored by Manjula Narayanaswamy, Yafan Zhao, Wai Keung Fung and Nazila Fough.
A low-complexity wavelet-based visual saliency model is proposed to predict the regions of human eye fixations in images using low-level features. Unlike existing wavelet-based saliency detection models, the proposed model requires only two channels of the YCbCr colour space, luminance (Y) and chrominance (Cr), for saliency computation. These two channels are decomposed to their lowest resolution using the Discrete Wavelet Transform (DWT) to extract local contrast features at multiple scales. The features are integrated across multiple levels using a 2D-entropy-based combination scheme to derive a combined map, which is then normalised and enhanced with a natural-logarithm transformation to produce the final saliency map. Experimental results on two large public image datasets show that the proposed model achieves better prediction accuracy with a significant reduction in complexity compared to existing benchmark models.
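The core contrast-extraction step described in the abstract can be illustrated with a minimal sketch. The fragment below is not the authors' implementation: it shows only a single-level Haar DWT on one channel in pure Python (the paper decomposes the Y and Cr channels to the lowest resolution and fuses levels with a 2D-entropy scheme, which is omitted here), with detail-subband magnitude as the local contrast feature, followed by the natural-logarithm enhancement and normalisation the abstract mentions.

```python
import math

def haar_dwt_2d(img):
    """One-level 2D Haar DWT of a 2D list; returns (LL, LH, HL, HH) subbands."""
    h, w = len(img), len(img[0])
    # Transform rows: averages (low-pass) followed by differences (high-pass).
    rows = []
    for r in img:
        lo = [(r[2 * i] + r[2 * i + 1]) / 2 for i in range(w // 2)]
        hi = [(r[2 * i] - r[2 * i + 1]) / 2 for i in range(w // 2)]
        rows.append(lo + hi)
    # Transform columns of the row-transformed image.
    out = [[0.0] * w for _ in range(h)]
    for c in range(w):
        col = [rows[r][c] for r in range(h)]
        for i in range(h // 2):
            out[i][c] = (col[2 * i] + col[2 * i + 1]) / 2
            out[h // 2 + i][c] = (col[2 * i] - col[2 * i + 1]) / 2
    hw, hh = w // 2, h // 2
    LL = [row[:hw] for row in out[:hh]]   # approximation
    LH = [row[hw:] for row in out[:hh]]   # horizontal detail
    HL = [row[:hw] for row in out[hh:]]   # vertical detail
    HH = [row[hw:] for row in out[hh:]]   # diagonal detail
    return LL, LH, HL, HH

def saliency_map(img):
    """Local contrast = detail-subband magnitude; log-enhanced, min-max normalised."""
    LL, LH, HL, HH = haar_dwt_2d(img)
    n, m = len(LH), len(LH[0])
    feat = [[math.sqrt(LH[i][j] ** 2 + HL[i][j] ** 2 + HH[i][j] ** 2)
             for j in range(m)] for i in range(n)]
    # Natural-logarithm enhancement, then normalisation to [0, 1].
    logf = [[math.log1p(v) for v in row] for row in feat]
    lo = min(min(r) for r in logf)
    hi = max(max(r) for r in logf)
    rng = (hi - lo) or 1.0
    return [[(v - lo) / rng for v in row] for row in logf]
```

For example, on a 4x4 channel that is flat except for a bright bottom row, the flat region maps to zero saliency while the edge region maps to the maximum value 1.0.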
Presented at: 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS)
Published in: 2020 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS 2020)
- AM (Accepted Manuscript)
Citation: M. Narayanaswamy, Y. Zhao, W. K. Fung and N. Fough, "A Low-complexity Wavelet-based Visual Saliency Model to Predict Fixations," 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS), Glasgow, Scotland, UK, 2020, pp. 1-4, doi: 10.1109/ICECS49266.2020.9294905.
Cardiff Met Affiliation
- Cardiff School of Technologies
Cardiff Met Authors: Wai Keung Fung
- © The Authors
Publisher Rights Statement: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.