File(s) under embargo until they become available.
Reason: 24-month embargo requested by publisher.
A Low-complexity Wavelet-based Visual Saliency Model to Predict Fixations
Conference contribution posted on 22.03.2022, 13:38, authored by Manjula Narayanaswamy, Yafan Zhao, Wai Keung Fung, Nazila Fough
A low-complexity wavelet-based visual saliency model is proposed to predict the regions of human eye fixations in images using low-level features. Unlike existing wavelet-based saliency detection models, the proposed model requires only two channels of the YCbCr colour space, luminance (Y) and chrominance (Cr), for saliency computation. These two channels are decomposed to their lowest resolution using the Discrete Wavelet Transform (DWT) to extract local contrast features at multiple scales. The features are integrated across levels using a 2D entropy-based combination scheme to derive a combined map, which is then normalised and enhanced with a natural logarithm transformation to produce the final saliency map. Experimental results on two large public image datasets show that the proposed model achieves better prediction accuracy with a significant reduction in complexity compared to existing benchmark models.
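Since the full manuscript is under embargo, the pipeline described in the abstract can only be sketched from its high-level description. The following minimal Python/NumPy sketch illustrates the stated steps — multi-level DWT of the Y and Cr channels, per-scale local contrast from the detail bands, entropy-weighted combination, normalisation, and logarithmic enhancement. The Haar wavelet, the exact entropy-weighting rule, the upsampling method, and the log base are all assumptions, not details from the paper.

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2D Haar DWT (assumed wavelet): approximation + 3 detail bands."""
    a = (x[:, 0::2] + x[:, 1::2]) / 2.0          # row averages
    d = (x[:, 0::2] - x[:, 1::2]) / 2.0          # row differences
    ll = (a[0::2, :] + a[1::2, :]) / 2.0         # approximation
    lh = (a[0::2, :] - a[1::2, :]) / 2.0         # horizontal detail
    hl = (d[0::2, :] + d[1::2, :]) / 2.0         # vertical detail
    hh = (d[0::2, :] - d[1::2, :]) / 2.0         # diagonal detail
    return ll, (lh, hl, hh)

def entropy2d(feature_map, bins=16):
    """Shannon entropy of a feature map, used here as a combination weight (assumed scheme)."""
    hist, _ = np.histogram(feature_map, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def saliency(y, cr, levels=3):
    """Saliency sketch from Y and Cr channels (expects power-of-two image sizes)."""
    h, w = y.shape
    maps = []
    for channel in (y, cr):
        x = channel.astype(np.float64)
        for _ in range(levels):
            ll, (lh, hl, hh) = haar_dwt2(x)
            feat = np.sqrt(lh**2 + hl**2 + hh**2)   # local contrast at this scale
            while feat.shape[0] < h:                # upsample by pixel repetition
                feat = feat.repeat(2, axis=0).repeat(2, axis=1)
            maps.append(feat[:h, :w])
            x = ll                                  # recurse on the approximation
    # Entropy-weighted combination of all per-scale, per-channel maps
    weights = np.array([entropy2d(m) for m in maps])
    combined = sum(w * m for w, m in zip(weights, maps)) / (weights.sum() + 1e-12)
    # Normalise to [0, 1], then enhance with a log transform (rescaled so 1 maps to 1)
    combined = (combined - combined.min()) / (combined.max() - combined.min() + 1e-12)
    return np.log1p(combined) / np.log(2.0)
```

For example, `saliency(y, cr)` on 64x64 Y and Cr channels returns a 64x64 map in [0, 1]; real use would first convert an RGB image to YCbCr and select those two channels.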
Presented at: 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS); conference paper published in the proceedings
Published in: 2020 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS 2020)
Version: AM (Accepted Manuscript)
Citation: M. Narayanaswamy, Y. Zhao, W. K. Fung and N. Fough, "A Low-complexity Wavelet-based Visual Saliency Model to Predict Fixations," 2020 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS), Glasgow, Scotland, UK, 2020, pp. 1-4, doi: 10.1109/ICECS49266.2020.9294905.
Cardiff Met Affiliation: Cardiff School of Technologies