File(s) under embargo

Reason: 24 month embargo requested by publisher

7 month(s), 4 day(s) until file(s) become available

A Low-complexity Wavelet-based Visual Saliency Model to Predict Fixations

conference contribution
posted on 22.03.2022, 13:38 by Manjula Narayanaswamy, Yafan Zhao, Wai Keung Fung, Nazila Fough
A low-complexity wavelet-based visual saliency model is proposed to predict the regions of human eye fixations in images using low-level features. Unlike existing wavelet-based saliency detection models, the proposed model requires only two channels of the YCbCr colour space - luminance (Y) and chrominance (Cr) - for saliency computation. These two channels are decomposed to their lowest resolution using the Discrete Wavelet Transform (DWT) to extract local contrast features at multiple scales. These features are integrated across levels using a 2D entropy-based combination scheme to derive a combined map. The combined map is normalised and enhanced with a natural-logarithm transformation to produce the final saliency map. Experimental results show that the proposed model achieves better prediction accuracy with a significant reduction in complexity compared to existing benchmark models on two large public image datasets.
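The pipeline described in the abstract can be sketched in a few lines. The following is a minimal illustrative implementation, not the authors' code: the exact entropy-combination scheme, wavelet basis, and normalisation constants from the paper are not reproduced here, so the `db1` (Haar) wavelet, the 64-bin Shannon-entropy weighting, and the `log1p` enhancement are all assumptions standing in for the paper's specifics.

```python
import numpy as np
import pywt  # PyWavelets, for the 2-D DWT


def saliency_map(ycbcr, levels=4, wavelet="db1"):
    """Sketch of the described pipeline: multi-scale wavelet contrast
    on the Y and Cr channels, entropy-weighted fusion, log enhancement.
    `ycbcr` is an H x W x 3 image in YCbCr order."""
    sal = np.zeros(ycbcr.shape[:2])
    for ch in (ycbcr[..., 0], ycbcr[..., 2]):  # Y and Cr channels only
        coeffs = pywt.wavedec2(ch.astype(float), wavelet, level=levels)
        for lvl in range(1, len(coeffs)):
            # Reconstruct a single decomposition level in isolation to
            # obtain a local-contrast feature map at that scale.
            detail = [np.zeros_like(coeffs[0])] + [
                tuple(np.zeros_like(b) for b in band) for band in coeffs[1:]
            ]
            detail[lvl] = coeffs[lvl]
            fmap = np.abs(pywt.waverec2(detail, wavelet))
            fmap = fmap[: sal.shape[0], : sal.shape[1]]
            # Weight each feature map by its Shannon entropy - a simple
            # stand-in for the paper's 2-D entropy combination scheme.
            hist, _ = np.histogram(fmap, bins=64)
            p = hist / hist.sum()
            p = p[p > 0]
            sal += -np.sum(p * np.log2(p)) * fmap
    # Normalise to [0, 1], then apply a natural-log enhancement.
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
    return np.log1p(sal) / np.log(2.0)  # log1p(1)/ln 2 = 1, so range stays [0, 1]
```

A colour image would first be converted to YCbCr (e.g. with OpenCV or scikit-image) before being passed in; the returned map can then be compared against fixation ground truth with the usual AUC or NSS metrics.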

History

Presented at

Conference paper published in the proceedings of the 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS)

Published in

2020 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS 2020)

Publisher

IEEE

Version

AM (Accepted Manuscript)

Citation

M. Narayanaswamy, Y. Zhao, W. K. Fung and N. Fough (2020) "A Low-complexity Wavelet-based Visual Saliency Model to Predict Fixations," 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS), Glasgow, Scotland, UK, 2020, pp. 1-4, doi: 10.1109/ICECS49266.2020.9294905.

ISBN

978-1-7281-6044-3

Cardiff Met Affiliation

  • Cardiff School of Technologies

Cardiff Met Authors

Wai Keung Fung

Copyright Holder

© The Authors

Publisher Rights Statement

© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Language

en