Bhowmik, D.; Oakes, M.; Abhayaratne, C. (2016)
Publisher: Institute of Electrical and Electronics Engineers
Languages: English
Types: Article

Imperceptibility and robustness are two complementary but fundamental requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but poor robustness, while high-strength watermarking achieves good robustness but often introduces distortions that degrade the visual quality of the host media. If the distortion caused by high-strength watermarking avoids visually attentive regions, it is unlikely to be noticed by any viewer. In this paper, we exploit this concept and propose a novel visual attention-based, highly robust image watermarking methodology that embeds lower- and higher-strength watermarks in visually salient and non-salient regions, respectively. A new low-complexity wavelet-domain visual attention model is proposed that enables the design of new robust watermarking algorithms. The proposed saliency model outperforms the state-of-the-art method in joint saliency detection and computational complexity. In the watermarking evaluation, the proposed blind and non-blind algorithms exhibit increased robustness to various natural image processing and filtering attacks with minimal or no effect on image quality, as verified by both subjective and objective visual quality evaluation. Improvements of up to 25% and 40% against JPEG2000 compression and common filtering attacks, respectively, are reported over existing algorithms that do not use a visual attention model.
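The saliency-adaptive embedding idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the saliency map here is a crude local-contrast placeholder (the paper proposes a wavelet-domain attention model), and the strength parameters `alpha_low` and `alpha_high` are illustrative assumptions. The key point is that embedding strength varies inversely with saliency, so strong watermarking is confined to non-salient regions.

```python
import numpy as np

def toy_saliency(img, k=9):
    """Crude center-surround contrast map: |pixel - local box mean|.

    A stand-in for a real visual attention model; returns values in [0, 1],
    where higher means more visually salient.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    local_mean = np.zeros((h, w), dtype=float)
    for dy in range(k):          # box filter via shifted-window summation
        for dx in range(k):
            local_mean += padded[dy:dy + h, dx:dx + w]
    local_mean /= k * k
    sal = np.abs(img - local_mean)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-9)

def embed(img, bits, alpha_low=0.02, alpha_high=0.10):
    """Multiplicative watermark with saliency-adaptive strength.

    `bits` is a {0, 1} array the same shape as `img`. Strength is
    alpha_high where saliency is 0 and alpha_low where saliency is 1,
    so the strongest embedding lands in the least attended regions.
    """
    sal = toy_saliency(img)
    alpha = alpha_high - (alpha_high - alpha_low) * sal
    return img + alpha * img * (2.0 * bits - 1.0)
```

A real system would replace `toy_saliency` with the wavelet-domain model and add a matching blind or non-blind extractor; the inverse saliency-to-strength mapping is the concept the paper builds on.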
Funded by projects

  • RCUK | Programmable embedded plat...