Visual Attention for Image Quality (VAIQ) Database
Possibly the best known and most frequently used image quality databases are the following three:
- IRCCyN/IVC database: provided by the Image and Video Communications (IVC) group at the Institut de Recherche en Communications et Cybernétique de Nantes (IRCCyN) in Nantes, France (Link: http://www2.irccyn.ec-nantes.fr/ivcdb/).
- LIVE database: provided by the Laboratory for Image and Video Engineering (LIVE) at the University of Texas at Austin, USA (Link: http://live.ece.utexas.edu/research/quality/subjective.htm).
- MICT database: provided by the Media Information and Communication Technology (MICT) laboratory at the University of Toyama, Japan (Link: http://mict.eng.u-toyama.ac.jp/mict/index2.html).
Many objective image quality metrics have been developed based on some or all of the above databases. As valuable as these databases are in providing a ground truth of subjective quality for the given set of images, they do not take into account that the saliency of visual content is usually not uniform. In other words, based on the above databases alone it is not possible to incorporate visual attention mechanisms into objective image quality metric design. To fill this gap we conducted an eye tracking experiment at the University of Western Sydney (UWS), Australia, in which we asked 15 human observers to view the reference images of the three image quality databases listed above. As a result, we obtained gaze patterns that can serve as a ground truth of saliency for the visual content of the 42 images used in the experiment.
We make these gaze patterns, and the saliency maps that we created from them, available in the Visual Attention for Image Quality (VAIQ) database. It should be noted that, even though our main target group is the image quality research community, the VAIQ database may also be highly valuable to other research disciplines, such as image segmentation and saliency-based image coding.
- Ulrich Engelke, Blekinge Institute of Technology, Sweden (firstname.lastname@example.org)
- Anthony Maeder, University of Western Sydney, Australia (email@example.com)
- Hans-Jürgen Zepernick, Blekinge Institute of Technology, Sweden (firstname.lastname@example.org)
The VAIQ database is described in detail in our MMSP 2009 paper (see Reference Information below; a copy of the paper can be obtained on request). The files that are included in the VAIQ database are the following:
- The gaze patterns of 15 observers
- The saliency maps that we created as described in the paper
- The saliency images as presented in the paper
- A 'readme' file containing all information necessary to use the VAIQ database
The VAIQ database can be downloaded here (the saliency maps have been divided into 3 sets of 14 maps each to reduce the size of each ZIP file):
- Gaze patterns in Excel spreadsheets (vaiq_gaze_points_excel.zip, approx. 16.9 MB)
- Gaze patterns in Matlab workspace (vaiq_gaze_points_matlab.zip, approx. 3.6 MB)
- Saliency maps Set 1 (vaiq_saliency_maps_1.zip, approx. 32.4 MB)
- Saliency maps Set 2 (vaiq_saliency_maps_2.zip, approx. 34.2 MB)
- Saliency maps Set 3 (vaiq_saliency_maps_3.zip, approx. 36.5 MB)
- Saliency images (vaiq_saliency_images.zip, approx. 13.6 MB)
- VAIQ readme file (vaiq_readme.zip, approx. 4 kB)
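The exact file formats are documented in the readme; as a rough illustration of how fixation data of the kind distributed here is commonly turned into a saliency map, the sketch below accumulates a Gaussian blob at each fixation point and normalises the result. This is a generic, hypothetical example: the function name, the coordinate convention, and the Gaussian spread (sigma) are illustrative choices, not the procedure used to create the VAIQ saliency maps.

```python
import numpy as np

def fixations_to_saliency(fixations, shape, sigma=30.0):
    """Build a saliency map by summing Gaussian blobs at fixation points.

    fixations: iterable of (x, y) pixel coordinates (illustrative convention)
    shape:     (height, width) of the viewed image
    sigma:     Gaussian spread in pixels (a modelling choice, not from VAIQ)
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]          # pixel coordinate grids
    sal = np.zeros(shape, dtype=float)
    for x, y in fixations:
        # add an isotropic Gaussian centred on the fixation
        sal += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
    if sal.max() > 0:
        sal /= sal.max()                 # normalise to [0, 1]
    return sal

# Example: two fixations on a small 40x60 image
saliency = fixations_to_saliency([(15, 10), (45, 30)], (40, 60), sigma=5.0)
```

Averaging such maps over all observers would give a population saliency map; the choice of sigma (often related to the eye tracker's accuracy or the foveal extent) strongly affects how smooth the result is.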
The ZIP files are password protected; please send an email to Ulrich Engelke (email@example.com) to obtain the password. A few lines about your background and your intended use of the VAIQ database would be highly appreciated.
If you use the VAIQ database for your research, please refer to our paper and to this website as follows:
 U. Engelke, A. J. Maeder, and H.-J. Zepernick, "Visual Attention Modeling for Subjective Image Quality Databases," in Proc. of IEEE Int. Workshop on Multimedia Signal Processing (MMSP), October 2009.
 U. Engelke, A. J. Maeder, and H.-J. Zepernick, "Visual Attention for Image Quality Database," http://www.bth.se/tek/rcg.nsf/pages/vaiq-db, 2009.
Other related publications:
 U. Engelke, H. Liu, H.-J. Zepernick, I. Heynderickx, and A. Maeder, "Comparing Two Eye Tracking Databases: The Effect of Experimental Setup and Image Presentation Time on the Creation of Saliency Maps," in Proc. of IEEE Picture Coding Symposium (PCS), December 2010.
 U. Engelke, A. J. Maeder, and H.-J. Zepernick, "Analysing Inter-Observer Saliency Variations in Task-Free Viewing of Natural Images," in Proc. of IEEE Int. Conference on Image Processing (ICIP), September 2010.
 U. Engelke, A. J. Maeder, and H.-J. Zepernick, "The Effect of Spatial Distortion Distributions on Human Viewing Behaviour when Judging Image Quality," in Proc. of European Conference on Visual Perception (ECVP), p. 22, August 2009.
 U. Engelke, H.-J. Zepernick, and A. J. Maeder, "Visual Attention Modeling: Region-of-Interest versus Fixation Patterns," in Proc. of IEEE Picture Coding Symposium (PCS), May 2009.
Permission is hereby granted, without written agreement and without license or royalty fees, to use, copy, modify, and distribute the data provided and its documentation for research purposes only. The data provided may not be commercially distributed. In no event shall the Blekinge Institute of Technology and the University of Western Sydney be liable to any party for direct, indirect, special, incidental, or consequential damages arising out of the use of the data and its documentation. The Blekinge Institute of Technology and the University of Western Sydney specifically disclaim any warranties. The data provided is on an "as is" basis and the Blekinge Institute of Technology and the University of Western Sydney have no obligation to provide maintenance, support, updates, enhancements, or modifications.