
Training Dataset

The training dataset, which you can freely use to train and tune your algorithms before benchmarking, contains:

  • 85 equirectangular images.
  • 19 equirectangular videos.
  • Ground-truth saliency maps and scan-paths (corresponding to the different types of models) obtained from subjective experiments (free exploration with tracking of eye and head movements); see the sketch after this list for one way to inspect them.
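The exact file formats are documented in the "Readme" files mentioned below, but as a rough illustration, here is a minimal Python sketch of overlaying a ground-truth saliency map on its equirectangular image. The file names, the grayscale-image storage format, and the resolution handling are assumptions for illustration only, not the dataset's actual conventions:

```python
# Minimal sketch: blend a grayscale saliency map over an equirectangular
# image. Check the "Readme" files on the FTP server for the real formats.
import numpy as np
from PIL import Image

def load_saliency_overlay(image_path, saliency_path, alpha=0.5):
    """Blend a grayscale saliency map over an equirectangular image."""
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32)
    sal = np.asarray(Image.open(saliency_path).convert("L"), dtype=np.float32)
    # Resize the saliency map to the image resolution if they differ.
    if sal.shape != img.shape[:2]:
        sal = np.asarray(
            Image.fromarray(sal.astype(np.uint8)).resize(
                (img.shape[1], img.shape[0])
            ),
            dtype=np.float32,
        )
    sal = sal / (sal.max() + 1e-8)   # normalize to [0, 1]
    heat = np.zeros_like(img)
    heat[..., 0] = 255.0 * sal       # red channel carries the saliency
    blended = (1 - alpha) * img + alpha * heat
    return Image.fromarray(blended.astype(np.uint8))

# Hypothetical file names; the real layout is described in the Readme files.
overlay = load_saliency_overlay("P1.jpg", "P1_saliency.png", alpha=0.4)
overlay.save("P1_overlay.png")
```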

For more details, please refer to the following publications, which should also be cited if you use these datasets:

  • Images:
    • Yashas Rai, Patrick Le Callet and Philippe Guillotel, “Which saliency weighting for omni directional image quality assessment?”, Proceedings of the IEEE Ninth International Conference on Quality of Multimedia Experience (QoMEX’17), Erfurt, Germany, pp. 1-6, Jun. 2017.
    • Yashas Rai, Jesús Gutiérrez, and Patrick Le Callet, “A Dataset of Head and Eye Movements for 360 Degree Images”, Proceedings of the 8th ACM on Multimedia Systems Conference (MMSys’17), Taipei, Taiwan, pp. 205-210, Jun. 2017.
  • Videos:
    • Erwan J. David, Jesús Gutiérrez, Antoine Coutrot, Matthieu Perreira Da Silva, and Patrick Le Callet, “A Dataset of Head and Eye Movements for 360° Videos”, Proceedings of the 9th ACM on Multimedia Systems Conference (MMSys’18), Amsterdam, Netherlands, Jun. 2018.

You can download the datasets using an FTP client and the following information:

Please read the “Readme” files on the FTP server (within the “images” and “videos” folders) for details about the file formats, how the files were generated, and the experiment carried out to obtain the raw data.
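For a scripted download, here is a minimal sketch using Python's standard ftplib module. The host name, credentials, and folder contents are placeholders to be replaced with the connection information above; it also assumes the remote folder contains only regular files:

```python
# Minimal sketch: fetch the dataset over FTP with the standard library.
from ftplib import FTP

HOST = "ftp.example.org"   # placeholder: use the real host
USER = "username"          # placeholder: use the provided login
PASSWORD = "password"      # placeholder: use the provided password

with FTP(HOST) as ftp:
    ftp.login(USER, PASSWORD)
    ftp.cwd("images")      # the "images" and "videos" folders hold the data
    for name in ftp.nlst():
        if name in (".", ".."):
            continue
        with open(name, "wb") as f:
            ftp.retrbinary("RETR " + name, f.write)
        print("downloaded", name)
```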

Note: These datasets were used in the ICME’18 Grand Challenge. However, please be aware that the image dataset differs from the one used in the ICME’17 Grand Challenge: although the stimuli are the same, modifications were made that resulted in different saliency maps and scan-paths. Therefore, you should use this new release of the dataset and not the one from 2017.