The first edition of the Grand Challenge “Salient360!: Visual attention modeling for 360° images” was organized at IEEE ICME’17 (July 2017, Hong Kong, China) by the University of Nantes and Technicolor, and co-sponsored by Oculus, ITN MCSA PROVISION and “Ouest Industries Créatives” (a Research, Education and Innovation cluster of the Region Pays de la Loire, France).

The motivation for organizing this challenge was to understand how users watch 360° images and to analyze how they scan through the content with a combination of head and eye movements. This understanding is crucial for developing appropriate technologies for processing, encoding, delivering and rendering media content in order to provide high-quality immersive experiences to users. In addition, although a huge number of algorithms had been developed in recent years to gauge visual attention in flat 2D images and videos, attention studies in 360° scenarios were absent.

Therefore, the goals of this challenge were:

  • Align the saliency-modeling community around the application of saliency models to the recently emerged VR and 360° media technologies.
  • Produce a dataset to ensure easy and precise reproducibility of results for future saliency/scan-path computational models, in line with the IEEE principles of Reproducible and Sustainable research.
  • Set a first baseline for the taxonomy of several types of visual attention models (saliency models, importance models, saccadic models), and for the correct methodology and ground-truth data to test each of them.
  • Provide first insights on the analysis of the dataset and on how observers explore 360° images.
  • Generate an appropriate framework for benchmarking models.
  • Help in the development of modeling approaches, taking into account how to go from saliency models for 2D or low-resolution images to appropriate models for omnidirectional images.

The organizers provided researchers with a database of 360° images, together with associated ground-truth data from real user experiments conducted at Technicolor and at the University of Nantes, including head saliency maps, head-eye saliency maps, and eye-gaze scan-paths. In addition, a toolbox was provided containing scripts to parse the ground-truth data and to evaluate the performance of the models. The dataset was divided into a training dataset, released to the participants to train and tune their models, and a verification dataset, different from the training dataset and released only after model submission, which was used to evaluate and compare the performance of the submitted models.
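For illustration, the following minimal Python sketch shows the kind of comparison such evaluation scripts perform: a predicted equirectangular saliency map is scored against a ground-truth map using two metrics commonly used in saliency benchmarking (linear correlation coefficient and KL divergence). The file names and the cosine-latitude weighting (which compensates for the oversampling of the poles in equirectangular projection) are illustrative assumptions, not the actual interface or metrics of the challenge toolbox.

    import numpy as np

    def latitude_weights(height, width):
        # Equirectangular projection oversamples the poles; weighting each
        # pixel by cos(latitude) compensates for this when pooling statistics.
        lat = (np.arange(height) + 0.5) / height * np.pi - np.pi / 2
        return np.repeat(np.cos(lat)[:, None], width, axis=1)

    def pearson_cc(pred, gt, w):
        # Weighted Pearson linear correlation coefficient (CC) between maps.
        pm = np.average(pred, weights=w)
        gm = np.average(gt, weights=w)
        cov = np.average((pred - pm) * (gt - gm), weights=w)
        return cov / np.sqrt(np.average((pred - pm) ** 2, weights=w)
                             * np.average((gt - gm) ** 2, weights=w))

    def kl_divergence(pred, gt, w, eps=1e-12):
        # KL divergence of the ground truth from the prediction, after
        # normalizing both weighted maps to probability distributions.
        p = pred * w
        p /= p.sum()
        g = gt * w
        g /= g.sum()
        return float(np.sum(g * np.log(g / (p + eps) + eps)))

    # Hypothetical file names; the challenge toolbox defined its own formats.
    pred = np.load("predicted_saliency.npy").astype(np.float64)
    gt = np.load("ground_truth_saliency.npy").astype(np.float64)
    w = latitude_weights(*gt.shape)
    print(f"CC = {pearson_cc(pred, gt, w):.3f}, "
          f"KL = {kl_divergence(pred, gt, w):.3f}")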

A Special Issue of the Elsevier journal “Signal Processing: Image Communication” was published, describing the whole Grand Challenge. For details on the workflow, dataset, toolbox, and results, please check the following paper; the submitted models are described in individual papers indicated in the references:

  • Jesús Gutiérrez, Erwan J. David, Yashas Rai, and Patrick Le Callet, “Toolbox and dataset for the development of saliency and scan-path models for omnidirectional/360° still images”, Signal Processing: Image Communication, 2018.

A total of 33 models were submitted to the Grand Challenge, covering Model Types 1, 2 and 3 for images (as described in the section “Model Submission and Evaluation”), and three winners were selected, one for each category:

  • Winner on Prediction of Head Saliency (Model Type 1): Pierre Lebreton and Alexander Raake (Zhejiang University / TU Ilmenau)
    • P. Lebreton, A. Raake, “GBVS360, BMS360, ProSal: Extending existing saliency prediction models from 2D to omnidirectional images”, Signal Processing: Image Communication, 2018.
  • Winner on Prediction of Head+Eye Saliency (Model Type 2): Mikhail Startsev and Michael Dorr (TU Munich)
    • M. Startsev, M. Dorr, “360-aware saliency estimation with conventional image saliency predictors”, Signal Processing: Image Communication, 2018.
  • Winner on Prediction of Eye-gaze Scan-paths (Model Type 3): M. Assens, K. McGuinness, X. Giró-i-Nieto, N. E. O’Connor (Insight Centre for Data Analytics / Universitat Politècnica de Catalunya)
    • M. Assens, K. McGuinness, X. Giró-i-Nieto, N. E. O’Connor, “Scanpath and saliency prediction on 360 degree images”, Signal Processing: Image Communication, 2018.