This website provides a benchmark platform, datasets, and toolboxes for evaluating and comparing the performance of models for saliency and scanpath prediction on 360° images and videos.
This platform extends the efforts initiated with the organization of the two Salient360! Grand Challenges at ICME’17 and ICME’18, providing a continuous benchmark of visual attention models for 360° content. This activity aims to support research on understanding how users watch and explore 360° content and on modelling visual attention, which is crucial for developing appropriate rendering, coding, and streaming techniques that deliver a good experience for users.
The first results, based on the performance of the models submitted to the ICME’18 Salient360! Grand Challenge, have been published. Do you think you can beat them? Then check the instructions for submitting your model to the benchmark under “Model Submission and evaluation”.
To cite this website please use the following:
- J. Gutiérrez, E. David, A. Coutrot, M. Perreira Da Silva, P. Le Callet, “Introducing UN Salient360! Benchmark: A platform for evaluating visual attention models for 360 contents”, International Conference on Quality of Multimedia Experience (QoMEX), Sardinia, Italy, May 2018.
For any questions, please do not hesitate to contact us by email at: email@example.com.