As a follow-up to the ICME’17 Salient360! Grand Challenge, a new edition, the Grand Challenge “Salient360!: Visual Attention Modeling for 360° content”, was organized by the Université de Nantes at ICME’18 (July 2018, San Diego, USA). While the first edition set the baseline with several types of visual attention models for 360° images, along with ad-hoc methodologies and ground-truth data to test each type of model, this second edition focused on:
- Consolidating and improving the existing modeling approaches.
- Extending the types of models, including the prediction of head trajectories (Model Type 4).
- Extending the types of input content, including both images and videos.
For this edition, the organizers provided a new release of the image dataset used in the ICME’17 Grand Challenge and a dataset of 360° videos, so that participants could train and tune their algorithms before submitting them. These datasets contained the ground-truth saliency maps and scan-paths obtained from subjective tests at the Université de Nantes, together with scripts for parsing the data and computing metrics to compare saliency maps and scan-paths. A benchmark dataset, completely different from the training dataset, was also generated, and the corresponding ground-truth data was kept secret to ensure fair benchmarking of the models.
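The organizers' actual comparison scripts are not reproduced here, but as an illustration, two metrics commonly used to compare a predicted saliency map against a ground-truth one (Pearson correlation coefficient and KL divergence) can be sketched as follows. The function names and the NumPy-based implementation are illustrative assumptions, not the challenge's official code.

```python
import numpy as np

def correlation_coefficient(pred, gt):
    # Illustrative metric (not the official challenge script):
    # Pearson CC between two saliency maps after zero-mean, unit-std normalization.
    p = (pred - pred.mean()) / (pred.std() + 1e-12)
    g = (gt - gt.mean()) / (gt.std() + 1e-12)
    return float((p * g).mean())

def kl_divergence(pred, gt, eps=1e-12):
    # Illustrative metric: KL divergence of ground truth from prediction,
    # treating each map as a probability distribution (sums to 1).
    p = pred / (pred.sum() + eps)
    g = gt / (gt.sum() + eps)
    return float((g * np.log(eps + g / (p + eps))).sum())

# Toy check: comparing a map with itself gives CC = 1 and KL close to 0.
m = np.random.rand(64, 128)  # 2:1 aspect ratio, as in equirectangular frames
print(round(correlation_coefficient(m, m), 3))  # → 1.0
print(kl_divergence(m, m) < 1e-6)               # → True
```

A higher CC and a lower KL divergence both indicate a prediction closer to the ground truth; scan-path comparison requires different, trajectory-based metrics not shown here.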
A total of 34 models (23 for images and 11 for videos, covering the four model types) were submitted to the Grand Challenge; their performance is reported in the “UN Salient360! Benchmark Results”. In this edition, participants could also submit papers for publication in the ICME Proceedings after peer review, which resulted in the publication of the following papers from two participating teams:
- Fangyi Chao, Wassim Hamidouche, Lu Zhang, Olivier Deforges, “SalGAN360: Visual Saliency Prediction on 360 Degree Images with Generative Adversarial Networks”, ICME2018.
- Pierre Lebreton, Stephan Fremerey, Alexander Raake, “V-BMS360: A video extension to the BMS360 image saliency model”, ICME2018.
The winners for each track were:
- Winner on Prediction of Head Saliency (Model Type 1):
- Images: Kao Zhang, Yingxue Zhang, Zhenzhong Chen (Wuhan University)
- Videos: Smit Thakkar, Neelanshi Varia, Manish Narwaria (Dhirubhai Ambani Institute of Information and Communication Technology)
- Winner on Prediction of Head+Eye Saliency (Model Type 2):
- Images: Fangyi Chao, Wassim Hamidouche, Lu Zhang, Olivier Deforges (IETR-INSA Rennes)
- Videos: Kao Zhang, Yingxue Zhang, Zhenzhong Chen (Wuhan University)
- Winner on Prediction of Eye-gaze Scan-paths (Model Type 3):
- Images: Yucheng Zhu, Xiongkuo Min, Zhaohui Che, Guangtao Zhai (Shanghai Jiao Tong University)
- Videos: Yucheng Zhu, Xiongkuo Min, Zhaohui Che, Guangtao Zhai (Shanghai Jiao Tong University)
- Winner on Prediction of Head-gaze Scan-paths (Model Type 4):
- Images: Yucheng Zhu, Xiongkuo Min, Zhaohui Che, Guangtao Zhai (Shanghai Jiao Tong University)
- Videos: Yucheng Zhu, Xiongkuo Min, Zhaohui Che, Guangtao Zhai (Shanghai Jiao Tong University)
Thanks to Facebook’s sponsorship, a total of $10,000 in cash prizes was distributed among the winners and other special awardees.