The following datasets and tools have been made available to those interested in developing and benchmarking their models:
- Training dataset: 360º images and videos with their corresponding ground-truth saliency maps and scan-paths (matching the different model types), so you can train and tune your algorithms as needed and compute benchmark scores on it as a reference.
- Benchmark dataset: 360º images and videos used to evaluate model performance, fully disjoint from the training dataset. The corresponding ground-truth head- and eye-movement data is kept secret to ensure fair benchmarking (preventing models from being trained on it).
- Toolbox: Scripts to parse the provided data and to compute metrics that compare predicted saliency maps and scan-paths against the ground truth, for assessing model performance.
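To illustrate the kind of metrics such a toolbox computes, here is a minimal sketch (not the actual toolbox code) of two widely used saliency-map measures, Pearson correlation coefficient (CC) and KL divergence, assuming the maps are provided as NumPy arrays of equal shape:

```python
import numpy as np

def pearson_cc(pred, gt):
    """CC: Pearson correlation between two saliency maps (1.0 = identical up to scale)."""
    p = (pred - pred.mean()) / (pred.std() + 1e-12)
    g = (gt - gt.mean()) / (gt.std() + 1e-12)
    return float((p * g).mean())

def kl_divergence(pred, gt, eps=1e-12):
    """KL: treats each map as a probability distribution; 0 = identical distributions."""
    p = pred / (pred.sum() + eps)
    g = gt / (gt.sum() + eps)
    return float(np.sum(g * np.log(g / (p + eps) + eps)))

# Tiny synthetic example with a random "map"; identical inputs give CC ~ 1 and KL ~ 0.
rng = np.random.default_rng(0)
m = rng.random((16, 32))
cc = pearson_cc(m, m)
kl = kl_divergence(m, m)
```

Note that these metric names and signatures are illustrative assumptions; the provided toolbox scripts define the authoritative implementations, and scan-path comparison typically uses different, sequence-based measures.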