MSU Video Saliency Dataset (SAVAM)

SAVAM — Semiautomatic Visual-Attention Modeling

Introduction

Attention maps can be applied in many fields: user-interface design, computer graphics, video processing, etc. Many technologies, algorithms, and filters can be improved using information about the saliency distribution. During this work we created a database of human eye movements captured while viewing various videos (static and dynamic scenes, shots from cinema-like films, and scientific databases).

Key Features

  • High quality
    • Includes only FullHD and 4K UHDTV video sequences
    • Includes only stereoscopic video sequences
    • Eye movements were captured with a high-quality eye-tracking device, the SMI iViewX™ Hi-Speed 1250, at a 500 Hz sampling rate (20 gaze samples per frame; see the sketch after this list)
    • Additional post-processing was applied to improve the accuracy of the recordings
  • Diversity
    • 43 fragments of motion video from various feature movies, commercial clips and stereo video databases
    • About 13 minutes of video (19760 frames)
    • 50 observers of different ages (mostly 18–27 years old)
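
The 500 Hz sampling rate combined with 25 fps video works out to 500 / 25 = 20 gaze samples per frame. A common way to turn such raw gaze samples into dense per-frame saliency maps is to accumulate them into a fixation map and blur it with a Gaussian. The sketch below illustrates that general approach; the file name, column layout, frame rate, resolution, and blur radius are assumptions for illustration only, not the dataset's actual format or the authors' post-processing pipeline.

    # Minimal sketch (not the official SAVAM tooling): build blurred per-frame
    # saliency maps from raw gaze samples. File name, column layout, frame rate,
    # resolution, and sigma are illustrative assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    FPS = 25                              # assumed video frame rate
    RATE_HZ = 500                         # eye-tracker sampling rate reported for the dataset
    SAMPLES_PER_FRAME = RATE_HZ // FPS    # 500 / 25 = 20 gaze samples per frame
    WIDTH, HEIGHT = 1920, 1080            # FullHD frame size

    def saliency_maps(gaze_csv):
        """Yield a normalized saliency map for each frame.

        `gaze_csv` is a hypothetical text file with one line per gaze sample:
        frame_index, x_pixel, y_pixel (rows from all observers combined).
        """
        data = np.loadtxt(gaze_csv, delimiter=",", ndmin=2)
        n_frames = int(data[:, 0].max()) + 1
        for frame in range(n_frames):
            rows = data[data[:, 0] == frame]
            fixation_map = np.zeros((HEIGHT, WIDTH), dtype=np.float32)
            for _, x, y in rows:
                xi, yi = int(round(x)), int(round(y))
                if 0 <= xi < WIDTH and 0 <= yi < HEIGHT:
                    fixation_map[yi, xi] += 1.0
            # Gaussian blur approximating the spread of foveal vision
            # (sigma in pixels is a rough guess, not a dataset parameter)
            saliency = gaussian_filter(fixation_map, sigma=30)
            if saliency.max() > 0:
                saliency /= saliency.max()
            yield saliency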

Download

You can download the dataset and view detailed information here.

Written on December 12, 2020