Actions in the eye: Dynamic gaze datasets and learnt saliency models for visual recognition

S Mathe, C Sminchisescu - IEEE Transactions on Pattern …, 2014 - ieeexplore.ieee.org
Systems based on bag-of-words models from image features collected at maxima of sparse
interest point operators have been used successfully for both computer visual object and …
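
For context on the bag-of-words pipeline this abstract refers to, a minimal sketch follows: local descriptors (assumed to be extracted around sparse interest points by some other component) are quantized against a k-means vocabulary and pooled into a normalized histogram. The vocabulary size, descriptor dimensionality, and random data below are illustrative assumptions, not details from the paper.

```python
# Minimal bag-of-words encoding sketch (illustrative, not the papers' code).
# Assumes local descriptors (e.g. HOG/HOF around sparse interest points)
# have already been extracted as rows of a float array.
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(train_descriptors, vocab_size=256, seed=0):
    """Cluster pooled training descriptors into a visual vocabulary."""
    return KMeans(n_clusters=vocab_size, n_init=10, random_state=seed).fit(train_descriptors)

def bow_histogram(descriptors, vocab):
    """Quantize one clip's descriptors and return an L1-normalized histogram."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy usage with random stand-ins for real descriptors.
rng = np.random.default_rng(0)
vocab = build_vocabulary(rng.normal(size=(5000, 96)), vocab_size=64)
feature = bow_histogram(rng.normal(size=(300, 96)), vocab)  # shape (64,)
```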

Dynamic eye movement datasets and learnt saliency models for visual action recognition

S Mathe, C Sminchisescu - Computer Vision–ECCV 2012: 12th European …, 2012 - Springer
Systems based on bag-of-words models operating on image features collected at
maxima of sparse interest point operators have been extremely successful for both computer …

360-degree video gaze behaviour: A ground-truth data set and a classification algorithm for eye movements

I Agtzidis, M Startsev, M Dorr - Proceedings of the 27th ACM international …, 2019 - dl.acm.org
Eye tracking and the analysis of gaze behaviour are established tools to produce insights
into how humans observe their surroundings and consume visual multimedia content. For …
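
As background for the eye-movement classification this entry mentions, here is a minimal velocity-threshold (I-VT) sketch over ordinary 2D gaze angles; the paper's own algorithm is designed for 360-degree content and accounts for head motion, which this toy version ignores. The sampling rate and saccade threshold are illustrative assumptions.

```python
# Toy I-VT eye-movement classifier (illustrative only; not the paper's 360-degree method).
import numpy as np

def classify_ivt(x_deg, y_deg, sample_rate_hz=250.0, saccade_thresh_deg_s=30.0):
    """Label each gaze sample as 'fixation' or 'saccade' by angular velocity."""
    vx = np.gradient(x_deg) * sample_rate_hz   # deg/s, horizontal component
    vy = np.gradient(y_deg) * sample_rate_hz   # deg/s, vertical component
    speed = np.hypot(vx, vy)
    return np.where(speed > saccade_thresh_deg_s, "saccade", "fixation")

# Toy usage: a slow horizontal drift with one fast 10-degree jump at t = 1 s.
t = np.arange(500) / 250.0
x = np.where(t < 1.0, t * 2.0, t * 2.0 + 10.0)
y = np.zeros_like(x)
labels = classify_ivt(x, y)
```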

OpenNEEDS: A dataset of gaze, head, hand, and scene signals during exploration in open-ended VR environments

KJ Emery, M Zannoli, J Warren, L Xiao… - ACM Symposium on Eye …, 2021 - dl.acm.org
We present OpenNEEDS, the first large-scale, high frame rate, comprehensive, and open-source
dataset of Non-Eye (head, hand, and scene) and Eye (3D gaze vectors) data …

Invisibleeye: Mobile eye tracking using multiple low-resolution cameras and learning-based gaze estimation

M Tonsen, J Steil, Y Sugano, A Bulling - Proceedings of the ACM on …, 2017 - dl.acm.org
Analysis of everyday human gaze behaviour has significant potential for ubiquitous
computing, as evidenced by a large body of work in gaze-based human-computer …

Towards end-to-end video-based eye-tracking

S Park, E Aksan, X Zhang, O Hilliges - … , Glasgow, UK, August 23–28, 2020 …, 2020 - Springer
Estimating eye-gaze from images alone is a challenging task, in large parts due to
unobservable person-specific factors. Achieving high accuracy typically requires labeled data …

What/where to look next? Modeling top-down visual attention in complex interactive environments

A Borji, DN Sihite, L Itti - IEEE Transactions on Systems, Man …, 2013 - ieeexplore.ieee.org
Several visual attention models have been proposed for describing eye movements over
simple stimuli and tasks such as free viewing or visual search. Yet, to date, there exists no …

Deep-SAGA: a deep-learning-based system for automatic gaze annotation from eye-tracking data

O Deane, E Toth, SH Yeo - Behavior Research Methods, 2023 - Springer
With continued advancements in portable eye-tracker technology liberating experimenters
from the restraints of artificial laboratory designs, research can now collect gaze data from …

Learning to predict sequences of human visual fixations

M Jiang, X Boix, G Roig, J Xu… - IEEE Transactions on …, 2016 - ieeexplore.ieee.org
Most state-of-the-art visual attention models estimate the probability distribution of fixating
the eyes in a location of the image, the so-called saliency maps. Yet, these models do not …
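
Since this abstract defines saliency maps as fixation-probability distributions over image locations, a minimal sketch of the standard empirical construction (a Gaussian-smoothed fixation histogram) may help orient readers; the blur width and image size are arbitrary assumptions, and this is not the predictive model the paper proposes.

```python
# Empirical saliency map from recorded fixations (standard construction, not the paper's model).
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_saliency_map(fixations_xy, height, width, sigma_px=25.0):
    """Accumulate fixation points into a grid, blur, and normalize to a distribution."""
    grid = np.zeros((height, width), dtype=float)
    for x, y in fixations_xy:
        grid[int(round(y)), int(round(x))] += 1.0
    smooth = gaussian_filter(grid, sigma=sigma_px)
    return smooth / smooth.sum()

# Toy usage: three fixations on a 480x640 image.
smap = fixation_saliency_map([(100, 200), (320, 240), (500, 400)], height=480, width=640)
```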

Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities

R Kothari, Z Yang, C Kanan, R Bailey, JB Pelz… - Scientific Reports, 2020 - nature.com
The study of gaze behavior has primarily been constrained to controlled environments in
which the head is fixed. Consequently, little effort has been invested in the development of …