Guided Search 6.0: An updated model of visual search

JM Wolfe - Psychonomic Bulletin & Review, 2021 - Springer
Abstract This paper describes Guided Search 6.0 (GS6), a revised model of visual search.
When we encounter a scene, we can see something everywhere. However, we cannot …

Emotion, motivation, decision-making, the orbitofrontal cortex, anterior cingulate cortex, and the amygdala

ET Rolls - Brain Structure and Function, 2023 - Springer
The orbitofrontal cortex and amygdala are involved in emotion and in motivation, but the
relationship between these functions performed by these brain structures is not clear. To …

UC-Net: Uncertainty inspired RGB-D saliency detection via conditional variational autoencoders

J Zhang, DP Fan, Y Dai, S Anwar… - Proceedings of the …, 2020 - openaccess.thecvf.com
In this paper, we propose the first framework (UCNet) to employ uncertainty for RGB-D
saliency detection by learning from the data labeling process. Existing RGB-D saliency …

What to expect where and when: How statistical learning drives visual selection

J Theeuwes, L Bogaerts, D van Moorselaar - Trends in Cognitive Sciences, 2022 - cell.com
While the visual environment contains massive amounts of information, we should not and
cannot pay attention to all events. Instead, we need to direct attention to those events that …

Behavioral inattention

X Gabaix - Handbook of behavioral economics: Applications and …, 2019 - Elsevier
Inattention is a central, unifying theme for much of behavioral economics. It permeates such
disparate fields as microeconomics, macroeconomics, finance, public economics, and …

A rhythmic theory of attention

IC Fiebelkorn, S Kastner - Trends in Cognitive Sciences, 2019 - cell.com
Recent evidence has demonstrated that environmental sampling is a fundamentally
rhythmic process. Both perceptual sensitivity during covert spatial attention and the …

Five factors that guide attention in visual search

JM Wolfe, TS Horowitz - Nature Human Behaviour, 2017 - nature.com
How do we find what we are looking for? Even when the desired target is in the current field
of view, we need to search because fundamental limits on visual processing make it …

Evaluating the visualization of what a deep neural network has learned

W Samek, A Binder, G Montavon… - IEEE transactions on …, 2016 - ieeexplore.ieee.org
Deep neural networks (DNNs) have demonstrated impressive performance in complex
machine learning tasks such as image classification or speech recognition. However, due to …

Social eye gaze in human-robot interaction: a review

H Admoni, B Scassellati - Journal of Human-Robot Interaction, 2017 - dl.acm.org
This article reviews the state of the art in social eye gaze for human-robot interaction (HRI). It
establishes three categories of gaze research in HRI, defined by differences in goals and …

WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians

J Bernal, FJ Sánchez, G Fernández-Esparrach… - … medical imaging and …, 2015 - Elsevier
We introduce in this paper a novel polyp localization method for colonoscopy videos. Our
method is based on a model of appearance for polyps which defines polyp boundaries in …