Watching you learn: What eye movements reveal about experience-based visual search
Barbara Hidalgo-Sotelo

Computational Visual Cognition Lab, Department of Brain and Cognitive Sciences, MIT

When people search their everyday environment, personal experience can bias where they look and what they expect to see when they look there. How might prior experience with a specific, familiar environment (e.g., your office) influence your search, relative to an unfamiliar environment (e.g., a stranger's office)? In this talk, I describe an experiment that investigated the influence of contextual priors on guiding visual search by monitoring eye movements as participants searched familiar real-world scenes. In the main manipulation, the expectation of target presence associated with the scene's identity was either: (1) strong, because scene identity was 100% predictive of target presence; (2) weak, because scene identity predicted a target location but the probability of target presence was only 50%; or (3) null, because scene identity was never repeated and thus predicted neither a location nor a response. After repeated searches, reaction time improved in the groups with strong and weak identity priors relative to the control, with the greatest improvement observed in the strong prior group. Eye movements were used to identify the locus of this reaction time improvement: a faster exploration stage (scan time) was observed in both groups with contextual priors, while only the strong prior group showed a shorter gaze duration on the target (time spent fixating the target before responding) over repeated searches. A potential role for experience-derived priors in visual search is briefly described. Finally, I present eye movement results suggesting a putative connection between the discriminability of a scene-embedded target and the duration of an eye fixation during search.