Time to Integrate: Initial Scene Processing and Eye Movement Control
Melissa Le-Hoa Võ
Brigham and Women's Hospital, Harvard Medical School
We are remarkably good at extracting the gist of a scene from only a brief glimpse. However, we
usually do not stop at gist identification, but rather use the information extracted from the
first glimpse of a scene to plan subsequent actions such as eye movements. In a series of experiments,
we investigated how initial scene processing affects subsequent eye movement control during object
search in naturalistic scenes. For this purpose, we used the flash-preview moving-window paradigm
(Castelhano & Henderson, 2007), in which a scene is briefly previewed prior to visual search for a
target object, and search then proceeds through a gaze-contingent moving window that reveals only a
small area of the scene centered on the current fixation. This paradigm allows us to isolate the
effect of the initial scene glimpse on subsequent eye movements from the processing that takes place
during later stages of scene viewing.
The first set of experiments investigated the time course of initial scene processing by
manipulating scene presentation time as well as the time to integrate target and scene information
before the initiation of search. In the second set of experiments, we examined what scene information
is needed to efficiently guide gaze during object search by providing participants with identical or
angular scene previews, or merely scene category labels (e.g., "KITCHEN"). In this talk, I will give an
overview of the results and discuss their implications in light of the cognitive relevance
framework (Henderson et al., 2009).
(These experiments were performed during Melissa Võ's work at the University of Edinburgh.)