Deployment of Feature-based Top-down Attention during Visual Search

 

Ueli Rutishauser,

Computation and Neural Systems, California Institute of Technology

urut@caltech.edu

 

Where the eyes fixate during search is not random; rather, gaze reflects the subject's expectation of where the target is likely to be. It is not clear, however, what information about the target is used to bias the underlying neuronal responses. We engaged subjects in a variety of simple visual search tasks while tracking their eye movements. We derive a generative model that reproduces these eye movements and calculate the conditional probabilities that, given the target, observers fixate on or near an item in the display sharing a specific feature with the target. We use these probabilities to infer which features were biased by top-down attention: color seems to be the dominant stimulus dimension guiding search, followed by object size and, lastly, orientation. We use the number of fixations it took to find the target as a measure of task difficulty. We find that only a model that biases multiple feature dimensions in a hierarchical manner can account for the data. Contrary to common assumptions, memory plays almost no role in search performance. Our model can be fit to the average data of multiple subjects or to individual subjects. Small variations of a few key parameters account well for the inter-subject differences. The model is compatible with neurophysiological findings on V4 and FEF neurons and predicts the gain modulation of these cells.
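
To make the idea of a memoryless generative search model with a hierarchical feature bias more concrete, a minimal sketch follows. It assumes (for illustration only) that each display item is weighted by how many target features it shares, with color weighted most strongly, then size, then orientation; previously fixated items are not excluded, reflecting the finding that memory plays almost no role. All names, weights, and the specific sampling rule are assumptions for illustration, not the fitted model from the paper.

import random

# Hypothetical per-feature gains: color dominates, then size, then orientation.
# These values are illustrative assumptions, not fitted parameters.
FEATURE_GAINS = {"color": 4.0, "size": 2.0, "orientation": 1.0}

def fixation_weight(item, target):
    """Weight an item by the target features it shares (hierarchical feature bias)."""
    w = 1.0  # baseline (bottom-up) attractiveness
    for feature, gain in FEATURE_GAINS.items():
        if item[feature] == target[feature]:
            w *= gain
    return w

def simulate_search(items, target, rng=random.Random(0), max_fixations=50):
    """Sample fixations without memory (visited items are not excluded)
    until the target is fixated; return the number of fixations needed."""
    for n in range(1, max_fixations + 1):
        weights = [fixation_weight(it, target) for it in items]
        fixated = rng.choices(items, weights=weights, k=1)[0]
        if fixated is target:
            return n
    return max_fixations

# Example conjunction-search display: find the small red vertical bar.
target = {"color": "red", "size": "small", "orientation": "vertical"}
distractors = [
    {"color": "red", "size": "large", "orientation": "horizontal"},
    {"color": "green", "size": "small", "orientation": "vertical"},
    {"color": "green", "size": "large", "orientation": "vertical"},
] * 5
items = distractors + [target]
print(simulate_search(items, target))

The printed fixation count corresponds to the measure of task difficulty used in the abstract; repeating the simulation over many displays would yield the kind of fixation-count distribution against which such a model could be compared.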