Research
What is visual search?
Loosely following William James, we can assert that everyone knows what visual search tasks are because everyone does them all the time. Visual search tasks are those where one looks for something. Real-world examples include searching for tumors or other critical information in X-rays, searching for the right piece of a jigsaw puzzle, or searching for the correct key on the keyboard when you are still in the "hunt and peck" stage of typing.
In the lab, a visual search task might look something like Figure 1. If you fixate on the * in Figure 1, you will probably find an "X" immediately. It seems to "pop out" of the display. However, if you are asked to find the letter "T", you may not see it until some sort of additional processing is performed. Assuming that you maintained fixation, the retinal image did not change. Your attention to the "T" changed your ability to identify it as a "T".
Figure 1. Fixating on the "*", find the X and T
Processing all items at once ("in parallel") provides enough information to allow us to differentiate an "X" from an "L". However, the need for some sort of covert deployment of attention in series from letter to letter in the search for the "T" indicates that we cannot fully process all of the visual stimuli in our field of view at one time (e.g. Tsotsos, 1990). Similar limitations appear in many places in cognitive processing.
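The contrast between parallel "pop-out" and a serial, item-by-item scan can be illustrated with a toy reaction-time model. This is only a sketch, not the lab's actual model; the 400 ms baseline and 50 ms-per-item cost are made-up illustrative numbers. The key qualitative prediction is that parallel search yields flat response times across set sizes, while a serial, self-terminating scan yields times that grow with the number of items.

```python
import random

def simulate_rt(set_size, parallel, base_rt=400, ms_per_item=50):
    """Toy reaction time (ms) for one target-present trial.

    parallel=True: all items are processed at once (feature "pop-out"),
    so RT does not depend on set size.
    parallel=False: a serial, self-terminating scan; on average the
    target is found after examining half of the items.
    """
    if parallel:
        return base_rt
    items_checked = random.randint(1, set_size)  # target found at a random step
    return base_rt + ms_per_item * items_checked

# Average over many simulated trials for each display size.
random.seed(0)
for n in (4, 8, 16):
    serial = sum(simulate_rt(n, parallel=False) for _ in range(10000)) / 10000
    popout = simulate_rt(n, parallel=True)
    print(f"set size {n:2d}: pop-out = {popout} ms, serial mean = {serial:.0f} ms")
```

The slope of the serial RT-by-set-size function (here, roughly half the per-item cost on target-present trials) is the classic signature used to argue about the efficiency of a given search.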
It is important to distinguish covert deployment of attention from movements of the eyes. If you fixate on the * in Figure 2, you will find that not only does the "T" not pop out, it cannot be identified until it is foveated. It is hidden from the viewer by the limitations of peripheral visual processing.
Figure 2. Find the T
You can identify the stimuli in Figure 1 while fixating the central "*". This is not to say that you did not move your eyes - only that you did not need to move your eyes. For most of the experiments discussed below, eye movements were uncontrolled. While interesting, eye movements are probably not the determining factor in visual searches of the sort discussed in this review - those with relatively large items spaced fairly widely to limit peripheral crowding effects (Levi, Klein & Aitsebaomo, 1985). For instance, when Klein & Farrell (1989) and Zelinsky (1993), using stimuli of this sort, had participants perform the search tasks with and without overt eye movements, they obtained the same pattern of RT data regardless of the presence or absence of eye movements. The eye movements were not random. They simply did not constrain the RTs even though eye movements and attentional deployments are intimately related (Hoffman & Subramaniam, 1995; Khurana & Kowler, 1987; Kowler, Anderson, Dosher & Blaser, 1995).
Excerpt from Wolfe, J. M. (1996). Visual search. In H. Pashler (Ed.), Attention. London, UK: University College London Press.
Preattentive Vision
A preattentive feature is typically defined as one that guides visual attention and cannot be decomposed into simpler preattentive features. Preattentive processing is the processing that occurs before selective attention reaches an object or location. Preattentive features are generally assumed to be processed in parallel across the visual field (e.g., "give me all the red stuff"). This can be seen as distinct from other forms of guidance that might be more spatially localized (e.g., location cuing or scene guidance to, say, the ground plane).
Wolfe, J. M. (2005). Guidance of Visual Search by Preattentive Information. In L. Itti & G. Rees & J. Tsotsos (Eds.), Neurobiology of attention (pp. 101-104). San Diego, CA: Academic Press / Elsevier.
Wolfe, J.M. & Utochkin, I.S. (2019). What is a preattentive feature? Current Opinion in Psychology, 29, 19-26.
Post-Attentive Vision
This line of research seeks to understand what happens to the visual representation of a previously attended object when attention is deployed elsewhere. We have found that when attention has been deployed to an item but is now deployed elsewhere, the postattentive representation is indistinguishable from the preattentive state of the object. This leads us to claim that visual perception of the world is not a cumulative construction. Your understanding of what you are looking at may develop over time, but attention only alters the perception of an object while it is directed to that object.
Wolfe, J.M., Klempen, N., & Dahlen, K. (2000). Postattentive vision. Journal of Experimental Psychology: Human Perception and Performance, 26(2), 693-716.
Attentional Guidance
When we look for something, we typically deploy attention from item to item until we find the target of our search or we give up. These deployments of attention are not random; rather, they are "guided". This idea is at the heart of our Guided Search model (now in version 6.0). There are a variety of forces that contribute to the guidance of attention. Guided Search 6.0 speaks of five forms of guidance, though one could group forms of guidance differently. These are:
Bottom-up stimulus-driven salience - The more salient an object is, the more likely it is to attract attention.
Top-down, user-driven guidance - Your attention is attracted to items that share preattentive features with your target. Thus, if you are looking for golf balls, any items that are white and/or round will attract attention.
History - If you found it before, it is more likely to attract attention now.
Value - If you were rewarded for attending to a specific feature, it will be more likely to attract attention, even if the feature is now irrelevant.
Scene guidance - The structure of the 3D world constrains where many targets can be. Whatever else your cat is doing, it is probably not on the ceiling.
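The core idea - attention is deployed in order of priority, with priority computed as a combination of these guidance signals - can be sketched in a few lines. This is only an illustrative sketch, not Guided Search 6.0 itself: the 4x4 grid, the random "maps", and the weights below are all invented for the example, whereas the model's real guidance signals are feature-based and the weights are set by the searcher's task.

```python
import random

random.seed(1)
locations = [(r, c) for r in range(4) for c in range(4)]  # a toy 4x4 display

# Hypothetical guidance maps: each source assigns a score to every location.
sources = ["bottom_up", "top_down", "history", "value", "scene"]
maps = {s: {loc: random.random() for loc in locations} for s in sources}

# Illustrative weights; e.g., top-down feature guidance weighted most heavily.
weights = {"bottom_up": 1.0, "top_down": 2.0, "history": 0.5,
           "value": 0.5, "scene": 1.5}

# Priority map = weighted sum of the guidance sources at each location.
priority = {loc: sum(weights[s] * maps[s][loc] for s in sources)
            for loc in locations}

# Attention is deployed to locations in decreasing order of priority.
visit_order = sorted(locations, key=priority.get, reverse=True)
print("first three attended locations:", visit_order[:3])
```

Changing the weights changes the order in which attention visits the display, which is how a single mechanism can produce both efficient and inefficient searches.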
Wolfe, J.M. (2021). Guided Search 6.0: An updated model of visual search. Psychon Bull Rev 28, 1060–1092. https://doi.org/10.3758/s13423-020-01859-9.
Wolfe, J. M., & Horowitz, T. S. (2017). Five factors that guide attention in visual search. Nature Human Behaviour, 1, 0058. doi:https://doi.org/10.1038/s41562-017-0058
Look But Fail To See
Look But Fail To See (LBFTS) errors occur when sufficiently expert observers miss a target that is clearly visible in the current field of view. Missing a typo on this page might be an example. So is missing a visible tumor in a mammogram, assuming that you have the expertise to identify such a target. These errors are a by-product of normal processes of visual attention. Of course, LBFTS errors can have serious negative consequences, so we are interested in understanding them and, possibly, reducing their frequency, especially in socially important search tasks.
Wolfe, J. M., Kosovicheva, A., Wolfe, B. (2022). Normal blindness: when we Look But Fail To See. Trends in Cognitive Sciences, 26(9), 809-819. https://doi.org/10.1016/j.tics.2022.06.006.
Foraging
Foraging tasks are search tasks where searchers are collecting multiple instances of the target (or targets). People and animals alike engage in foraging tasks regularly. Picking berries off a bush is a good example of a foraging task. In studying these tasks, we are often less interested in how the berry gets selected and more interested in the question of when the forager decides it is time to move to the next berry bush. Charnov's "Marginal Value Theorem" has provided a framework for much of our research on this topic.
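The logic of the Marginal Value Theorem can be worked through with a toy patch model. This is a sketch with made-up numbers (patch capacity, depletion rate, and travel time are all invented for illustration): as a bush is picked over, the instantaneous rate of return falls, and the theorem says the forager should leave when that rate drops to the best achievable long-run average rate across bushes.

```python
import math

# Toy patch: cumulative berries collected after t seconds at one bush,
# with diminishing returns. G, r, and travel are illustrative numbers.
G, r, travel = 100.0, 0.1, 20.0  # capacity, depletion rate, travel time (s)

def gain(t):
    return G * (1 - math.exp(-r * t))

def overall_rate(t):
    # Long-run intake rate if the forager stays t seconds at every bush.
    return gain(t) / (t + travel)

# Find the stay time that maximizes the long-run rate by grid search.
best_t = max((t / 10 for t in range(1, 2000)), key=overall_rate)
print(f"optimal time per bush = {best_t:.1f} s, "
      f"rate = {overall_rate(best_t):.2f} berries/s")
```

At the optimal leaving time, the instantaneous picking rate in the current bush equals the long-run average rate across the environment - the signature result of the Marginal Value Theorem.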
Wolfe, J. M. (2013). When is it time to move to the next raspberry bush? Foraging rules in human visual search. Journal of Vision, 13(3), 1-17.
Real-World Applications
The work done in this lab may be experimental, but it has real applications. People search for things every day, multiple times a day: their keys, their kid's favorite stuffed animal. Search tasks also appear in higher-risk environments. We work in medical imaging to understand how to help radiologists save time and increase accuracy, and we study how even trained airport checkpoint screeners miss rare targets. In settings like these, visual search errors can have life-altering real-world consequences. These are the types of errors we hope to understand and take steps toward minimizing.
For example:
Evans, K. K., Birdwell, R. L., & Wolfe, J. M. (2013). If You Don't Find It Often, You Often Don't Find It: Why Some Cancers Are Missed in Breast Cancer Screening. PLoS ONE, 8(5), e64366. doi:10.1371/journal.pone.0064366.
Wolfe, J.M., Brunelli, D.N., Rubinstein, J., & Horowitz, T.S. (2013). Prevalence effects in newly trained airport checkpoint screeners: Trained observers miss rare targets, too. Journal of Vision, 13(3).
Wolfe, J. M., Lyu, W., Dong, J., & Wu, C.-C. (2022). What eye tracking can tell us about how radiologists use automated breast ultrasound. J Med Imaging (Bellingham), 9(4), 045502. doi:10.1117/1.JMI.9.4.045502.
More...
Obviously, this page is giving only brief pointers to some of the topics that we study. For more of an introduction, here are a few reasonably recent review articles.
Wolfe, J. M. (2014). Approaches to Visual Search: Feature Integration Theory and Guided Search. In A. C. Nobre & S. Kastner (Eds.), Oxford Handbook of Attention (pp. 11-55). New York: Oxford University Press.
Wolfe, J. M. (2018). Visual Search. In J. Wixted (Series Ed.) & J. Serences (Vol. Ed.), Stevens' Handbook of Experimental Psychology and Cognitive Neuroscience (Vol. II: Sensation, Perception & Attention, pp. 569-623). Wiley.
Wolfe, J. M., & Horowitz, T. S. (2017). Five factors that guide attention in visual search. Nature Human Behaviour, 1, 0058. doi:https://doi.org/10.1038/s41562-017-0058
Wolfe, J. M., Evans, K. K., & Drew, T. (2018). The first moments of medical image perception. In E. Samei & E. A. Krupinski (Eds.), The Handbook of Medical Image Perception and Techniques (2 ed., pp. 188-196). Cambridge: Cambridge University Press.
Wolfe, J. M., & Utochkin, I. S. (2019). What is a preattentive feature? Current Opinion in Psychology, 29, 19-26. doi:https://doi.org/10.1016/j.copsyc.2018.11.005
Wolfe, J. M. (2020). Visual Search: How do we find what we are looking for? Annual Review of Vision Science, 6, 539-562. doi:https://doi.org/10.1146/annurev-vision-091718-015048
Wolfe, J. M. (2023). Visual Search. In O. Braddick (Ed.), Oxford Research Encyclopedia of Psychology. New York: Oxford University Press.