Visual Attention Lab

Publications | Presentations | Posters

  • Click on the titles to view and download the posters.

VSS 2023

  • "The FORAGEKID Game: Using Hybrid Foraging to Study Executive Functions and Search Strategies During Development"

  • Beatriz Gil-Gómez de Liaño1 (bgil.gomezdelianno@uam.es), Jeremy M. Wolfe2 ; 1Universidad Autónoma de Madrid, 2BWH-Harvard Medical School
  • Searching for friends in the park, finding specific Lego blocks for a building project, or looking for recipe ingredients in the fridge: each of these is a "hybrid search" typical of everyday life. Hybrid search is searching for instances of multiple targets held in memory. Hybrid Foraging (HF) is a continuous version in which observers search for multiple exemplars of those multiple target types. HF draws on a wide array of cognitive functions beyond those studied in classic search and can be used as a "one-stop shop" to study those functions within a single task as they develop and interact over the lifespan. We study cognitive development using our FORAGEKID-HF video game. Observers search through diverse moving real-world toys or simpler colored squares and circles. They are asked to collect targets from a memorized set as quickly as possible while not clicking on distractors. We have tested large samples of children, adolescents, and young adults (4-25 years old) running different versions of FORAGEKID. Foraging rate data can be used to assess the development of selective attention under different memory target-load conditions (here, 2 versus 7 targets). Cognitive flexibility and search strategies can be measured by analyzing switch costs when observers change from collecting one target type to collecting another. The organization of search can be studied by examining target-search paths using different measures, e.g., best-r, inter-target distances, etc. Finally, decision-making processes are illustrated by quitting rules: when do observers choose to move from one screen to a fresh screen? Changes in "travel-costs" (time to move from one screen to the next) impact quitting rules differentially across the lifespan. Here, we show data supporting FORAGEKID as a serious but enjoyable game that can effectively assess and potentially train a range of attentional and executive functions over the lifespan.
  • Acknowledgements: European Union’s Horizon 2020, Marie Sklodowska-Curie Action FORAGEKID 793268 & Ministerio de Ciencia e Innovación de España: PID2021-122621OB-I00.
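The switch-cost measure described in the abstract above can be sketched in a few lines: compare inter-collection times when observers switch target types against times when they repeat the same type. This is a minimal sketch; the click-log format below is hypothetical, not the actual FORAGEKID data format.

```python
# Hedged sketch: computing a switch cost from a hybrid-foraging click log.
# The (time_sec, target_type) record format is a hypothetical illustration.

def switch_cost(clicks):
    """clicks: list of (time_sec, target_type) for correct collections,
    in temporal order. Returns the mean inter-click interval following a
    type switch minus the mean interval following a same-type repeat
    (seconds), or None if either kind of transition is absent."""
    switch, repeat = [], []
    for (t0, k0), (t1, k1) in zip(clicks, clicks[1:]):
        (switch if k1 != k0 else repeat).append(t1 - t0)
    if not switch or not repeat:
        return None
    return sum(switch) / len(switch) - sum(repeat) / len(repeat)

# Toy run: repeats take ~0.5 s, switches ~0.9 s, so the cost is ~0.4 s.
log = [(0.0, "A"), (0.5, "A"), (1.0, "A"), (1.9, "B"), (2.4, "B"), (3.3, "A")]
print(round(switch_cost(log), 2))  # prints 0.4
```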
  • "Research on re-search: Foraging in the same patch twice"

  • Injae Hong1 (ihong1@bwh.harvard.edu), Jeremy M. Wolfe1, 2 ; 1Brigham and Women's Hospital, 2Harvard Medical School
  • When humans forage for multiple targets in a succession of ‘patches,’ the optimal strategy is to leave the patch when the instantaneous rate of return falls below the average rate of return (Marginal Value Theorem: Charnov, 1976). Human behavior has been shown to be, on average, near optimal in basic foraging tasks. Suppose, however, that foragers are allowed to return to previously foraged patches. What strategy would the foragers take when they revisit patches that they left previously, either compulsorily or voluntarily? Our computer-screen patches contained “ripe” and “unripe” berries, each defined by overlapping color distributions (d’ = 2.5). Observers attempted to collect ripe berries as fast as possible. One group of observers was forced to leave each patch after 10 seconds and then brought back to forage those patches for an additional 5 minutes. A second group foraged and moved to new patches when they wished to, before being brought back to pick the “leftovers” for 5 minutes. A control group foraged at will with no revisiting. The observers who were forced to leave the patches behaved like control observers, continuing where they left off when brought back to the patch and ending at about the same rate. Observers who had already voluntarily left a patch did continue to pick when brought back to the patch. However, the patches having been depleted, that picking was less productive. There appeared to be a small jump in the rate of foraging when these observers returned to their patches. It would be interesting to see if those observers would have bothered to pick on the second visit if they were not required to do so.
  • Acknowledgements: NSF 2146617
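The Charnov leave-rule invoked above can be made concrete with a small numerical sketch: under diminishing within-patch returns, the optimal residence time is where the instantaneous rate equals the long-run average rate including travel time. The exponential depletion curve and all parameter values below are illustrative assumptions, not fits to the berry-picking data.

```python
# Hedged sketch of the Marginal Value Theorem leave-rule. Assume (purely for
# illustration) exponential within-patch depletion, so the instantaneous
# return rate is g'(t) = R0 * exp(-t / tau), cumulative gain is
# g(T) = R0 * tau * (1 - exp(-T / tau)), and the forager should leave at the
# T where g'(T) equals the overall average rate g(T) / (T + travel).
import math

def leave_time(R0=10.0, tau=5.0, travel=2.0):
    """Find the optimal residence time T by bisection on
    f(T) = instantaneous rate - average rate, which crosses zero once."""
    def f(T):
        inst = R0 * math.exp(-T / tau)
        avg = R0 * tau * (1 - math.exp(-T / tau)) / (T + travel)
        return inst - avg
    lo, hi = 1e-6, 100.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return lo

# The classic MVT prediction: longer travel times between patches
# should produce longer patch-residence times.
print(leave_time(travel=2.0) < leave_time(travel=8.0))  # prints True
```

This is the sense in which changing "travel-costs" is predicted to shift quitting rules.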
  • "How Blind is Inattentional Blindness in Mixed Hybrid search?"

  • Ava Mitra1 (amitra@bwh.harvard.edu), Jeremy M. Wolfe1, 2 ; 1Brigham and Women's Hospital, 2Harvard Medical School
  • In day-to-day visual search tasks, we may search for instances of multiple types of targets (e.g., searching for specific road signs while also scanning for pedestrians, animals, and traffic cones). In the lab, the “mixed hybrid search task” is a model system developed to study such tasks, where you are looking for general categories of items (e.g., things you don’t want to hit with your car) alongside specific items (e.g., the sign for your exit). Previous hybrid visual search studies have shown that observers are much more likely to miss more general “categorical” targets than specific targets, even though it is quite clear that categorical and specific items are equally likely to be attended in this paradigm. If an item is attended but missed, do observers have any access to the information that may have been accumulating about that target? Twelve participants searched arrays for two specific items (e.g., this shoe and this table) while also searching for unambiguous instances of two categorical target types (e.g., ANY animal and ANY car). In order to look for the existence of sub-threshold information about missed targets, we borrowed methods from the inattentional blindness literature. We asked two 2AFC questions after every miss error and after 5% of target-absent trials. Question 1: Do you think you missed an item? Question 2: If you did miss something, which of these two items was it? On trials where participants asserted that they had NOT missed an item (“No” to question one), participants selected the correct item ~63% of the time against a 50% chance level (p<0.018). Interestingly, this ability to identify the missed target was only seen following missed categorical targets, not missed specific targets. Knowledge about the target’s identity can linger after that target is missed.
  • Acknowledgements: NSF grant 2146617, NIH-NEI grant EY017001, NIH-NCI grant CA207490
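The ~63% versus 50% chance comparison above is the kind of claim a one-sided exact binomial test supports; a stdlib sketch of that computation follows. The abstract does not report the number of 2AFC responses, so the trial counts used in the example call are made up for illustration.

```python
# Hedged sketch: one-sided exact binomial tail, P(X >= k) for
# X ~ Binomial(n, p), as used to test 2AFC accuracy against 50% chance.
from math import comb

def binom_tail(k, n, p=0.5):
    """Exact one-sided probability of k or more successes in n trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative only: 44 correct of 70 responses is ~63% correct; the
# actual trial counts in the study are not given in the abstract.
print(binom_tail(44, 70))  # prints a one-sided p-value below 0.05
```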
  • "Image memorability modulates image recognition, but not image localization in space and time"

  • Nathan Trinkl1 (ntrinkl@bwh.harvard.edu), Jeremy M. Wolfe1, 2 ; 1Brigham and Women's Hospital, 2Harvard Medical School
  • Acknowledgements: National Science Foundation (NSF), Grant #1848783
  • "Modestly related memories for when and where an object was seen in a Massive Memory paradigm."

  • Jeremy Wolfe1, 2 (jwolfe@bwh.harvard.edu), Claire Wang3, Nathan Trinkl1, Wanyi Lyu4 ; 1Brigham and Women's Hospital, 2Harvard Medical School, 3Phillips Academy, Andover, MA, 4York University, Toronto
  • We know that observers can typically discriminate old images from new ones with over 80% accuracy even after seeing hundreds of objects for just 2-3 seconds each (“Massive Memory”). What do they know about WHERE and WHEN they saw each object? From previous work, we know that observers can remember the locations of 50-100 out of 300 items (Spatial Massive Memory – SMM). In a different study, observers could mark temporal locations within 10% of the actual time of the item's original appearance (Temporal Massive Memory - TMM). Are SMM and TMM related? In new experiments, 64 observers saw 50 items, each sequentially presented in random locations in a 7x7 grid. They subsequently saw 100 items (50 old). Four sets of instructions were used: (1) Mere Identity instruction asked 16 observers just to remember the items. (2) Spatial instruction asked 16 observers to also remember item locations. (3) Temporal instruction asked 14 observers to remember when items appeared. (4) Full instruction (13 observers) combined Spatial and Temporal instructions. At test, observers in all conditions were told to click on the original location of old items and to indicate when they saw it on a time bar. ~12% of observers appeared to guess on the spatial task and ~50%(!) guessed on the timing task. Interestingly, just 6% guessed on both, exactly as would be predicted if the choice to guess was independent for space and time. Overall, space and time scores were strongly correlated for Full Instructions (r-sq=.64, p=0.001), Temporal (r-sq=.31, p=0.04), and marginally correlated for Spatial (r-sq=.20, p=0.08). The Mere Identity correlation was insignificant (r-sq=.03, p=0.40). Effects of instruction on performance were generally insignificant. Observers can have quite good memory for when and where they saw an object. Those memories seem to be modestly correlated with each other.
  • Acknowledgements: This work was supported by NSF grant 1848783 to JMW
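The independence claim in the abstract above (~12% guessed on space, ~50% on time, ~6% on both) is a direct product rule: if the decision to guess is independent across the two tasks, the joint probability is the product of the marginals. A one-line check, using the approximate percentages reported in the abstract:

```python
# If guessing is independent across the spatial and timing tasks, then
# P(guess both) = P(guess space) * P(guess time). Values are the approximate
# proportions reported in the abstract.
p_space, p_time = 0.12, 0.50
p_both_predicted = p_space * p_time
print(p_both_predicted)  # prints 0.06, matching the ~6% observed
```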