Tracking statistical regularities to form more efficient memory representations

Tim Brady

Computational Visual Cognition Lab, Brain & Cognitive Science, MIT

A central task of the visual system is to take the information available in the retinal image and compress it to form a more efficient representation of the world. Such compression requires sensitivity to the statistical distribution from which stimuli are drawn, in order to detect redundancy and eliminate it.

In the first part of the talk I will discuss work on statistical learning mechanisms (e.g., Saffran et al. 1996) that suggests how people might track such distributions of stimuli in the real world. I'll present several experiments that use sequences of natural images to demonstrate that such statistical learning mechanisms operate at multiple levels of abstraction, including the level of semantic categories. I'll discuss how learning at this abstract level allows us to minimize redundancy by not relearning the same regularities over and over again.

In the second part of the talk I will suggest another potential benefit of such statistical learning mechanisms: the ability to remember more items in visual short-term memory (VSTM). I'll present several experiments showing that observers can take advantage of relationships between colors in VSTM displays, eliminating redundant information to form more efficient representations of the displays. I'll then present a model of these data based on Huffman coding, a compression algorithm, to demonstrate that quantifying VSTM capacity in terms of the bits of information remembered is more useful than the most common metric, the number of objects remembered.
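To make the compression idea concrete, here is a minimal sketch of Huffman coding applied to a skewed distribution of display colors. The color names and frequencies are illustrative assumptions, not data from the experiments; the point is only that when some colors (or color pairings) recur, a variable-length code needs fewer bits per item than a fixed-length one.

```python
import heapq

def huffman_code(freqs):
    """Build a prefix-free Huffman code (symbol -> bitstring) from frequencies."""
    # Heap entries are (weight, tiebreak, subtree); the tiebreak index keeps
    # comparisons well-defined when weights are equal.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # repeatedly merge the two
        w2, _, right = heapq.heappop(heap)   # least-frequent subtrees
        heapq.heappush(heap, (w1 + w2, tiebreak, (left, right)))
        tiebreak += 1
    codes = {}
    def assign(node, prefix):
        if isinstance(node, tuple):
            assign(node[0], prefix + "0")
            assign(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"      # single-symbol edge case
    assign(heap[0][2], "")
    return codes

# Hypothetical color distribution for a memory display: "red" appears
# far more often than the others, so the distribution is compressible.
freqs = {"red": 8, "blue": 4, "green": 2, "yellow": 2}
codes = huffman_code(freqs)
total = sum(freqs.values())
avg_bits = sum(freqs[s] * len(codes[s]) / total for s in freqs)
fixed_bits = 2  # a fixed-length code for 4 colors needs log2(4) = 2 bits each
```

With these frequencies the average code length comes to 1.75 bits per color versus 2 bits for a fixed-length code, illustrating how a memory system tuned to the stimulus statistics could store more items in the same number of bits.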