You look at a photograph of a hiker in an alpine landscape. The hiker, her dog, the trees in the background, and the houses of a village in the distance are vividly reconstructed by your brain from the shades of light and color on the paper. The subjects of the picture are so distinct and clear that it is hard to appreciate how difficult it was for your visual system to reconstruct the scene. A closer examination highlights some of the problems your brain has to overcome: the hiker in the foreground is taller and occupies a larger area of the image than the houses and the trees in the background, yet you perceive her as smaller than they are. The dog is partially hidden by one of its owner's legs, yet you perceive it as a single animal rather than two half-dogs. Theoretical models and psychological experiments have led researchers to think that, in these examples and in countless other everyday situations, your brain makes use of an internal model of the world, built over a lifetime of experience, to correctly interpret the image.
By analyzing mathematical formulations of the internal model, researchers in Jozsef Fiser’s lab in the Volen Center for Complex Systems at Brandeis and colleagues at Cambridge University (UK) deduced that, if the internal model works as theoreticians hypothesize, traces of its functioning should be visible in neural activity recorded in complete darkness. Intuitively, an internal model would use its knowledge of typical natural images to “fill in” noisy and incomplete parts of a visual scene. As the brightness of an image is reduced, the visual system relies increasingly on prior expectations, until most of the neural activity is dominated by the internal model. This intuition is compatible with previous observations of strong, coordinated activity in the visual system in the absence of visual stimulation, whose significance had remained unexplained.
In a paper published this week in Science, the authors analyzed neural activity in the primary visual cortex of ferrets watching natural scenes or artificial patterns, or simply sitting in darkness. They found that, as predicted by the theory, when the animals were in darkness the recorded patterns of neural activity closely resembled those recorded in response to natural visual scenes, but not those recorded in response to artificial stimuli. The fact that the similarity was specific to natural scenes indicates that the spontaneous activity reflected the model’s expectations about the environment, rather than some other, nonspecific effect. The authors repeated the measurements in animals at different stages of development and found that the match between neural activity in darkness and activity evoked by natural scenes was not present at birth, but developed gradually over the first four months of visual experience, as the internal model adapted to the statistics of the external world.
These results provide neural evidence for the internal models theorized by computational neuroscientists, and offer a glimpse of the computations performed by the visual areas of the brain.