Summary

Speed processing in natural visual scenes.

Abstract:

Measuring the speed and direction of moving objects is an essential computational step for moving our eyes, hands or other body parts with respect to the environment. Whereas the encoding and decoding of direction information are now largely understood in various neuronal systems, how the human brain accurately represents speed information remains largely unknown. Speed-tuned neurons have been identified in several early cortical visual areas in monkeys, but how such speed tuning emerges is not yet understood. A working hypothesis is that speed-tuned neurons nonlinearly combine motion information extracted at different spatial and temporal scales, taking advantage of the spatiotemporal statistics of natural scenes. Such pooling of information must, however, be context dependent, varying with the spatial perceptual organization of the visual scene. Furthermore, the population code underlying perceived speed has not been elucidated either, so we are still far from understanding how speed information is decoded to drive and control motor responses or perceptual judgments.
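
One way to make this working hypothesis concrete (a standard Fourier-domain argument added here for illustration, not a claim specific to this proposal) is to note that an image translating rigidly at velocity v concentrates all of its spatiotemporal energy on a plane through the origin of frequency space:

    I(x, t) = I_0(x - vt) \implies \hat{I}(f_x, f_t) = \hat{I}_0(f_x)\,\delta(f_t + v f_x)

A single filter tuned to one (f_x, f_t) pair therefore constrains only the ratio f_t/f_x along its preferred orientation and confounds speed with spatial frequency; recovering v requires combining responses across multiple spatial and temporal scales.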

Recently, we have proposed that speed estimation is intrinsically a multi-scale, task-dependent problem (Simoncini et al., Nature Neuroscience 2012) and we have defined a new set of motion stimuli, constructed as random-phase dynamical textures that mimic the statistics of natural scenes (Sanz-Leon et al., Journal of Neurophysiology 2012). This approach has proven fruitful for investigating the nonlinear properties of motion integration.
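
A minimal sketch of how such stimuli can be synthesized (an illustrative reimplementation assuming NumPy, with parameter names of our own choosing, not the published code of Sanz-Leon et al.): shape the amplitude spectrum with a Gaussian envelope centered on a mean spatial frequency f0 and on a mean speed plane at speed V, attach random phases, and invert back to space-time.

    import numpy as np

    def motion_cloud(N=64, T=64, f0=0.125, B_f=0.05, V=1.0, B_V=0.5, seed=0):
        """Random-phase dynamic texture drifting along x at mean speed V."""
        rng = np.random.default_rng(seed)
        fx = np.fft.fftfreq(N)[:, None, None]   # spatial frequency along x
        fy = np.fft.fftfreq(N)[None, :, None]   # spatial frequency along y
        ft = np.fft.fftfreq(T)[None, None, :]   # temporal frequency
        f_r = np.sqrt(fx**2 + fy**2)            # radial spatial frequency
        # Gaussian envelope around the preferred spatial frequency f0 ...
        env = np.exp(-0.5 * ((f_r - f0) / B_f)**2)
        # ... and around the speed plane ft = -V * fx (motion along x)
        env = env * np.exp(-0.5 * ((ft + V * fx) / (B_V * f0))**2)
        env[0, 0, 0] = 0.0                      # remove the DC component
        phase = np.exp(2j * np.pi * rng.random((N, N, T)))  # random phases
        movie = np.fft.ifftn(env * phase).real  # back to space-time
        return movie / movie.std()              # unit-variance contrast

The bandwidth parameters (here B_f and B_V) control how tightly the stimulus energy clusters around a single scale and a single speed, which is the dimension one manipulates when asking how observers pool motion information across scales.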

The current proposal brings together psychophysicists, oculomotor scientists and modelers to investigate speed processing in humans. We aim to expand this framework in order to understand how tracking eye movements and motion perception can take advantage of multi-scale processing for estimating target speed. We will design sets of high-dimensional stimuli by extending our generative model. Using these natural-statistics stimuli, we will investigate how speed information is encoded by computing motion energy across different spatial and temporal filters. By analysing both perceptual and oculomotor responses, we will probe the nonlinear mechanisms underlying the integration of the outputs of multiple spatiotemporal filters and implement these processes in a refined version of our model. Furthermore, we will test our working hypothesis that, in natural scenes, such nonlinear integration provides precise and reliable motion estimates, leading to efficient motion-based behaviors. By comparing tracking responses with perception, we will also test a second critical hypothesis: that nonlinear speed computations are task-dependent. In particular, we will explore the extent to which the geometrical structure of visual scenes is decisive for perception beyond the motion-energy computation used for early sensorimotor transformation. Finally, we will investigate the role of contextual and extra-retinal, predictive information in building an efficient dynamic estimate of object speed for perception and action.
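
For concreteness, the motion-energy stage mentioned above can be sketched as follows (a minimal single-channel, single-location version of the classical Adelson-Bergen opponent-energy computation, assuming NumPy; the filter parameters are illustrative, not those of our model):

    import numpy as np

    def opponent_energy(movie, fx, ft, sigma_x=8.0, sigma_t=8.0):
        """Opponent motion energy of a (space, time) movie for a channel
        tuned to spatial frequency fx and temporal frequency ft."""
        X, T = movie.shape
        x = np.arange(X)[:, None] - X / 2
        t = np.arange(T)[None, :] - T / 2
        envelope = np.exp(-x**2 / (2 * sigma_x**2) - t**2 / (2 * sigma_t**2))
        def energy(sign):
            # Quadrature pair oriented in space-time; sign picks direction
            carrier = 2 * np.pi * (fx * x + sign * ft * t)
            even = np.sum(envelope * np.cos(carrier) * movie)
            odd = np.sum(envelope * np.sin(carrier) * movie)
            return even**2 + odd**2   # phase-invariant energy
        return energy(+1) - energy(-1)   # rightward minus leftward

Pooling such energies across a bank of channels tuned to different (fx, ft) pairs, and reading out which temporal-to-spatial frequency ratios dominate, yields a speed estimate; the nonlinear, task-dependent rules governing that pooling are what the proposal sets out to characterize behaviorally and to implement in the model.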