ENS, room Ribot, 29 rue d'Ulm, 75005 Paris
A bird chirping, a glass breaking, an ambulance passing by. Listening to sounds helps us recognize events and objects, even when they are out of sight, in the dark, or behind a wall, for example. In this talk, I will discuss how the human brain transforms acoustic waveforms into meaningful representations of their sources, attempting to link theories, models, and data from cognitive psychology, neuroscience, and artificial intelligence research. I will then describe a neuroanatomically grounded functional model of real-world sound recognition, as it emerges from analyses of behavioral and high-resolution functional MRI data informed by neuro-computational models of auditory processing.