
Emmanuel Ponsot

Laboratoire des Systèmes Perceptifs

Postdoctoral Researcher

29 rue d'Ulm

75005 Paris, France


Laboratory: LSP
Team: Vision
Selected publications
Ponsot, E., Burred, J., Belin, P. & Aucouturier, J. (2018). Cracking the social code of speech prosody using reverse correlation. Proceedings of the National Academy of Sciences of the United States of America, 115(15), 3972-3977. doi:10.1073/pnas.1716090115

Ponsot, E., Arias, P. & Aucouturier, J. (2018). Uncovering mental representations of smiled speech using reverse correlation. The Journal of the Acoustical Society of America, 143(1), EL19. doi:10.1121/1.5020989

Ponsot, E., Susini, P. & Meunier, S. (2017). Global loudness of rising- and falling-intensity tones: How temporal profile characteristics shape overall judgments. The Journal of the Acoustical Society of America, 142(1), 256. doi:10.1121/1.4991901

Deneux, T., Kempf, A., Daret, A., Ponsot, E. & Bathellier, B. (2016). Temporal asymmetries in auditory coding and perception reflect multi-layered nonlinearities. Nature Communications, 7, 12682. doi:10.1038/ncomms12682

Ponsot, E., Susini, P. & Meunier, S. (2015). A robust asymmetry in loudness between rising- and falling-intensity tones. Attention, Perception & Psychophysics, 77(3), 907-920. doi:10.3758/s13414-014-0824-y

RESEARCH INTERESTS

I am interested in understanding how the human brain processes complex sensory signals such as speech and visual scenes, at both sensory and cognitive levels. Using an interdisciplinary approach that combines signal-processing techniques, neurophysiology, psychophysics and computational modelling, I aim to provide a clear mechanistic account of various perceptual processes. This research is conducted both in the lab with healthy individuals and in clinical contexts, to better understand how different disorders (e.g. sensorineural hearing loss, stroke) affect the successive stages of sensory processing.


Example #1: How do we extract speech features from noise?

As we age, almost all of us come to complain of increased difficulty communicating in noisy environments. An important question is why some individuals, even without measurable loss in audibility, experience more difficulty than others. Speech-in-noise understanding relies on the ability of our auditory system to extract relevant spectro-temporal modulations from noise, but the mechanisms underlying this processing remain poorly characterized.

Figure: An illustration of the task we developed to probe the "perceptual filters" humans use to extract spectro-temporal modulation patterns from noise. Our results show how hearing loss affects this processing.

Inspired by studies in vision, we have developed a psychophysical procedure to estimate spectro-temporal modulation filtering behaviorally, using a reverse-correlation approach. We measure listeners' perceptual filters in a task requiring the detection of a specific target, i.e. how they filter out the other modulations present in the stimulus. On average, our results show that, compared to normal-hearing listeners (left), hearing-impaired listeners (right) exhibit an overall reduced filter amplitude for extracting the target. A closer look at individual patterns reveals varied behaviors that were not predictable from audiometric profiles. Using physiologically plausible auditory models, we seek to determine where and how these differences emerge along the auditory pathway. Overall, this approach should help us disentangle the various sources of supra-threshold auditory impairment and determine their role in speech-in-noise perception.
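
To make the logic of this reverse-correlation analysis concrete, here is a minimal sketch of an experiment of this general kind. Everything in it is an illustrative assumption rather than our actual stimuli or parameters: the ripple target, the grid size, the noise levels, and above all the template-matching "listener" standing in for a human observer.

```python
import numpy as np

rng = np.random.default_rng(0)

N_TRIALS = 2000
N_FREQ, N_TIME = 16, 32   # spectro-temporal grid: frequency channels x time frames

# Hypothetical detection target: a single spectro-temporal ripple
f = np.linspace(0, 1, N_FREQ)[:, None]
t = np.linspace(0, 1, N_TIME)[None, :]
target = np.sin(2 * np.pi * (2.0 * t + 1.0 * f))
TARGET_LEVEL = 0.05

# Unbiased criterion: expected template evidence when the target is present
criterion = TARGET_LEVEL * np.sum(target ** 2)

def simulated_listener(stimulus):
    """Template-matching observer: reports 'target seen' when the match
    between the stimulus and its internal template exceeds a noisy criterion."""
    evidence = np.sum(stimulus * target) + rng.normal(0.0, 10.0)
    return evidence > criterion

yes_noise, no_noise = [], []
for _ in range(N_TRIALS):
    noise = rng.normal(0.0, 1.0, (N_FREQ, N_TIME))   # random modulation noise
    if simulated_listener(TARGET_LEVEL * target + noise):
        yes_noise.append(noise)
    else:
        no_noise.append(noise)

# Classification image: mean noise on 'yes' trials minus mean noise on
# 'no' trials; it recovers the observer's perceptual filter (here, the template).
perceptual_filter = np.mean(yes_noise, axis=0) - np.mean(no_noise, axis=0)
print(perceptual_filter.shape)   # (16, 32)
```

The key design point is that the filter is estimated from the noise alone: averaging the decision-correlated part of thousands of random stimuli reveals which modulations the observer actually weighs when deciding whether the target is present.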

Example #2: How do we infer social and emotional meaning from speech prosody?

Beyond words, speech conveys a wealth of information about a speaker through its prosodic structure. Humans have developed a remarkable ability to infer others' states and attitudes from the temporal dynamics of the different dimensions of speech prosody (pitch, intensity, timbre, rhythm). However, we still lack a computational understanding of how high-level social or emotional impressions are built from these low-level dimensions. We recently developed a data-driven approach that combines voice-processing techniques (using purpose-built audio software) with psychophysical reverse-correlation methods to expose the mental representations, or 'prototypes', that underlie such inferences in speech.

Figure: An illustration of the technique we developed, combining voice-processing algorithms and psychophysical reverse correlation to derive the exact shape of a mental representation; here, the prototype of interrogative intonation in a single word. It can be deployed similarly to characterize the unknown representations that drive our social and emotional judgments.

In particular, we have investigated how intonation drives social trait judgments in speech. We demonstrated the existence of robust and shared mental representations of the trustworthiness and dominance of a speaker's voice. This approach offers a principled way to reverse-engineer the algorithms the brain uses to make high-level inferences from the acoustic characteristics of others' speech. It holds promise for future research seeking to understand why certain emotional or social judgments differ across cultures, and why these inferences may be impaired in some neurological disorders. We are currently running experiments in a clinical context to characterize prosody-processing deficits in patients after stroke. Our goal is to develop individualized rehabilitation strategies based on an understanding of how and where their processing is impaired.
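
The same reverse-correlation logic can be written down compactly for prosody. In the sketch below, random pitch-shift contours stand in for the manipulated voices, and a toy "final-rise" rule stands in for a participant choosing the more interrogative of two utterances; the segment count and shift magnitudes are arbitrary assumptions, not the values used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

N_TRIALS = 500
N_SEGMENTS = 6   # breakpoints of the pitch-shift contour across the utterance

def random_contour():
    """Random pitch-shift profile (in cents) applied to a base recording."""
    return rng.normal(0.0, 70.0, N_SEGMENTS)

def toy_listener(contour_a, contour_b):
    """Stand-in for a participant: picks the voice whose pitch rises more
    towards the end of the utterance (a plausible interrogative cue)."""
    rise_a = contour_a[-1] - contour_a[0]
    rise_b = contour_b[-1] - contour_b[0]
    return 0 if rise_a > rise_b else 1

chosen, rejected = [], []
for _ in range(N_TRIALS):
    a, b = random_contour(), random_contour()
    if toy_listener(a, b) == 0:
        chosen.append(a); rejected.append(b)
    else:
        chosen.append(b); rejected.append(a)

# First-order kernel: mean chosen contour minus mean rejected contour,
# i.e. the internal 'prototype' of interrogative intonation.
prototype = np.mean(chosen, axis=0) - np.mean(rejected, axis=0)
print(np.round(prototype, 1))   # negative at onset, positive on the final segment
```

With human listeners and actual pitch-shifted recordings, the same chosen-minus-rejected average yields the interrogative prototype shown in the figure above.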


SCIENCE COMMUNICATION

A video made by CNRS Images explaining the approach taken in our recent paper to uncover the social code of speech prosody.
