
Alain de Cheveigné

Laboratoire des Systèmes Perceptifs

Researcher
Position: CNRS Research Director (Directeur de recherche)

29 rue d'Ulm

75005 Paris, France

 

Laboratory: LSP
Team: Audition
Office: 2nd floor, room 204
Tel: +33 (0)1 44 32 26 72
Selected publications
Article in an international journal

Goodman, D., Winter, I., Léger, A., de Cheveigné, A. & Lorenzi, C. (2018). Modelling firing regularity in the ventral cochlear nucleus: Mechanisms, and effects of stimulus level and synaptopathy. Hearing Research, 358, 98-100. doi:10.1016/j.heares.2017.09.010

Article in an international journal

de Cheveigné, A. & Arzounian, D. (2018). Robust detrending, rereferencing, outlier detection, and inpainting for multichannel data. NeuroImage, In press. doi:10.1016/j.neuroimage.2018.01.035

Article in an international journal

de Cheveigné, A., Wong, D., Di Liberto, G., Hjortkjær, J., Slaney, M. & Lalor, E. (2018). Decoding the auditory brain with canonical component analysis. NeuroImage, 172, 206-216. doi:10.1016/j.neuroimage.2018.01.033

COCOHA project

The COCOHA project revolves around a need, an opportunity, and a challenge. Millions of people struggle to communicate in noisy environments, particularly the elderly: 7% of the European population is classified as hearing impaired. Hearing aids can deal effectively with a simple loss of sensitivity, but they do not restore the ability of a healthy pair of young ears to pick out a weak voice among many, an ability that is needed for effective social communication. That is the need.

The opportunity is that decisive technological progress has been made in acoustic scene analysis: arrays of microphones with beamforming algorithms, or distributed networks of handheld devices such as smartphones, can be recruited to vastly improve the signal-to-noise ratio of weak sound sources. Some of these techniques have been around for a while and are even integrated into commercially available hearing aids. However, their uptake is limited for one very simple reason: there is no easy way to steer the device, no way to tell it to direct its processing to the one source among many that the user wishes to attend to.

The COCOHA project proposes to use brain signals (EEG) to help steer the acoustic scene-analysis hardware, in effect extending the efferent neural pathways that control all stages of auditory processing from cortex down to the cochlea to govern the external device as well. To succeed we must overcome major technical hurdles, drawing on methods from acoustic signal processing and on machine-learning techniques borrowed from the field of brain-computer interfaces. Along the way we will probe interesting scientific problems related to attention, the electrophysiological correlates of sensory input and brain state, and the structure of sound and brain signals. This is the challenge.

Read more: https://cocoha.org
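
To give a rough feel for the EEG-steering idea described above, here is a minimal Python sketch of attention decoding: a ridge-regression "backward model" reconstructs a speech envelope from multichannel EEG, and the competing talker whose envelope best correlates with the reconstruction is taken as the attended source that a beamformer would then enhance. Everything here (synthetic data, lag count, ridge value, sampling rate) is an illustrative assumption, not the COCOHA project's actual pipeline or decoding method.

    # Illustrative sketch only: decode which of two talkers a listener attends
    # from EEG, using a linear ridge-regression backward model. Synthetic data;
    # all parameter values are arbitrary assumptions.
    import numpy as np

    def lag_matrix(eeg, n_lags):
        """Stack time-lagged copies of each EEG channel (samples x channels*lags)."""
        n_samples, n_channels = eeg.shape
        lagged = np.zeros((n_samples, n_channels * n_lags))
        for lag in range(n_lags):
            lagged[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag, :]
        return lagged

    def train_backward_model(eeg, attended_envelope, n_lags=16, ridge=1e3):
        """Fit a linear map from lagged EEG to the attended speech envelope."""
        X = lag_matrix(eeg, n_lags)
        return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ attended_envelope)

    def decode_attended(eeg, candidate_envelopes, w, n_lags=16):
        """Reconstruct the envelope from EEG and pick the best-correlated candidate."""
        reconstruction = lag_matrix(eeg, n_lags) @ w
        correlations = [np.corrcoef(reconstruction, env)[0, 1] for env in candidate_envelopes]
        return int(np.argmax(correlations)), correlations

    # Toy demonstration: 64-channel EEG at 64 Hz, one minute, two competing talkers.
    rng = np.random.default_rng(0)
    n_samples, n_channels = 60 * 64, 64
    smooth = np.ones(16) / 16
    env_a = np.convolve(rng.standard_normal(n_samples) ** 2, smooth, mode="same")
    env_b = np.convolve(rng.standard_normal(n_samples) ** 2, smooth, mode="same")
    # Synthetic EEG that weakly tracks talker A (the "attended" source) plus noise.
    eeg = 0.2 * env_a[:, None] + rng.standard_normal((n_samples, n_channels))

    w = train_backward_model(eeg, env_a)   # in practice the model is evaluated on held-out data
    index, correlations = decode_attended(eeg, [env_a, env_b], w)
    print("Attended talker:", "A" if index == 0 else "B", correlations)

In a real system the decoded index would be passed to the acoustic front end (e.g. to select a beamformer output), and the decoder would be trained and tested on separate data.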