Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, those who cannot produce speech rely upon technology that allows them to use eye gaze to produce synthesized speech one letter at a time. While this gives those who otherwise could not speak a voice, it is considerably slower than natural speech production.
In the study itself, cortical electrodes recorded activity from speech-related areas of the cortex while subjects read hundreds of sentences aloud. This neural activity was then decoded and used to generate intelligible synthesized speech.
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.