In big breakthrough, AI converts brain waves to speech, helps paralysed woman speak again
A remarkable AI breakthrough: a newly published study describes how a woman left paralysed by a severe stroke was able to speak again through a digital avatar.
When generative artificial intelligence (gen AI) first became mainstream, many industries wondered how this transformative technology could create new breakthroughs and help humanity. Recently, we saw one of its most impactful examples. A newly published study reveals that a woman who was paralysed by a stroke has been helped by AI to regain a semblance of normalcy in her life. The woman had lost her ability to speak, but researchers were able to connect sensors to her head that read her brain waves and converted them into speech, letting her speak again.
AI converts brain waves to speech
The study, published in the journal Nature, highlights how a brain-computer interface (BCI) was used to enable the patient to speak again. A BCI is a system that connects the brain to a machine: the brain's electrical signals are digitized, passed through complex decoding algorithms, and presented on a screen in audio-visual form.
The lead researcher, Dr. Edward Chang, the head of neurological surgery at UCSF, placed an array of 253 electrodes on the surface of the patient's brain, over her speech center. The implants then captured the electrical signals triggered as the woman tried to speak and answer basic questions posed by one of the researchers. Every intended movement of the jaw, the tongue, and the lips was captured by the machine.
This data was then fed into an AI model, which spent a few weeks learning to make sense of it. After the training period, the study claims, the AI became proficient at recognizing more than 1,000 words from their unique brain-signal patterns. With the system ready, the woman only had to attempt to speak, and the AI would display what she wanted to say on the monitor. The research team also created an AI-powered synthesized voice for her, trained on recordings of her voice from before the stroke.
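The core idea behind that decoding step can be caricatured as pattern matching: each word the patient attempts to say produces a characteristic signal pattern, and the system picks the known word whose stored pattern is closest to the incoming one. The sketch below is purely illustrative, with made-up numbers and a simple nearest-centroid rule; the study's actual decoder was a neural network trained on real cortical recordings.

```python
# Toy illustration of signal-to-word decoding (NOT the study's method).
# Each "signal" is a short feature vector; training averages the examples
# for each word, and decoding picks the closest average.
import math

def centroid(vectors):
    """Element-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(labeled_signals):
    """labeled_signals: {word: [signals recorded while attempting that word]}"""
    return {word: centroid(sigs) for word, sigs in labeled_signals.items()}

def decode(model, signal):
    """Return the word whose stored pattern is closest to the new signal."""
    return min(model, key=lambda word: distance(model[word], signal))

# Synthetic "training" data for two words
training = {
    "hello": [[0.9, 0.1, 0.2], [1.0, 0.2, 0.1]],
    "water": [[0.1, 0.8, 0.9], [0.2, 0.9, 1.0]],
}
model = train(training)
print(decode(model, [0.95, 0.15, 0.15]))  # → hello
```

In reality the signals are far noisier and higher-dimensional, which is why the researchers needed weeks of training data and a far more powerful model, but the principle of matching a new pattern against learned ones is the same.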
“What's quite exciting is that just from the surface of the brain, the investigators were able to get out pretty good information about these different features of communication,” Dr. Parag Patil, a neurosurgeon who reviewed the study, told The New York Times.
The scope of the technology
BCIs have existed since the 1970s, and we have long known how to capture brain signals, which show unique patterns for even the minutest actions. However, we never had a reliable way to analyze those signals in real time or reproduce them, and so turn them into an impactful device.
This study highlights that with generative AI, it is possible not only to understand and analyze brain waves in real time but also to reproduce them as speech, making what once looked like medical miracles possible.
The question is: just how far can we go with this technology? Could it work for people who were born without the ability to speak? Could we build a system that sends audio-visual cues back to the brain for people who cannot see or hear? Could it read the brain waves of people in a coma and help medical professionals understand what is going on in their heads? The questions are many, but this technology has shown that, with further AI advancement, all of it might one day be possible.