A Milestone for Tech and Neuroscience: AI Helps a Stroke Patient Regain Her Speech

At Ann Johnson’s wedding reception two decades ago, her gift for speaking was on full display. In a vibrant, spirited 15-minute toast, she shared humorous anecdotes and joked about how much she enjoyed having the spotlight. Just two years later, Mrs. Johnson, then a 30-year-old teacher, suffered a catastrophic stroke that left her paralyzed and unable to speak.

In a groundbreaking development, scientists have now made significant progress toward helping patients like Mrs. Johnson regain the ability to speak. In a remarkable fusion of neuroscience and artificial intelligence, electrodes implanted on the surface of her brain recorded signals as she silently attempted to form sentences, and algorithms decoded those signals into written and vocalized language, allowing an avatar on a computer screen to speak the words while displaying facial expressions.

The research, published in the journal Nature, marks the first time that spoken words and facial expressions have been synthesized directly from brain signals. Mrs. Johnson helped shape the avatar, choosing a face that resembled her own, and a recording of her wedding toast served as the basis for its voice.

Dr. Edward Chang, the team’s leader and chair of neurological surgery at the University of California, San Francisco, described the team’s mission as restoring the essence of who people are. Mrs. Johnson, now 48, expressed her gratitude and described the profound impact the technology has had on her sense of self.

The ultimate goal of the research is to help people who have lost the ability to speak because of conditions such as strokes, cerebral palsy, and amyotrophic lateral sclerosis. Mrs. Johnson’s implant currently requires a cable connection to a computer, but efforts are underway to develop wireless versions. Researchers envision a future in which people who have lost the ability to speak can hold real-time conversations through computerized avatars that accurately convey tone, emotion, and facial expression.

Dr. Parag Patil, a neurosurgeon and biomedical engineer at the University of Michigan, who reviewed the study before publication, praised the investigators for obtaining valuable information about various communication elements directly from the surface of the brain.

Mrs. Johnson’s journey exemplifies the field’s rapid progress. Just two years ago, the same team used a simpler implant and algorithm to enable a paralyzed man nicknamed “Pancho” to produce basic words as text on a computer. Mrs. Johnson’s implant has nearly double the number of electrodes, improving its ability to detect brain signals from the speech-related processes that involve the mouth, lips, jaw, tongue, and larynx. The researchers trained the artificial intelligence to recognize phonemes, the sound units from which any word can be formed.

David Moses, the project manager, compared these sound units to an “alphabet of speech sounds.” While Pancho’s system produced 15 to 18 words per minute, Mrs. Johnson’s implant achieved 78 words per minute with a much larger vocabulary. Typical conversational speech runs about 160 words per minute.
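The article does not detail how decoded phonemes become words, but the basic idea can be illustrated with a toy sketch. In the Python sketch below, the pronunciation lexicon, phoneme symbols, and words are all invented for illustration; the real system pairs a neural phoneme classifier with a predictive language model, not the greedy lookup shown here.

```python
# A minimal, hypothetical sketch of phoneme-based decoding: a classifier
# (not shown) emits phoneme labels from neural features, and a pronunciation
# lexicon maps phoneme sequences back to words. All entries are illustrative,
# not from the study.

PRONUNCIATIONS = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
    ("Y", "UW"): "you",
}

def phonemes_to_words(phoneme_stream, lexicon=PRONUNCIATIONS):
    """Greedily segment a stream of phoneme labels into known words."""
    words, start = [], 0
    while start < len(phoneme_stream):
        # Try the longest pronunciation first so "HH AH L OW" beats shorter matches.
        for end in range(len(phoneme_stream), start, -1):
            candidate = tuple(phoneme_stream[start:end])
            if candidate in lexicon:
                words.append(lexicon[candidate])
                start = end
                break
        else:
            start += 1  # skip an unrecognized phoneme

    return words

print(phonemes_to_words(["HH", "AH", "L", "OW", "Y", "UW"]))  # ['hello', 'you']
```

Because phonemes are a small, fixed alphabet, a decoder trained on them can in principle produce any word, which is what lets the vocabulary grow far beyond Pancho’s earlier word-by-word system.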

The team initially did not expect that an avatar or synthesized audio would be feasible, but promising early results encouraged them to tackle these harder problems. They developed an algorithm to convert brain activity into audio waveforms, producing synthesized speech. Working with a company specializing in facial animation, the researchers also equipped the avatar with data on muscle movements, so that Mrs. Johnson’s decoded brain signals could drive its facial expressions and mouth movements.
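The decoding algorithm itself is not described in the article. As a rough illustration only, the Python sketch below synthesizes an audio waveform from a sequence of per-frame parameters (pitch and loudness), the kind of intermediate representation a brain-to-speech decoder might predict; the frame size, sample rate, and pitch sweep are all invented for the example.

```python
import numpy as np

# A toy illustration, not the study's method: building an audio waveform
# from a sequence of per-frame (pitch, loudness) parameters.
SR = 16000   # sample rate in Hz (assumed for this example)
FRAME = 320  # 20 ms frames at 16 kHz

def synthesize(frames):
    """frames: list of (f0_hz, amplitude) pairs, one per 20 ms frame."""
    phase = 0.0
    out = []
    for f0, amp in frames:
        t = np.arange(FRAME) / SR
        out.append(amp * np.sin(2 * np.pi * f0 * t + phase))
        phase += 2 * np.pi * f0 * FRAME / SR  # keep phase continuous across frames
    return np.concatenate(out)

# A short rising pitch sweep stands in for decoded frame parameters.
wave = synthesize([(120 + 5 * i, 0.3) for i in range(50)])
print(f"{len(wave) / SR:.2f} seconds of audio")  # 1.00 seconds of audio
```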

Through the avatar, Mrs. Johnson was able to express sentiments such as “I think you are wonderful” and “What do you think of my artificial voice?” She described the emotional impact of hearing a voice similar to her own and engaged in conversations with her husband, William. Mrs. Johnson’s husband expressed his unwavering support for her, stating that it was essential for him to be there for her, just as she had always been there for him.

The field is advancing rapidly, with experts predicting that federally approved wireless versions could be available within the next ten years. Different approaches may suit different patients. Nature also published a separate study using electrodes implanted deeper in the brain, which detect activity at the level of individual neurons. That method offers greater precision but may be less stable, because the firing patterns of specific neurons shift over time.

Both studies used predictive language models to help guess the words in sentences, and the systems improve as they learn to recognize a participant’s neural activity and adapt to new language patterns. Neither approach is entirely accurate, however: with large vocabularies, about a quarter of individual words were decoded incorrectly. Even so, people on a crowdsourcing platform correctly interpreted most of the avatar’s facial expressions.
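For context, word-level decoding accuracy is conventionally reported as a word error rate: the minimum number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the intended one, divided by the intended sentence’s length. The Python sketch below computes it with standard dynamic programming; the example sentences are invented.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Edit distance over words via dynamic programming.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(ref)][len(hyp)] / len(ref)

# One wrong word out of four gives a 25% word error rate.
print(word_error_rate("what do you think", "what do you drink"))  # 0.25
```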

Experts emphasize that these systems are not capable of reading minds or thoughts. Rather, they rely on interpreting brain signals, similar to how baseball batters interpret the pitcher’s movements to predict pitches. The possibility of mind reading raises ethical and privacy concerns that need to be addressed in the future.

Mrs. Johnson first reached out to Dr. Chang in 2021, the day after her husband shared an article about Pancho, the paralyzed man the researchers had helped. Although she lives far from the team’s lab in San Francisco, her persistence and determination overcame the geographical barrier, and her husband adjusted his work schedule to support her participation in the study.

Mrs. Johnson communicated with me through a more rudimentary assistive system she uses at home. Though slow, allowing her to generate only 14 words per minute, it is faster than her alternative, a plastic letter board. Her participation in the multiyear study, supported by online fundraising and their community, entails traveling to California for weeks at a time while her home is outfitted with the technology needed for her experiments.

As Mr. Johnson succinctly put it, “If she could have done it for 10 hours a day, seven days a week, she would have.” Mrs. Johnson’s determination and perseverance have always been integral to her character, and she continues to defy the limitations imposed by her condition.
