
University of California BCI study enables paralyzed woman to ‘speak’ through a digital avatar | Engadget


Dr. Mario did not prepare us for this. In a pioneering effort, researchers from UC San Francisco and UC Berkeley, in partnership with Edinburgh-based Speech Graphics, have devised a groundbreaking communication system that allows a woman, paralyzed by a stroke, to speak freely through a digital avatar she controls with a brain-computer interface.

Brain-computer interfaces (BCIs) are devices that monitor the analog signals produced by your gray matter and convert them into the digital signals that computers understand, like the analog-to-digital converter in a mixing soundboard, but one that fits inside your skull. For this study, researchers led by Dr. Edward Chang, chair of neurological surgery at UCSF, first implanted a 253-pin electrode array into the speech center of the patient's brain. Those probes monitored and captured the electrical signals that would otherwise have driven the muscles in her jaw, lips and tongue, and instead transmitted them through a cabled port in her skull to a bank of processors. That computing stack housed a machine-learning model which, over the course of a few weeks' training, came to recognize the patient's electrical signal patterns for more than 1,000 words.
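To make the idea concrete, here's a toy Python sketch of what such a decoding pipeline looks like in spirit. The 253-channel array and the 1,000-plus-word vocabulary come from the study; the signal model, the features and the nearest-prototype classifier below are invented stand-ins, not the researchers' actual decoder.

```python
import numpy as np

# Toy sketch of the decoding pipeline described above. The 253-channel
# array comes from the article; everything else here is a hypothetical
# simplification for illustration.

N_CHANNELS = 253          # electrodes in the implanted array
WINDOW = 200              # samples per decoding window (hypothetical)
VOCAB = ["hello", "water", "thank", "you", "help"]  # stand-in vocabulary

rng = np.random.default_rng(0)

# Pretend each word produces a characteristic mean activity pattern
# across the electrode array (a gross simplification of cortical data).
prototypes = {w: rng.normal(size=N_CHANNELS) for w in VOCAB}

def record_window(word: str) -> np.ndarray:
    """Simulate one digitized window of neural data for a word."""
    noise = rng.normal(scale=2.0, size=(N_CHANNELS, WINDOW))
    return prototypes[word][:, None] + noise

def decode(window: np.ndarray) -> str:
    """Nearest-prototype classification on per-channel mean activity."""
    features = window.mean(axis=1)
    return min(VOCAB, key=lambda w: np.linalg.norm(features - prototypes[w]))

print(decode(record_window("water")))  # -> "water"
```

The real system trains a neural network on weeks of recorded attempts rather than averaging toward fixed prototypes, but the shape of the problem, many noisy channels in, one word out, is the same.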

But that’s only the first half of the trick. Through that AI interface, the patient is now able to write out her responses, much in the same way Synchron’s system works for folks suffering from locked-in syndrome. But she can also speak, in a sense, using a synthesized voice trained on recordings of her natural voice from before she was paralyzed.
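In outline, that second stage takes the decoded words and feeds them to a text-to-speech model fitted on the patient's pre-injury recordings. The sketch below is purely hypothetical: the `PersonalVoice` class and its placeholder tone-per-word output stand in for a real neural voice model, which is far more involved.

```python
import numpy as np

# Hypothetical sketch: decoded words stream into a TTS model that was
# fitted on recordings of the patient's own voice. PersonalVoice is a
# stand-in; it emits one short tone per word instead of real speech.

SAMPLE_RATE = 16_000

class PersonalVoice:
    def __init__(self, reference_recordings: list[str]):
        # A real system would learn a voice embedding from these clips;
        # here we just fake a base pitch so the example runs end to end.
        self.base_pitch = 110.0 + 10.0 * len(reference_recordings)

    def synthesize(self, word: str) -> np.ndarray:
        """Return a placeholder waveform (one 0.3s tone per word)."""
        t = np.linspace(0, 0.3, int(0.3 * SAMPLE_RATE), endpoint=False)
        pitch = self.base_pitch + 5.0 * (hash(word) % 20)
        return 0.1 * np.sin(2 * np.pi * pitch * t)

voice = PersonalVoice(reference_recordings=["clip1.wav", "clip2.wav"])
audio = np.concatenate([voice.synthesize(w) for w in "thank you".split()])
print(f"{audio.size / SAMPLE_RATE:.1f}s of synthesized audio")
```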

What’s more, the researchers teamed up with Speech Graphics, the company behind the photorealistic facial animation technology in Halo Infinite and The Last of Us Part II, to create the patient’s avatar. SG’s tech “reverse engineers” the musculoskeletal movements a face would need to make based on analysis of the audio input, then feeds that data in real time to a game engine, where it is rendered as a lag-free avatar. And since the mental signals from the patient were mapped directly to the avatar, she could express emotion and communicate nonverbally as well.
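Here's a rough sense of that audio-to-animation loop, again as an invented sketch rather than Speech Graphics' actual algorithm: chop the audio into frame-sized windows, derive facial parameters from each window, and hand them to the engine every frame. Mapping raw audio energy to a single "jaw open" blendshape, as below, is the crudest possible version of what their system does with full musculoskeletal inference.

```python
import numpy as np

# Illustrative audio-driven facial animation loop. This single-blendshape
# energy mapping is invented for illustration; the real system infers
# full facial muscle movement from the audio.

SAMPLE_RATE = 16_000
FPS = 60
SAMPLES_PER_FRAME = SAMPLE_RATE // FPS

def jaw_open(frame: np.ndarray) -> float:
    """Map short-time audio energy to a 0..1 blendshape weight."""
    rms = np.sqrt(np.mean(frame ** 2))
    return float(min(1.0, rms * 10.0))

def send_to_engine(frame_index: int, weight: float) -> None:
    """Stub for the real-time link to the rendering engine."""
    print(f"frame {frame_index}: jaw_open={weight:.2f}")

# Demo with a synthetic 0.1-second audio snippet.
t = np.arange(int(0.1 * SAMPLE_RATE)) / SAMPLE_RATE
audio = 0.2 * np.sin(2 * np.pi * 220 * t)
for i in range(len(audio) // SAMPLES_PER_FRAME):
    frame = audio[i * SAMPLES_PER_FRAME:(i + 1) * SAMPLES_PER_FRAME]
    send_to_engine(i, jaw_open(frame))
```

Running the analysis per frame rather than per utterance is what keeps the avatar lag-free: the engine never waits for a full sentence of audio before moving the face.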

“Creating a digital avatar that can speak, emote and articulate in real-time, connected directly to the subject’s brain, shows the potential for AI-driven faces well beyond video games,” Michael Berger, CTO and co-founder of Speech Graphics, said in a press statement Wednesday. “Restoring voice alone is impressive, but facial communication is so intrinsic to being human, and it restores a sense of embodiment and control to the patient who has lost that.”

BCI technology was pioneered in the early 1970s and has been developing slowly in the intervening decades. Exponential advancements in processing and computing systems have recently helped reinvigorate the field, with a handful of well-funded startups currently vying to be first through the FDA’s regulatory device approval process. Brooklyn-based Synchron made headlines last year when it became the first company to successfully implant a BCI in a human patient. Elon Musk’s Neuralink entered restricted FDA trials earlier this year, after the company was found to have killed scores of porcine test subjects in earlier testing rounds.

