Emerging speech neuroprostheses may offer a way for people who are unable to speak due to paralysis or disease to communicate, but fast, high-performance decoding has not yet been demonstrated. Now, transformative new work by researchers at UCSF and UC Berkeley shows that more natural speech decoding is possible using the latest advances in artificial intelligence.
Led by UCSF neurosurgeon Edward Chang, the researchers have developed an implantable AI-powered device that, for the first time, translates brain signals into modulated speech and facial expressions. As a result, a woman who lost the ability to speak due to a stroke was able to speak and convey emotion using a talking digital avatar. The researchers describe their work in a study published today (Wednesday, Aug. 23) in the journal Nature.
