NASA engineers are developing technology that picks up and translates throat signals into words before they’re even spoken.

Nancy Pinsker ticks off the names of five blue items, naming the sky last. She’s better at this than she was five years ago, when a stroke put a stranglehold on her vocal muscles and altered her speech. Since then, she’s been chipping away at reclaiming the capacity for normal conversation.

“I feel if I don’t come here and take lessons, I’ll lose my voice,” says the retired bookkeeper, now in her seventies. “Even now, it’s not perfect.”

For people like Pinsker who find it hard to engage in conversation, a host of new technology awaits. “There’s just been an explosion,” says Stephen Cavallo, a speech-language pathologist and associate professor at Lehman College, where Pinsker frequents the Speech and Hearing Center.

Now NASA researchers are taking a leap toward deciphering speech that is never voiced at all. Neuroengineer Chuck Jorgensen told Discover Magazine that he’s bypassing the body’s normal speech apparatus, delivering words via machine using subvocal speech. “When you’re reading material…sometimes you find that your tongue or your lips are quietly moving but you’re not making an audible sound,” he explains. “And it’s doing that because there’s this electronic signal that’s being sent to produce that speech but you’re intercepting it so it doesn’t really say it out loud. That’s subvocal speech.”

In a lab at NASA’s Ames Research Center, electrodes similar to those used in a doctor’s office cling below Jorgensen’s chin and flank his Adam’s apple, picking up the electrical signals the body sends to the vocal cords. Jorgensen amplifies the signals and uses neural network software to decipher word patterns.
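
The pipeline Jorgensen describes has three steps: record the muscle signals, amplify and summarize them, and let a neural network match the result to known word patterns. The Python sketch below is a toy illustration of that idea, not NASA’s system; it substitutes synthetic signals for the electrode recordings, and the vocabulary, frequencies, and network sizes are all invented placeholders.

```python
# Toy sketch of the record -> summarize -> classify pipeline.
# All signals, words, and parameters here are invented placeholders.
import numpy as np

rng = np.random.default_rng(0)

WORDS = ["stop", "go", "left"]   # hypothetical three-word vocabulary
FS = 1000                        # assumed sample rate, Hz
N = 256                          # samples per electrode window

def synth_signal(word_idx):
    """Fake a muscle-activity burst whose dominant frequency marks the word."""
    t = np.arange(N) / FS
    freq = 40.0 + 80.0 * word_idx                 # invented per-word signature
    burst = np.sin(2 * np.pi * freq * t) * np.hanning(N)
    return burst + 0.3 * rng.standard_normal(N)   # add recording noise

def features(sig):
    """The 'amplify and summarize' step: log energy in 8 frequency bands."""
    spectrum = np.abs(np.fft.rfft(sig)) ** 2
    return np.log1p([band.sum() for band in np.array_split(spectrum, 8)])

# Labeled training set: 200 noisy examples per word, standardized.
X = np.array([features(synth_signal(w))
              for w in range(len(WORDS)) for _ in range(200)])
y = np.repeat(np.arange(len(WORDS)), 200)
mu, sd = X.mean(axis=0), X.std(axis=0)
X = (X - mu) / sd

# One-hidden-layer network, trained with plain gradient descent.
H, C, lr = 16, len(WORDS), 0.5
W1 = 0.1 * rng.standard_normal((X.shape[1], H))
b1 = np.zeros(H)
W2 = 0.1 * rng.standard_normal((H, C))
b2 = np.zeros(C)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)    # softmax probabilities

onehot = np.eye(C)[y]
for _ in range(500):
    h, p = forward(X)
    g_logits = (p - onehot) / len(X)              # softmax cross-entropy gradient
    g_h = g_logits @ W2.T * (1.0 - h ** 2)        # backprop through tanh
    W2 -= lr * h.T @ g_logits
    b2 -= lr * g_logits.sum(axis=0)
    W1 -= lr * X.T @ g_h
    b1 -= lr * g_h.sum(axis=0)

# "Decipher" a fresh, unseen signal for word index 1 ("go").
test = (features(synth_signal(1)) - mu) / sd
_, p = forward(test[None, :])
print("predicted word:", WORDS[int(p.argmax())])
```

The division of labor mirrors the article’s description: the feature step plays the role of the amplifier and signal conditioning, while the small network does the deciphering. Real subvocal signals are far noisier and vary from speaker to speaker, which is why pattern-learning software, rather than fixed rules, handles the matching.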