There is a new viral video going around involving a “voiceless phone call”. Tom Simonite writes on NewScientist.com:
A neckband that translates thought into speech by picking up nerve signals has been used to demonstrate a "voiceless" phone call for the first time.
With careful training a person can send nerve signals to their vocal cords without making a sound. These signals are picked up by the neckband and relayed wirelessly to a computer that converts them into words spoken by a computerised voice.
The system demonstrated at the TI conference can recognise only a limited set of about 150 words and phrases, says Callahan, who likens this to the early days of speech recognition software.
At the end of the year Ambient plans to release an improved version, without a vocabulary limit. Instead of recognising whole words or phrases, it should identify the individual phonemes that make up complete words.
I have no clue how this actually works (there’s an HMM in there somewhere, right?), but its implications for models of speech production ought to be significant. The folks over at Haskins Laboratories should be interested, I’d think.
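For what it’s worth, here’s a toy sketch of the kind of HMM decoding I’m guessing at: a tiny hidden Markov model run through the Viterbi algorithm, with phonemes as hidden states and quantised sensor features as observations. Everything here (the two phonemes, the feature symbols, all the probabilities) is invented purely for illustration; nothing about Ambient’s actual system is implied.

```python
# Toy HMM phoneme decoder via the Viterbi algorithm.
# All states, observations, and probabilities are made up for illustration.

# Hidden states: two hypothetical phonemes.
states = ["AH", "S"]

# Start and transition probabilities over phonemes (invented).
start_p = {"AH": 0.6, "S": 0.4}
trans_p = {
    "AH": {"AH": 0.7, "S": 0.3},
    "S":  {"AH": 0.4, "S": 0.6},
}

# Emission probabilities: how likely each phoneme is to produce each
# quantised feature symbol from the (hypothetical) nerve-signal sensor.
emit_p = {
    "AH": {"low": 0.7, "high": 0.3},
    "S":  {"low": 0.2, "high": 0.8},
}

def viterbi(obs):
    """Return the most likely phoneme sequence for a list of observations."""
    # V[t][s] = (probability of the best path ending in state s at time t,
    #            that path as a list of states)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][prev][0] * trans_p[prev][s] * emit_p[s][o],
                 V[-2][prev][1] + [s])
                for prev in states
            )
            V[-1][s] = (prob, path)
    _, best_path = max(V[-1].values())
    return best_path

print(viterbi(["low", "high", "high"]))  # -> ['AH', 'S', 'S']
```

A real phoneme recogniser would of course work over continuous acoustic- or EMG-style features (Gaussian mixtures rather than a lookup table) and thousands of context-dependent states, but the decoding idea is the same: find the state sequence that best explains the signal.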
(HT Andrew Sullivan)
Here's the video. Cool stuff.