Researchers at MIT have created a computer system that transcribes words users speak silently, MIT News reports. The interface can transcribe words that the user verbalizes internally but does not actually speak aloud. The system consists of a wearable device, which runs from just behind the ear along the jaw, and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to associate particular signals with particular words.
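The article describes the core pipeline only at a high level: electrode signals go in, and a trained model maps each signal pattern to a word. As a minimal sketch of that idea, the toy code below uses a nearest-centroid classifier on synthetic signal windows; the electrode counts, vocabulary words, and data are all invented for illustration and are not the researchers' actual model.

```python
import numpy as np

# Toy stand-in for the trained signal-to-word model described in the
# article. A "window" is a flattened block of synthetic electrode
# readings (7 hypothetical channels x 20 samples = 140 values).
rng = np.random.default_rng(0)

def make_windows(center, n=30):
    # Simulate n noisy recordings of the same silently-spoken word.
    return center + 0.1 * rng.standard_normal((n, 140))

# Two made-up vocabulary words, each with a characteristic signal.
centers = {"yes": rng.standard_normal(140), "no": rng.standard_normal(140)}
train = {word: make_windows(c) for word, c in centers.items()}

# "Training": store the mean signal window (centroid) per word.
centroids = {word: x.mean(axis=0) for word, x in train.items()}

def classify(window):
    # Predict the word whose centroid is closest to the signal window.
    return min(centroids, key=lambda w: np.linalg.norm(window - centroids[w]))

# A fresh noisy window near the "yes" signature is labeled "yes".
probe = centers["yes"] + 0.1 * rng.standard_normal(140)
print(classify(probe))  # "yes"
```

A real system would use a far richer model, but the structure is the same: learn a mapping from recorded signal patterns to vocabulary items, then match new recordings against it.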
The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. The headphones let the system pass information to the wearer without interrupting conversations or otherwise interfering with the user’s auditory experience, according to the report. “The motivation for this was to build an IA device – an intelligence-augmentation device,” said lead researcher Arnav Kapur, a graduate student at the MIT Media Lab who led the development of the new system.
“Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”
The device, called AlterEgo, is part of a complete silent-computing system that lets the user silently give information and receive feedback. In an experiment conducted by the researchers, subjects used the system to silently report opponents’ moves in a chess game and silently receive computer-recommended responses, MIT News reports. Arnav Kapur is the first author on the paper; Pattie Maes is the senior author; and they are joined by Shreyas Kapur, an undergraduate majoring in electrical engineering and computer science.
According to CNET, in trials involving 15 participants, the device achieved an average transcription accuracy of 92 percent. Science Alert compares the device to a myoelectric prosthetic, which reads the electrical signals the brain sends to the muscles to tell them what to do. Speaking is more complex but relies on the same principle. However, the device requires calibration for every user, because each person’s neuromuscular signals differ slightly; in effect, the system has to learn the wearer’s “accent” before it can work.
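The per-user calibration step can be illustrated with a simple sketch: normalize each wearer's raw readings against a short enrollment recording so that the shared word model sees signals on a common scale. This is an assumption about how such calibration might work, not the researchers' published method; the gains, offsets, and shapes below are made up.

```python
import numpy as np

# Hypothetical per-user calibration: z-score each wearer's signals
# using statistics from a short enrollment recording.
rng = np.random.default_rng(1)

def calibrate(enrollment):
    # Estimate this wearer's per-channel baseline and spread, and
    # return a function that normalizes their raw signals.
    mu = enrollment.mean(axis=0)
    sigma = enrollment.std(axis=0)
    return lambda x: (x - mu) / sigma

# The same underlying muscle activity, picked up with different
# per-user gain and offset (electrode placement, physiology).
activity = rng.standard_normal((50, 140))
user_a = 2.0 * activity + 5.0   # user A's raw electrode readings
user_b = 0.5 * activity - 3.0   # user B's raw readings, same activity

norm_a = calibrate(user_a)(user_a[0])
norm_b = calibrate(user_b)(user_b[0])

# After calibration the two wearers' signals line up.
print(bool(np.allclose(norm_a, norm_b)))  # True
```

Under this sketch, a model trained on normalized signals from one group of users could transfer to a new wearer after only the brief enrollment step, which matches the article's point that retained word-signal configurations make adoption easier for new users.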
Once the word-signal configurations are programmed into the device, it can retain that information, making adoption easier for new users. The team is currently collecting data on more complex conversations to try to expand the system’s capabilities, according to Science Alert. The researchers presented a paper describing the device at the Association for Computing Machinery’s Intelligent User Interfaces (IUI) conference, held in Japan on March 7-11. A demonstration of the device is available on YouTube.