Microsoft’s Kinect is being used to bridge the divide between sign language and the spoken word. Researchers at Microsoft hope the device will do for sign language translation what Google has done for language-to-language translation.
Known as the Kinect Sign Language Translator, the software is currently only a research prototype. Despite being in the early stages of development, the system can already translate sign language into spoken language and vice versa.
The best part of the program? It translates in real-time!
The program works by having the Kinect capture gestures, while machine learning and pattern recognition software interpret their meaning. The program then displays both the deaf person and the speaker on screen, so each party can easily see the other.
Spoken words are turned into visual signs, and signs are turned into spoken words.
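The capture-then-classify pipeline described above can be sketched, in a deliberately simplified form, as a nearest-template matcher over hand trajectories. Everything here is hypothetical for illustration: the words, the 2D coordinates, and the distance measure are assumptions, while the real system works on 3D skeleton data with machine-learned recognition models.

```python
import math

# Hypothetical templates: each sign is represented as a short trajectory of
# (x, y) hand positions, standing in for the depth-sensor skeleton data the
# actual system would use.
TEMPLATES = {
    "hello": [(0.0, 0.0), (0.1, 0.2), (0.2, 0.4)],   # hand rising
    "thanks": [(0.0, 0.4), (0.1, 0.2), (0.2, 0.0)],  # hand lowering
}

def trajectory_distance(a, b):
    """Sum of pointwise Euclidean distances between two equal-length trajectories."""
    return sum(math.dist(p, q) for p, q in zip(a, b))

def recognize(trajectory):
    """Return the template word whose trajectory is closest to the input."""
    return min(TEMPLATES, key=lambda word: trajectory_distance(trajectory, TEMPLATES[word]))

# A noisy capture of an upward hand motion should match "hello".
observed = [(0.02, 0.01), (0.12, 0.18), (0.19, 0.42)]
print(recognize(observed))  # -> hello
```

This toy version ignores timing, hand shape, and sequence length, which is why production recognizers rely on far richer models trained from many recorded examples per word.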
Taking the Kinect Sign Language Translator software to the next level, researchers have already added 300 Chinese sign language words. At this point it takes five people to establish the recognition patterns for each word.
Kinect researchers tried data gloves and webcams but ultimately found out what we already knew: the Microsoft Kinect is a wonder device that continues to expand the user experience for motion-based computing and interaction.
Guobin Wu, the program manager of the Kinect Sign Language Translator project, says there are more than 20 million people in China who are hard of hearing and 360 million such people around the world.
The biggest obstacle at this point might be taking the Kinect technology and making it easily transportable for on-the-go use. It doesn’t seem feasible to place every person we talk to in front of a TV set. Perhaps a Google Glass-type translation service with portable Kinect technology will surface in the future.
Do you think the Microsoft Kinect will continue to amaze us with new hacks that were not thought up when the product first debuted?