According to the National Institute on Deafness and Other Communication Disorders, approximately 7.5 million people in the U.S. have trouble using their voices. This group is at risk of being left behind by voice-recognition technology. But it is 2021, an era of making technology more accessible to everyone, and tech firms including Apple and Google are working to improve their voice assistants' handling of atypical speech, training them to understand everyone.
“For someone who has cerebral palsy and is in a wheelchair, being able to control their environment with their voice could be super useful to them,” said Julie Cattiau, a product manager at Google. The company is collecting atypical speech data as part of an initiative to train its voice-recognition tools. Training voice assistants like Siri and Google Assistant to understand atypical speech could improve the experience for a number of groups, including seniors with degenerative diseases.
Apple debuted its Hold to Talk feature on hand-held devices in 2015. It gives users control over how long they want the voice assistant Siri to listen to them, preventing the assistant from interrupting users who stutter before they have finished speaking. Now Apple is working to help Siri automatically detect whether someone speaks with a stutter. The company has built a bank of 28,000 audio clips from podcasts featuring stuttering to help its assistant recognize atypical speech.
Through Project Euphonia, Google is testing a prototype app that lets people with atypical speech communicate with Google Assistant and Google Home smart products. The initiative aims to train the software to understand unique speech patterns, and the company hopes the speech samples it collects will help train its artificial intelligence on the full spectrum of speech.
Amazon isn’t far behind with its Alexa voice assistant. The company has announced Alexa’s integration with Voiceitt, which lets people with speech impairments train an algorithm to recognize their unique vocal patterns.