“It was easy, fun, and I’d do it all over again.” I know those are not the typical words used by someone who just finished participating in a research study, especially one focused on ALS. But that’s exactly how I felt last week when I pressed the submit button on my final voice recording for the Speech Accessibility Project.
Readers of this column will know that, because I have ALS, one of my quests is to prevent my ALS-related dysarthria from robbing me of my ability to speak. My other quest has been to support efforts to improve how voice-activated devices respond to voices like mine.
Dysarthria feels like having a bad case of laryngitis and a lazy tongue that’s two steps behind what my mind wants to say. When I speak with family members and friends, they allow me the extra time it takes to pronounce multisyllabic words or long sentences. But voice-activated devices have no such patience. I’m cut off midword while the device searches for what it thinks I said. For example, if I ask for “the best restaurants in Tucson, Arizona,” I often end up with instructions on how to get “the best rest.”
I’m sure my fellow sufferers of dysarthria can share much funnier stories than mine about their interactions with voice activation.
The devices, however, are not to blame, and software designers don’t set out to build systems that ignore certain demographic groups or voice variations. The shortcomings result from how many and what types of voices were used to “train” the software. Giving it additional samples of irregular speech helps it recognize and better understand what a wider range of voices is saying.
That’s why when I first read about the Speech Accessibility Project in an ALS News Today article, I said, “Count me in.”
What happened next
I began by meeting online with the project’s speech-language pathologist, so she could assess my speech and explain how I’d make the online recordings. We reviewed the consent form, and she addressed my questions. I learned that I had three months to complete my recordings, but I could go at my own pace, stopping and starting again at the point where I’d left off.
Once I began my recordings, I discovered the sessions were set up as if I were playing an online game, complete with a congratulatory message each time I finished a designated level. The levels ranged from typical single-word commands such as “stop,” “listen,” or “pause” to phrases like “What’s the temperature in Cincinnati, Ohio?” (I had to take in a big breath of air for that one!)
The Speech Accessibility Project still needs voice recordings from people besides those with ALS. People with other neurological conditions, such as Parkinson’s disease, Down syndrome, stroke, aphasia, and cerebral palsy, are invited to apply. If you’re interested in participating, just follow the link above.
Let’s help technology learn more about ALS. If computers learn how to understand us better, I believe we can continue to live well while living with ALS.
Note: ALS News Today is strictly a news and information website about the disease. It does not provide medical advice, diagnosis, or treatment. This content is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition. Never disregard professional medical advice or delay in seeking it because of something you have read on this website. The opinions expressed in this column are not those of ALS News Today or its parent company, BioNews, and are intended to spark discussion about issues pertaining to ALS.