AI models to be improved to understand people with speech disabilities

The University of Illinois Urbana-Champaign (UIUC) has partnered with big tech companies and nonprofits on the Speech Accessibility Project. The aim is to improve voice recognition for communities with disabilities and diverse speech patterns often not considered by AI algorithms.
‘Speech interfaces should be available to everybody, and that includes people with disabilities’, UIUC professor Mark Hasegawa-Johnson said. ‘This task has been difficult because it requires a lot of infrastructure, ideally the kind that can be supported by leading technology companies, so we’ve created a uniquely interdisciplinary team with expertise in linguistics, speech, AI, security, and privacy’.
To include people with Parkinson’s, Lou Gehrig’s disease (ALS), cerebral palsy, Down syndrome, and other conditions that affect speech, the Project will collect speech samples from individuals representing a diversity of speech patterns. UIUC will recruit paid volunteers to contribute voice samples and help create a ‘private, de-identified dataset’ that can be used to train machine learning models. The group will focus on American English at the start.