Apple announces plans to add AI capabilities to 3 new accessibility features

Apple is working on several new AI-enhanced accessibility features to help users with visual, mobility, speech, and cognitive disabilities.
Several new accessibility features and updates are on the way to iPhone and iPad, with Apple using hardware, software, and machine learning to make them happen. “These groundbreaking features were designed with feedback from members of the disability community every step of the way,” said Sarah Herrlinger, Apple’s senior director of Global Accessibility Policy and Initiatives, in the announcement, “to support a diverse user base and help people connect in new ways.”
Assistive Access, developed in collaboration with people with cognitive disabilities and their trusted supporters, streamlines the interfaces of select apps to “lighten the cognitive load.” It combines similar apps (like Phone and FaceTime) into a single experience and offers settings that can be tailored to an individual user’s needs. These include an emoji-only keyboard for those who prefer more visual communication and the option to organize the home screen into rows for those who prefer text, among other options.
Live Speech is designed to support nonverbal users, as well as those who have lost the ability to speak over time, by offering saved response phrases and text-to-speech during phone calls and in-person conversations. And a Personal Voice tool can record 15 minutes of audio (suggested text read aloud by the user) and then use that data, processed with on-device machine learning, to generate speech in the user’s own voice.
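For developers curious about the underlying building block, this kind of text-to-speech is already exposed through AVFoundation’s AVSpeechSynthesizer. The sketch below shows how an app could speak typed text or saved phrases aloud; it is only an illustration under that assumption, not Apple’s Live Speech implementation, and the SpeechHelper class and phrase list are hypothetical.

```swift
import AVFoundation

// A minimal Live Speech-style sketch using Apple's public
// AVSpeechSynthesizer API. Hypothetical helper, not Apple's
// actual Live Speech code.
final class SpeechHelper {
    private let synthesizer = AVSpeechSynthesizer()

    // Saved responses the user can trigger quickly mid-conversation
    // (hypothetical examples).
    let savedPhrases = ["Yes, please.", "One moment.", "Thank you!"]

    // Speak arbitrary typed text aloud.
    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)
    }

    // Speak one of the saved phrases by index.
    func speakSavedPhrase(at index: Int) {
        guard savedPhrases.indices.contains(index) else { return }
        speak(savedPhrases[index])
    }
}

// Usage:
// let helper = SpeechHelper()
// helper.speak("I'll be there in five minutes.")
// helper.speakSavedPhrase(at: 0)
```

The synthesizer here uses a stock system voice; the appeal of Personal Voice is that the generated speech would sound like the user rather than a generic voice.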