My Algorithms Have Determined You’re Not Human: AI-ML, Reverse Turing-Tests, and the Disability Experience

The past decade has seen exponential growth in the capabilities and deployment of artificial intelligence systems based on deep neural networks. These are visible in the speech recognition and natural language processing of Alexa/Siri/Google that structure many of our everyday interactions, and in the promise of SAE Level 5 autonomous driving from Tesla and Waymo. Beyond these shiny, visible applications of AI-ML are many subtler uses: AI-ML is now being used to screen job applicants and to determine which web ads we are shown. And while many vendors of AI-ML technologies have promised that these tools provide greater access and freedom from human prejudice, disabled users have found that they can embed and deploy newer, subtler forms of discrimination against disabled people. At their worst, AI-ML systems can deny disabled people their humanity.

Focus: AI and Disability/Outliers
Source: ASSETS 2019
Readability: Expert
Type: PDF Article
Open Source: No
Keywords: Artificial intelligence, deep neural networks, disabilities, race, bias
Learn Tags: Bias, Data Collection/Data Set Design/Methods, Disability, Ethics, Inclusive Practice, Machine Learning, Small Data
Summary: A keynote presentation from the 2019 ASSETS conference that argues diversity and inclusion need to be introduced at the start of the AI-ML design process, rather than as an afterthought.