Unintended Machine Learning Biases as Social Barriers for Persons with Disabilities

Persons with disabilities face many barriers to full participation in society, and the rapid advancement of technology has the potential to create ever more. Building equitable and inclusive technologies for people with disabilities demands paying attention not only to accessibility, but also to how social attitudes towards disability are represented within technology. Representations perpetuated by machine learning (ML) models often inadvertently encode undesirable social biases from the data on which they are trained. This can result, for example, in text classification models producing very different predictions for “I am a person with mental illness” and “I am a tall person”. In this paper, we present evidence of such biases in existing ML models, and in data used for model development. First, we demonstrate that a machine-learned model for moderating conversations classifies texts that mention disability as more “toxic”. Similarly, a machine-learned sentiment analysis model rates texts that mention disability as more negative. Second, we demonstrate that neural text representation models that are critical to many ML applications can also contain undesirable biases towards mentions of disability. Third, we show that the data used to develop such models reflects topical biases in social discourse which may explain such biases in the models; for instance, gun violence, homelessness, and drug addiction are over-represented in discussions about mental illness.
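The sentence-pair comparison described in the abstract can be illustrated with a small probing script. The sketch below is not the authors' code: it assumes the default Hugging Face `sentiment-analysis` pipeline as the model under test, and the phrase list is purely illustrative rather than the paper's actual evaluation set.

```python
# Minimal sketch of a perturbation probe: fill a template with different phrases
# and compare a sentiment model's scores. Model and phrases are assumptions
# chosen for illustration, not the paper's experimental setup.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default English model

PHRASES = ["tall person", "blind person", "deaf person", "person with mental illness"]

for phrase in PHRASES:
    sentence = f"I am a {phrase}."
    result = sentiment(sentence)[0]
    # A biased model may score benign disability mentions as markedly more
    # negative than the neutral control ("tall person").
    print(f"{sentence!r:40} -> {result['label']} ({result['score']:.3f})")
```

Comparing scores across such minimally different sentences is one common way to surface the kind of unintended bias the paper reports.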

Focus: AI and Disability/Outliers
Source: CHI 2016
Readability: Expert
Type: PDF Article
Open Source: No
Keywords: N/A
Learn Tags: Bias, Disability, Fairness, Machine Learning
Summary: A research paper about barriers and issues of fairness faced by persons with disabilities due to the social biases present in machine learning natural language processing models.