Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need?

The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent work has focused on the development of algorithmic tools to assess and mitigate such unfairness. If these tools are to have a positive impact on industry practice, however, it is crucial that their design be informed by an understanding of real-world needs. Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, we conduct the first systematic investigation of commercial product teams’ challenges and needs for support in developing fairer ML systems. We identify areas of alignment and disconnect between the challenges faced by teams in practice and the solutions proposed in the fair ML research literature. Based on these findings, we highlight directions for future ML and HCI research that will better address practitioners’ needs.

Focus: AI Ethics/Policy
Source: CHI 2019
Readability: Expert
Type: PDF Article
Open Source: No
Keywords: algorithmic bias, fair machine learning, product teams, needfinding, empirical study, UX of machine learning
Learn Tags: Bias Business Design/Methods Ethics Fairness Inclusive Practice Solution
Summary: Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, this paper presents the first systematic investigation of commercial product teams’ challenges and needs for support in developing fairer machine learning systems.