Understanding the Effect of Accuracy on Trust in Machine Learning Models

We address a relatively under-explored aspect of human–computer interaction: people’s ability to understand the relationship between a machine learning model’s stated performance on held-out data and its expected performance post-deployment. We conduct large-scale, randomized human-subject experiments to examine whether lay people’s trust in a model, measured both by the frequency with which they revise their predictions to match those of the model and by their self-reported levels of trust in the model, varies depending on the model’s stated accuracy on held-out data and on its observed accuracy in practice. We find that people’s trust in a model is affected by both its stated accuracy and its observed accuracy, and that the effect of stated accuracy can change depending on the observed accuracy. Our work relates to recent research on interpretable machine learning, but moves beyond the typical focus on model internals, exploring a different component of the machine learning pipeline.

Focus: AI Ethics/Policy
Source: CHI 2019
Readability: Expert
Type: PDF Article
Open Source: No
Keywords: Machine learning, trust, human-subject experiments
Learn Tags: Design/Methods Machine Learning Solution Trust
Summary: An article reporting randomized human-subject experiments showing that lay people's trust in a machine learning model is affected both by the model's stated accuracy on held-out data and by its observed accuracy in practice.