Toward Accountable Discrimination-Aware Data Mining: The Importance of Keeping the Human in the Loop — and Under the Looking-Glass

“Big Data” and data-mined inferences are affecting more and more of our lives, and concerns about their possible discriminatory effects are growing. Methods for discrimination-aware and fairness-aware data mining aim at keeping decision processes supported by information technology free from unjust grounds. However, these formal approaches alone are not sufficient to solve the problem. In the present article, we describe reasons why discrimination with data can and typically does arise through the combined effects of human and machine-based reasoning, and argue that this requires a deeper understanding of the human side of decision making with data mining. We describe results from a large-scale human-subjects experiment that investigated such decision making, analysing the reasoning that participants reported during their task of assessing whether a loan request should or would be granted. We derive data-protection-by-design strategies for making decision making discrimination-aware in an accountable way, grounding these requirements in the accountability principle of the EU General Data Protection Regulation, and outline how their implementations can integrate algorithmic, behavioural, and user-interface factors.

Focus: AI Ethics/Policy
Source: Big Data
Readability: Expert
Type: PDF Article
Open Source: No
Keywords: Discrimination discovery and prevention, Discrimination-aware and fairness-aware data mining, Accountability, Data protection, Evaluation, User studies
Learn Tags: Bias, Design/Methods, Ethics, Fairness
Summary: This paper discusses discrimination-aware data mining practices and the role of humans in (semi-)automated decision making, deriving accountable data-processing design strategies that encourage reflection and help avoid unconscious bias.