Toward Situated Interventions for Algorithmic Equity: Lessons from the Field
Research to date on the fairness, accountability, and transparency of algorithmic systems has largely focused on identifying failures of current systems and on technical interventions intended to reduce bias in computational processes. Researchers have given less attention to methods that account for the social and political contexts of specific, situated technical systems at their points of use. Co-developing algorithmic accountability interventions with communities supports outcomes that are more likely to address problems in their situated context and to re-center power with those most disparately affected by the harms of algorithmic systems. In this paper we report on our experiences using participatory and co-design methods for algorithmic accountability in a project called the Algorithmic Equity Toolkit. The main insights we gleaned from our experiences were: (i) many meaningful interventions toward equitable algorithmic systems are non-technical; (ii) community organizations derive the most value from localized materials rather than from materials designed to be “scalable” beyond a particular policy context; and (iii) framing harms in terms of algorithmic bias suggests that more accurate data is the solution, at the risk of missing deeper questions about whether some technologies should be used at all. More broadly, we found that community-based methods are important inroads to addressing algorithmic harms in their situated contexts.