Society‑in‑the‑Loop: Programming the Algorithmic Social Contract

Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve this, we can adapt the concept of human-in-the-loop (HITL) from the fields of modeling and simulation, and interactive machine learning. In particular, I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of various stakeholders affected by AI systems, and monitoring compliance with the agreement. In short, ‘SITL = HITL + Social Contract.'

Focus: AI Ethics/Policy
Source: Ethics and Information Technology Journal
Readability: Expert
Type: PDF Article
Open Source: No
Keywords: Ethics, Artificial Intelligence, Society, Governance, Regulation
Learn Tags: Ethics Fairness Framework Government Machine Learning
Summary: An article that extends the current human-in-the-loop (HITL) paradigm for monitoring and supervising machine learning systems with social contract theory, yielding society-in-the-loop (SITL), in which a broad set of societal stakeholders, rather than only the system's human creator, oversee the system.