From Principles to Practice: How Can We Make AI Ethics Measurable?

Discussions about the societal consequences of algorithmic decision-making systems are omnipresent. A growing number of guidelines for the ethical development of so-called artificial intelligence (AI) have been put forward by stakeholders from the private sector, civil society, and the scientific and policymaking spheres. The Bertelsmann Stiftung’s Algo.Rules are among this body of proposals. However, it remains unclear how organizations that develop and deploy AI systems should implement precepts of this kind. In cooperation with the nonprofit VDE standards-setting organization, we are seeking to bridge this gap with a new working paper that demonstrates how AI ethics principles can be put into practice.

Focus: AI Ethics/Policy
Source: Ethics of Algorithms
Readability: Intermediate
Type: Website Article
Open Source: Yes
Keywords: N/A
Learn Tags: Data Tools Design/Methods Ethics Fairness Framework
Summary: This working paper proposes the creation of an ethics label for AI systems that developers could use to communicate the quality of their products according to six key values: transparency, accountability, privacy, justice, reliability, and environmental sustainability.