New study proposes a standardization approach to identify trustworthy AI  

A new study conducted by the German technology organisation VDE and the Bertelsmann Stiftung demonstrates how ethical principles for artificial intelligence (AI) can be put into practice. The study, “From principles to practice – an interdisciplinary framework to operationalise AI ethics”, proposes a standardization approach to help consumers assess the trustworthiness of an AI product or service.
While many ethical guidelines for AI are currently being developed, very few can be practically implemented. One of the greatest obstacles is the vagueness of, and varying interpretations given to, principles like “transparency” and “equity”. The VDE-Bertelsmann study aims to fill this gap: it proposes a method to implement general ethical principles for AI measurably and concretely, based on a combination of three tools: a VCIO model, an AI ethics label and a risk classification.

The first tool, the so-called VCIO model (Values, Criteria, Indicators, Observables), breaks down values into criteria, indicators and, ultimately, measurable observables. Policy makers, regulators and supervisory authorities can use the VCIO model to concretise and implement requirements for AI systems. The second tool, the ethics label for AI systems proposed in the study, allows companies to communicate the ethical properties of their products clearly and uniformly. The label is modelled on the successful energy label for electrical appliances and allows both consumers and companies to compare the products available on the market.
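The Value–Criteria–Indicators–Observables breakdown can be pictured as a simple nested data structure. The sketch below is purely illustrative: the example value “transparency” and its breakdown into criteria, indicators and observables are hypothetical placeholders, not taken from the study.

```python
# Illustrative sketch of the VCIO hierarchy: a value is broken down
# into criteria, each criterion into indicators, and each indicator
# into directly measurable observables.
from dataclasses import dataclass, field


@dataclass
class Observable:
    name: str  # a directly measurable property of the AI system


@dataclass
class Indicator:
    name: str
    observables: list = field(default_factory=list)


@dataclass
class Criterion:
    name: str
    indicators: list = field(default_factory=list)


@dataclass
class Value:
    name: str
    criteria: list = field(default_factory=list)


# Hypothetical breakdown of the value "transparency" (not from the study):
transparency = Value(
    "transparency",
    criteria=[
        Criterion(
            "explainability of decisions",
            indicators=[
                Indicator(
                    "documentation of the model",
                    observables=[
                        Observable("model card published"),
                        Observable("training data sources listed"),
                    ],
                )
            ],
        )
    ],
)

# Count the measurable observables that ultimately ground the value:
n_observables = sum(
    len(ind.observables)
    for crit in transparency.criteria
    for ind in crit.indicators
)
```

The point of the hierarchy is that only the leaves (observables) need to be measured; a regulator can then argue from those measurements back up to the abstract value.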

Finally, the third tool, the risk matrix, helps classify AI use cases. Which properties of an AI system count as “ethically adequate” depends on the specific context of use, so the risk matrix offers an approach for classifying that application context. It assigns each use case to one of four classes, 0 to 3, taking into account the intensity of the potential damage and how dependent the affected persons are on the system, i.e. their ability to circumvent an AI decision or switch to another system. While the lowest class, 0, requires no further ethical considerations, AI systems in classes 1 to 3 may only be used if they carry the AI ethics label.
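A two-dimensional classification of this kind can be sketched as a small function. The rating scales and thresholds below are illustrative assumptions, since the study text quoted here does not give a numeric formula; only the two axes (intensity of potential damage, dependence of the affected persons) and the four classes come from the study.

```python
# Hedged sketch of a two-dimensional risk classification into classes
# 0-3, driven by (a) the intensity of potential damage and (b) the
# dependence of affected persons on the system. The specific rule
# below (take the higher of the two ratings) is an assumption for
# illustration, not the study's published method.

def classify_risk(damage_intensity: int, dependence: int) -> int:
    """Map ratings (0 = negligible ... 3 = high) on both axes to a class 0-3.

    Illustrative rule: the class follows the more severe dimension,
    and class 0 applies only when the potential damage is negligible.
    """
    if not (0 <= damage_intensity <= 3 and 0 <= dependence <= 3):
        raise ValueError("ratings must be between 0 and 3")
    if damage_intensity == 0:
        return 0  # no further ethical considerations required
    return max(damage_intensity, dependence)


# Per the study, class 0 needs no label; classes 1-3 require the label.
requires_label = classify_risk(2, 1) >= 1
```

Under this sketch a system with moderate damage potential but affected persons who cannot easily switch away would land in a high class, which matches the study's intuition that dependence raises the ethical stakes.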

The study, “From principles to practice – an interdisciplinary framework to operationalise AI ethics” can be downloaded for free at www.vde.com/presse or www.ai-ethics-impact.org.

More information on CEN and CENELEC’s standardization activities in the field of AI can be found here.