
As a remedy for the black-box nature of many machine learning models, EU legislation requires that AI be transparent and explainable ('responsible AI'). In response, scientists have proposed two so-called XAI (explainable AI) solutions, both of which remain under discussion. This timely book charts the fierce debates in law, medicine, and finance about the use of black boxes and the suitability of XAI tools.
A subsequent analysis of EU law and case law on data protection and AI governance reveals that the 'right to explanation' has gradually been strengthened. However, legislation still largely ignores the ongoing discussion about XAI tools. It is therefore far from satisfactory and leaves citizens who seek explanations empty-handed. This book analyses the debates in the machine learning community and shows how they could inform legislation and regulation. In doing so, it bridges computer science and law and helps reduce the disconnect between the two communities.
AI Transparency & Explainability is a pivotal resource for scholars and students working at the intersection of technology law and machine learning. AI regulators and government policymakers will likewise benefit from its interdisciplinary scope.