Explainable AI – Concepts and application
Explainable AI (XAI) comprises methods and techniques that make the decision-making processes of AI systems understandable to humans. (: 10) A basic distinction is drawn between inherently interpretable models and black-box models.
Inherently interpretable models, such as shallow decision trees or linear regression models, are transparent by design, as their structure and functioning are easy to understand. Black-box models, such as neural networks or random forests, on the other hand, usually deliver higher predictive accuracy but require additional explanatory mechanisms, because their decision logic is not directly comprehensible to humans. These so-called post-hoc methods are applied to the models after training. (: 11)
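The contrast can be illustrated with a minimal sketch: an interpretable linear model whose fixed weights are themselves the explanation, and a black box that only exposes predictions and is therefore explained post hoc, here with permutation importance (shuffle one feature and measure how much the error grows). The data, models, and weights are hypothetical and chosen only for illustration.

```python
import random

# Hypothetical toy data: two features; the target depends only on feature 0.
random.seed(0)
X = [(random.random(), random.random()) for _ in range(200)]
y = [2.0 * x1 for x1, x2 in X]

# Inherently interpretable model: the weights ARE the explanation.
weights = (2.0, 0.0)
def linear_model(x):
    return weights[0] * x[0] + weights[1] * x[1]

# "Black box": we only observe predictions, not internals
# (stand-in for e.g. a neural network).
def black_box(x):
    return 2.0 * x[0]

def mse(model, X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

# Post-hoc method: permutation importance — shuffle one feature column
# and report the resulting increase in the model's error.
def permutation_importance(model, X, y, feature):
    base = mse(model, X, y)
    shuffled = [x[feature] for x in X]
    random.shuffle(shuffled)
    X_perm = [tuple(s if i == feature else v for i, v in enumerate(x))
              for x, s in zip(X, shuffled)]
    return mse(model, X_perm, y) - base

imp = [permutation_importance(black_box, X, y, f) for f in range(2)]
print(imp)  # feature 0 has positive importance; feature 1 is ignored
```

Shuffling a feature the model ignores leaves the error unchanged, so its importance is zero, while the feature the model actually uses receives a large score. This is the general pattern of post-hoc methods: they probe the model from the outside rather than inspecting its internals.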
XAI methods address different dimensions of transparency. Algorithmic transparency provides insight into how a model works, for example through visualisations of its internal processes. Result-oriented transparency focuses on explaining individual predictions by highlighting the features relevant to them.
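Result-oriented transparency can be sketched for the simplest case, an additive linear model, where each feature's contribution to a single prediction is just its weight times its value. The feature names and weights below are purely hypothetical.

```python
# Hypothetical linear model: the prediction decomposes into
# additive per-feature contributions.
weights = {"age": 0.3, "income": 0.5, "tenure": -0.2}

def explain_prediction(x):
    """Result-oriented explanation: contribution of each feature
    to this one prediction."""
    contributions = {f: weights[f] * x[f] for f in weights}
    prediction = sum(contributions.values())
    return prediction, contributions

pred, contrib = explain_prediction({"age": 40, "income": 10, "tenure": 5})

# Highlight the relevant features by ranking absolute contributions.
ranked = sorted(contrib, key=lambda f: abs(contrib[f]), reverse=True)
print(pred, ranked)  # 16.0 ['age', 'income', 'tenure']
```

For black-box models, post-hoc techniques such as LIME or SHAP approximate exactly this kind of per-prediction, per-feature attribution; the linear case simply makes the decomposition exact.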