Explainable Artificial Intelligence for neural networks and its evaluation

November 26, 2025   

This is a Ph.D.-level course on evaluating the explanations provided by eXplainable Artificial Intelligence (XAI) methods for deep neural networks.

The course is hosted by TU Delft and is given in the context of the AIDA Doctoral Academy, but is accessible remotely, free of charge, to everyone.

The lectures will be taught from January 12th to January 23rd, 2026.


Lecturers

Dr. Marco Zullich (TU Delft, the Netherlands)

Emily Schiller (UC Cork, Ireland & Xitaso GmbH, Germany)


Abstract

In the last decade, the GDPR and the EU AI Act have formalized the concept of transparency in the context of AI models. Transparency, in the narrower sense of interpretability of the predictive logic of a model, can be achieved through white-box models, i.e., models whose low complexity makes them human-interpretable. However, these models often lack the predictive power of black-box models such as Neural Networks. Despite the low degree of interpretability of black-box models, an approximate understanding of their predictive dynamics can be achieved by means of the tools provided by Explainable AI (XAI).

However, a crucial issue with these tools is the overall difficulty of evaluating, in a formal, functional way, the quality of their outputs; this generally undermines trust in them and severely hinders their applicability in safety-critical settings. This course aims to provide an introductory overview of XAI, with specific attention to the explainability of Neural Networks, and then focuses on the various aspects of what defines quality in the context of XAI.


Course program

The course is composed of three main modules:

  1. Lectures (12 hours): accessible in person at TU Delft and online via live stream. The lectures cover the foundations of XAI and its evaluation.
  2. Essay writing (ca. 3 hours): each student will write a short essay on a topic related to XAI evaluation (possibly connected to their Ph.D. research topic).
  3. Group discussion (ca. 3 hours) - in person only: students will be divided into small groups and will reflect on the outcomes of their essays, possibly proposing new ideas and research directions.

Note: attendance at the group discussion is mandatory only for TU Delft Ph.D. participants. For AIDA or external students, if a formal assessment is needed, the essay can be replaced by a 30-minute multiple-choice exam.

Schedule

Note: locations for in-person attendees will be communicated in due time.

Date         Time (CET)   Room                Topic                                              Lecturer
12 Jan 2026  15:00-17:00  TPM-Instruction D2  Introduction to XAI                                Dr. Marco Zullich
14 Jan 2026  15:00-17:00  TPM-Instruction D2  Model-agnostic feature importance                  Dr. Marco Zullich
16 Jan 2026  12:45-15:45  TPM-Hall J          XAI for Neural Networks                            Dr. Marco Zullich
19 Jan 2026  15:00-17:00  TPM-Instruction D1  Counterfactual examples & intro to XAI evaluation  Dr. Marco Zullich
21 Jan 2026  15:00-17:00  TPM-Instruction D2  Functional evaluation of XAI tools                 Dr. Marco Zullich
23 Jan 2026  15:00-17:00  TPM-Instruction D2  Uncertainty evaluation & XAI                       Emily Schiller
Feb 2026     TBA          TBA                 Group discussion                                   TBA

Enrollment

Enrollment closed on Monday, January 5, 2026.

Enrollment was handled via this form: https://forms.office.com/e/eLYUpwYCS3.


Lecturers' bios

To be added soon.