This course is taught by Dr. Somya Iqbal and covers the main ideas of model explainability and interpretability in machine learning, under the wider umbrella of explainable AI (XAI). Core topics include models that are inherently interpretable by design and suited to transparent outputs, post hoc methods for explaining complex models (and recent advances in this area), and demonstrative use cases and examples featuring LIME and Shapley values (SHAP). The course combines core taught content with demonstrative practical sessions.
- Understanding of how explainability in a model can be defined and demonstrated.
- An exploration of relevant explainable models and associated methods.
- In-depth understanding of contexts where explainability is relevant and why it is an important area of research.
- Hands-on experience using a case study approach.
Session 1: This session will be held on Friday 1st May from 10:00-12:00 - in-person
- What counts as an explanation?
- Interpretability vs explainability
- Interpretable models (see the sketch after this list)
- Local and global post hoc methods
Hands-on exercise
Take-home mini exercise
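As a taster of the Session 1 material, here is a minimal sketch of an interpretable-by-design model, assuming scikit-learn is available; the in-class exercise may use different data and models.

```python
# A linear model is transparent by design: each standardised coefficient
# is a direct, global statement of a feature's effect on the prediction.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# Rank features by the magnitude of their coefficients.
coefs = model[-1].coef_[0]
for name, weight in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name}: {weight:+.3f}")
```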
Session 2: This session will be held on Friday 8th May from 10:00-12:00 - in-person
- What makes a black-box model?
- Why evaluate a model before explaining it?
- Model-agnostic explanation methods
- Global explanations
- Feature effect explanations
- Local explanations: SHAP/LIME values (see the sketch below)
- Limits of explanation methods
- Explanation vs causation
Hands-on exercise
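As a preview of the Session 2 material, here is a minimal sketch of a local SHAP explanation, assuming the `shap` package is installed; the course notebooks may structure this differently.

```python
# Explain a single prediction of a black-box model with SHAP values.
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = shap.datasets.adult()  # Adult Census data bundled with shap
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# TreeExplainer attributes a prediction to its input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test.iloc[:1])
print(dict(zip(X_test.columns, shap_values[0].round(3))))
```

Each printed value is that feature's contribution to this single prediction, relative to the model's average output.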
- Data (folder with one dataset, the Adult Census data; a second dataset is fetched from Project Gutenberg within the codebook, as sketched after this list)
- Codebooks (folder with 3 codebooks)
- Slides (folder with 2 sets of slides, added after each session)
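The second dataset is fetched live inside the codebook; a minimal sketch of what that can look like follows. The Gutenberg book ID below is a hypothetical stand-in for whichever text the course actually uses.

```python
# Download a plain-text book from Project Gutenberg.
# Book 1342 (Pride and Prejudice) is a hypothetical example; the
# codebook may fetch a different text.
import urllib.request

url = "https://www.gutenberg.org/files/1342/1342-0.txt"
with urllib.request.urlopen(url) as response:
    text = response.read().decode("utf-8")

print(text[:200])  # preview the opening of the file
```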
The taught materials will be shared as slides after each lecture, and the code used during the in-class hands-on sessions will be placed in the Codebooks folder. The setup for the hands-on exercises is as follows:
- Jupyter notebook with Python code
If you are part of the University of Edinburgh, you can use [Noteable](https://noteable.edina.ac.uk/), the cloud-based computational notebook system that runs in your browser on any device.
1. Open the following link in a new tab: [https://noteable.edina.ac.uk/login](https://noteable.edina.ac.uk/login)
2. Log in with your EASE credentials
3. Under 'Standard Python 3 Notebook' click 'Start'
1. From the Noteable home page, click the 'Git' > 'Clone a Repository' button in the top bar of the screen and enter the link to this repo (https://github.com/DCS-training/Explainable_Machine_Learning_XAI.git)
2. Now click on Clone
3. You have now imported the full repo and can see all the material
4. Double-click to open the relevant Notebook when instructed in class. Notebooks will be numbered by session.
5. Follow the instructions in the Notebook
Noteable is the recommended option for University members, since your EASE credentials give you direct access to the analysis environment without installing any software or tools locally.
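If you are not a University member, or prefer to work locally, a rough equivalent is to install the dependencies and clone the repo from a notebook cell. The package list below is an assumption, so check the notebooks for the exact requirements.

```python
# Run once in a Jupyter notebook cell (the leading '!' executes a
# shell command). The package list is an assumption, not an official
# requirements file for the course.
!pip install scikit-learn shap lime pandas matplotlib
!git clone https://github.com/DCS-training/Explainable_Machine_Learning_XAI.git
```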
- Molnar, C. (2025). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable (3rd ed.). [christophm.github.io/interpretable-ml-book/](https://christophm.github.io/interpretable-ml-book/)
- Recap of ML foundations via the Alan Turing Institute's Responsible AI course; includes code walk-throughs and quizzes.