Understanding ML Model Differences with DeltaXplainer: A Journey into Dynamic Machine Learning Interpretability [Notebook tutorial]

Adam Rida
5 min read · Dec 30, 2023


DeltaXplainer generates rule-based explanations describing differences between models

Introduction: Unraveling Model Discrepancies

Throughout their lifecycle, machine learning (ML) models can be updated and replaced by new versions for different reasons: a performance drop caused by concept drift in the underlying phenomenon being modeled, or business or regulatory constraints.

If those ML models are black boxes (i.e. their inner structure is not accessible or is too complex to be interpreted), how can we be sure that updating the model has not introduced unwanted changes in its predictions?

In the literature, this phenomenon, where two ML models have similar performance yet make different predictions for some data samples, is known as discrepancies, and it can expose various risks (fairness, safety, robustness).

In the context of Explainable AI (XAI), very little work has been done on dynamically studying ML models. The very first workshop on the topic took place in September 2023 at ECML-PKDD in Turin, Italy (Link to workshop).

We present here a method called DeltaXplainer, which aims to explain differences between models in a human-understandable way. This article presents the implementation of DeltaXplainer as a Python library and gives a quick overview of the paper published at this workshop:

Dynamic Interpretability for Model Comparison via Decision Rules, Adam Rida, Marie-Jeanne Lesot, Xavier Renard, Christophe Marsala, https://arxiv.org/pdf/2309.17095.pdf

All relevant references can be found directly in the paper.

Relevant links:

Github: https://github.com/adrida/deltaxplainer

Exploratory Notebook: https://github.com/adrida/deltaxplainer/blob/master/notebooks/get_started.ipynb

PyPI (pip install): https://pypi.org/project/deltaxplainer/

My website: adrida.github.io

Exploring DeltaXplainer: Bridging the Gap

The original intuition behind DeltaXplainer is that, since we are studying a complex distribution (the samples where two models differ), we can use XAI methods directly to extract knowledge from those samples. We propose here an approach using one XAI method: global interpretable surrogates.

The idea behind DeltaXplainer is to take two black-box ML classifiers as input and output a list of segments where they make different predictions.

We propose segments in the form of decision rules, as this format is an interpretable way to explain the complex behavior of models.

More details about the approach can be found in the paper, but, assuming access to the training data of the two models, the main steps are the following (a minimal sketch in code follows the list):

  1. Label the original training data 1 if the two models make different predictions and 0 otherwise
  2. Train a decision tree on this newly labeled data
  3. Extract decision rules from the tree branches leading to class 1
  4. Refine the rules to make them more interpretable
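
As an illustration, here is a minimal sketch of these four steps using scikit-learn directly. The names (model_f, model_g, X_train, feature_names) are placeholders, and this is not the deltaxplainer API itself, which wraps these steps for you:

# Minimal sketch of the four steps above using scikit-learn directly
# (illustrative only; the deltaxplainer library wraps these steps).
from sklearn.tree import DecisionTreeClassifier, export_text

def explain_differences(model_f, model_g, X_train, feature_names, **tree_kwargs):
    # Step 1: label 1 where the two models disagree, 0 otherwise
    delta_labels = (model_f.predict(X_train) != model_g.predict(X_train)).astype(int)
    # Step 2: train a decision tree on the newly labeled data
    surrogate = DecisionTreeClassifier(**tree_kwargs).fit(X_train, delta_labels)
    # Steps 3-4: the branches leading to class 1 are then extracted and refined
    # into decision rules; export_text prints the raw tree as a starting point
    return export_text(surrogate, feature_names=list(feature_names))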

The idea is to use a model that is white-box and interpretable by design to model all the differences between the two original ML classifiers.

Navigating the Exploratory Notebook

You can start by installing the library and then follow the cells in the notebook. Everything should be directly executable.
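
The library is available on PyPI, so installation is a single command:

pip install deltaxplainer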

The notebook is composed of two main sections, the toy example setup and the dynamic explanations.

Section 1: The toy example.

In this section, we create an instance of the famous half-moons dataset. The idea is to give a visual representation of how the method works. We consider two models to compare, a decision tree and a random forest. We train both of them on the same dataset and observe differences in their predictions. This corresponds to the first figure obtained:

The left plot represents the two models and the data with the original labels. The right plot represents the samples where the two models make different predictions.
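
For reference, a comparable toy setup can be reproduced in a few lines. This is a sketch rather than the exact notebook code, and the noise level and model parameters are illustrative:

from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy data: two interleaving half moons with some noise
X, y = make_moons(n_samples=500, noise=0.3, random_state=0)

# The two models to compare, trained on the same data
model_f = DecisionTreeClassifier(random_state=0).fit(X, y)
model_g = RandomForestClassifier(random_state=0).fit(X, y)

# Samples where the two models disagree: the distribution DeltaXplainer explains
disagree = model_f.predict(X) != model_g.predict(X)
print(f"{disagree.sum()} of {len(X)} samples get different predictions")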

Section 2: DeltaXplainer

This section is fairly straightforward and shows how the library is used.

As we use a decision tree to model the differences, some of its parameters are exposed: the maximum depth, the split criterion (gini/entropy), the minimum number of samples per leaf, and the minimum impurity decrease.

To get better control over the interpretability/accuracy trade-off, we suggest adjusting the minimum number of samples per leaf.
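
These correspond to the standard scikit-learn decision tree parameters of the underlying surrogate. How exactly they are passed through the deltaxplainer API may differ (see the notebook for the exact calls), but their effect is the usual one:

from sklearn.tree import DecisionTreeClassifier

# Parameters of the surrogate tree and their effect on the explanations
# (values here are illustrative, not the library defaults)
surrogate = DecisionTreeClassifier(
    max_depth=5,                # shallower tree -> shorter, coarser rules
    criterion="gini",           # or "entropy"
    min_samples_leaf=10,        # larger values -> fewer, broader, more robust segments
    min_impurity_decrease=0.0,  # prune splits that barely improve purity
)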

The final rules are generated in a format that can be parsed depending on the use case.

Model differs for the following 3 segments: 
Segment 1: Feature Y > -0.148 and Feature Y <= 0.316 and Feature X > 0.569 and Feature X <= 1.397 | Probability: 100.0% | based on 10 samples
Segment 2: Feature Y > 0.442 and Feature Y <= 0.54 and Feature X <= 0.412 and Feature X > -0.5 | Probability: 100.0% | based on 6 samples
Segment 3: Feature Y > 0.676 and Feature Y <= 0.687 | Probability: 100.0% | based on Feature 1 samples

The probability displayed represents the proportion of samples covered by the rule that actually belong to class 1 (where the two models, f and g, disagree).
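
In other words, for a given rule, this proportion can be computed as follows (a sketch with hypothetical variable names):

import numpy as np

# delta_labels: 1 where the two models disagree, 0 otherwise
# covered: boolean mask of the samples satisfying the rule's conditions
delta_labels = np.array([1, 1, 0, 1, 0, 1])
covered = np.array([True, True, False, True, False, False])

probability = delta_labels[covered].mean()  # fraction of covered samples in class 1
print(f"Probability: {probability:.1%} | based on {covered.sum()} samples")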

The last subsection of the notebook runs the same procedure but with different parameters for the decision tree used by DeltaXplainer.

Discussion based on Research Paper Insights

Regarding DeltaXplainer, we showed that the method is quite effective at detecting large and simple changes (such as those induced by a homogeneous drift) between models. This comes down to the ability of a decision tree to model the areas where the two models differ. Naturally, the inherent limitations of decision trees, namely their limited ability to model complex behavior, also appear in DeltaXplainer. This limitation manifests itself in DeltaXplainer's limited ability to capture complex and sparse differences.

Furthermore, we argue that this method could be used jointly with others. A recent paper by Hinder et al. (https://arxiv.org/abs/2303.09331) proposes using XAI to study data drift and show where the data has changed. If used before DeltaXplainer, the resulting explanations could be compared with DeltaXplainer's segments to check whether the changes observed between a model and its previous version are as expected.

The Road Ahead: Future Enhancements

Multiple enhancements are possible; a full ML model comparison pipeline could be a good direction forward. This could start with studying the underlying factors behind the changes between two ML models and then use dynXAI methods such as DeltaXplainer to investigate them. DeltaXplainer itself can be extended beyond white-box global surrogates, and other XAI methods can be considered too. The format of the segments can also be improved to enhance interpretability and human-friendliness.

If you want to get in touch to discuss the paper or have comments, please feel free to reach out to me by email. You will find my address on my website: adrida.github.io
