OptiLIME: Optimized LIME Explanations for Diagnostic Computer Algorithms
Giorgio Visani, Enrico Bagli and Federico Chesani
Abstract
Local Interpretable Model-Agnostic Explanations (LIME) is a popular method to perform interpretability of any kind of Machine Learning (ML) model. It explains one ML prediction at a time, by learning a simple linear model around the prediction. The model is trained on randomly generated data points, sampled from the training dataset distribution and weighted according to their distance from the reference point - the one being explained by LIME. Feature selection is applied to keep only the most important variables, whose coefficients are regarded as the explanation. LIME is widespread across different domains, although its instability - a single prediction may obtain different explanations - is one of its major shortcomings. The instability is due to the randomness in the sampling step, and it determines a lack of reliability in the retrieved explanations, making LIME adoption problematic. In Medicine especially, clinical professionals' trust is mandatory to determine the acceptance of an explainable algorithm, considering the importance of the decisions at stake and the related legal issues. In this paper, we highlight a trade-off between the explanation's stability and adherence, namely how much it resembles the ML model. Exploiting our innovative discovery, we propose a framework to maximise stability, while retaining a predefined level of adherence. OptiLIME provides freedom to choose the best adherence-stability trade-off level and, more importantly, it clearly highlights the mathematical properties of the retrieved explanation. As a result, the practitioner is provided with tools to decide whether the explanation is reliable, according to the problem at hand. We extensively test OptiLIME on a toy dataset - to present the geometrical findings visually - and on a medical dataset. On the latter, we show how the method produces meaningful explanations both from a medical and a mathematical standpoint.
Introduction
Nowadays Machine Learning (ML) is pervasive and widespread across multiple domains. Medicine is no exception; on the contrary, it is considered one of the greatest challenges of Artificial Intelligence. The idea of exploiting computers to provide assistance to medical personnel is not new: a historical overview of the topic, starting from the early '60s, is provided in. More recently, computer algorithms have proven useful for representing patients and medical concepts, predicting outcomes and discovering new phenotypes. An accurate overview of ML successes in health-related environments is provided by Topol in.
Unfortunately, ML methods are hardly perfect and, especially in the medical field where human lives are at stake, Explainable Artificial Intelligence (XAI) is urgently needed. Medical education, research and accountability (“who is accountable for wrong decisions?”) are some of the main topics XAI tries to address. To achieve explainability, quite a few techniques have been proposed in the recent literature. These approaches can be grouped according to different criteria, such as: i) model agnostic or model specific; ii) local, global or example based; iii) intrinsic or post-hoc; iv) perturbation or saliency based. Among them, model agnostic approaches are quite popular in practice, since the algorithm is designed to be effective on any type of ML model.
LIME is a well-known instance-based, model agnostic algorithm. The method generates data points, sampled from the training dataset distribution and weighted according to their distance from the instance being explained. Feature selection is applied to keep only the most important variables, and a linear model is trained on the weighted dataset. The model coefficients are regarded as the explanation. LIME has already been employed several times in medicine, for instance on Intensive Care data and cancer data.
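As a concrete illustration of these steps, the minimal sketch below obtains a LIME explanation for a single tabular prediction with the open-source lime Python package; the synthetic data and the random forest model are placeholders chosen only to make the snippet self-contained, not part of the paper's experiments.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

# Toy setup: any black-box regressor works; here a random forest on synthetic data.
X, y = make_regression(n_samples=500, n_features=6, noise=0.5, random_state=0)
feature_names = [f"x{i}" for i in range(X.shape[1])]
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, mode="regression",
    discretize_continuous=False,   # keep raw feature names in the output
)

# Explain one prediction: LIME samples points around X[0], weights them by
# distance, applies feature selection and fits a weighted linear model.
exp = explainer.explain_instance(X[0], model.predict, num_features=4)

print(exp.as_list())   # [(feature, coefficient), ...] - the explanation
print(exp.score)       # weighted R^2 of the local linear model
```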
The technique is known to suffer from instability, mainly caused by the randomness introduced in the sampling step. Stability is a desirable property for an interpretable model, whereas the lack of it reduces the trust in the retrieved explanations, especially in the medical field. In our contribution, we review the geometrical idea LIME is based on. Relying on statistical theory and simulations, we highlight a trade-off between the explanation's stability and adherence, namely how much LIME's simple model resembles the ML model. Exploiting our innovative discovery, we propose OptiLIME: a framework to maximise stability, while retaining a predefined level of adherence. OptiLIME provides both i) the freedom to choose the best adherence-stability trade-off level and ii) a clear account of the mathematical properties of the retrieved explanation. As a result, the practitioner is provided with tools to decide whether each explanation is reliable, according to the problem at hand.
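The trade-off can be observed empirically: re-running LIME several times on the same instance with different kernel widths, narrower kernels tend to give a higher local R^2 (adherence) but more variable coefficients across runs, while wider kernels do the opposite. The sketch below, which reuses the imports and the X, feature_names and model objects of the previous snippet, is only an illustration of this behaviour and not the OptiLIME procedure itself.

```python
def repeated_lime(kernel_width, n_runs=10):
    """Run LIME n_runs times on the same instance and summarise
    adherence (mean local R^2) and stability (std of each coefficient)."""
    explainer = LimeTabularExplainer(
        X, feature_names=feature_names, mode="regression",
        kernel_width=kernel_width, discretize_continuous=False,
    )
    coefs, r2 = [], []
    for _ in range(n_runs):
        exp = explainer.explain_instance(X[0], model.predict,
                                         num_features=len(feature_names))
        coefs.append(dict(exp.as_list()))
        r2.append(exp.score)
    spread = {f: np.std([c[f] for c in coefs]) for f in feature_names}
    return np.mean(r2), spread

# Narrow kernels: more adherent but less stable explanations; wide kernels: the reverse.
for kw in (0.5, 2.0, 5.0):
    adherence, spread = repeated_lime(kw)
    print(f"kernel_width={kw}: mean R^2={adherence:.3f}, "
          f"max coef std={max(spread.values()):.3f}")
```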
We test the validity of the framework on a medical dataset, where the method produces meaningful explanations both from a medical and a mathematical standpoint. In addition, a toy dataset is employed to present the geometrical findings visually. The code used for the experiments is available at https://github.com/giorgiovisani/LIME_stability.