GLIME: general, stable and local LIME explanation

Z Tan, Y Tian, J Li - Advances in Neural Information …, 2024 - proceedings.neurips.cc
As black-box machine learning models become more complex and are applied in high-
stakes settings, the need for providing explanations for their predictions becomes crucial …

Global aggregations of local explanations for black box models

I Van Der Linden, H Haned, E Kanoulas - arXiv preprint arXiv:1907.03039, 2019 - arxiv.org
The decision-making process of many state-of-the-art machine learning models is inherently
inscrutable to the extent that it is impossible for a human to interpret the model directly: they …

Reliable post hoc explanations: Modeling uncertainty in explainability

D Slack, A Hilgard, S Singh… - Advances in Neural …, 2021 - proceedings.neurips.cc
As black box explanations are increasingly being employed to establish model credibility in
high stakes settings, it is important to ensure that these explanations are accurate and …

XAI-TRIS: Non-linear benchmarks to quantify ML explanation performance

B Clark, R Wilming, S Haufe - arXiv preprint arXiv:2306.12816, 2023 - arxiv.org
The field of 'explainable' artificial intelligence (XAI) has produced highly cited methods that
seek to make the decisions of complex machine learning (ML) methods 'understandable' to …

CLEVR-XAI: A benchmark dataset for the ground truth evaluation of neural network explanations

L Arras, A Osman, W Samek - Information Fusion, 2022 - Elsevier
The rise of deep learning in today's applications has entailed an increasing need to explain
the model's decisions beyond prediction performance in order to foster trust and …

MeLIME: meaningful local explanation for machine learning models

T Botari, F Hvilshøj, R Izbicki… - arXiv preprint arXiv …, 2020 - arxiv.org
Most state-of-the-art machine learning algorithms induce black-box models, preventing their
application in many sensitive domains. Hence, many methodologies for explaining machine …

" Why should you trust my explanation?" understanding uncertainty in LIME explanations

Y Zhang, K Song, Y Sun, S Tan, M Udell - arXiv preprint arXiv:1904.12991, 2019 - arxiv.org
Methods for interpreting machine learning black-box models increase the outcomes'
transparency and in turn generate insight into the reliability and fairness of the algorithms …

s-LIME: Reconciling Locality and Fidelity in Linear Explanations

R Gaudel, L Galárraga, J Delaunay, L Rozé… - … on Intelligent Data …, 2022 - Springer
The benefit of locality is one of the major premises of LIME, one of the most prominent
methods to explain black-box machine learning models. This emphasis relies on the …

GLocalX: from local to global explanations of black box AI models

M Setzu, R Guidotti, A Monreale, F Turini… - Artificial Intelligence, 2021 - Elsevier
Artificial Intelligence (AI) has come to prominence as one of the major components of our
society, with applications in most aspects of our lives. In this field, complex and highly …

Trade-off between efficiency and consistency for removal-based explanations

Y Zhang, H He, Z Tan, Y Yuan - Advances in Neural …, 2024 - proceedings.neurips.cc
In the current landscape of explanation methodologies, the predominant approaches, such
as SHAP and LIME, employ removal-based techniques to evaluate the impact of individual …