Logic-based explainability in machine learning

J Marques-Silva - … Knowledge: 18th International Summer School 2022 …, 2023 - Springer
The last decade has witnessed an ever-increasing stream of successes in Machine Learning
(ML). These successes offer clear evidence that ML is bound to become pervasive in a wide …

Solving explainability queries with quantification: The case of feature relevancy

X Huang, Y Izza, J Marques-Silva - … of the AAAI Conference on Artificial …, 2023 - ojs.aaai.org
Trustable explanations of machine learning (ML) models are vital in high-risk uses of
artificial intelligence (AI). Apart from the computation of trustable explanations, a number of …

On the explanatory power of Boolean decision trees

G Audemard, S Bellart, L Bounia, F Koriche… - Data & Knowledge …, 2022 - Elsevier
Decision trees have long been recognized as models of choice in sensitive applications
where interpretability is of paramount importance. In this paper, we examine the …

Axiomatic aggregations of abductive explanations

G Biradar, Y Izza, E Lobo, V Viswanathan… - Proceedings of the AAAI …, 2024 - ojs.aaai.org
The recent criticisms of the robustness of post hoc model approximation explanation
methods (like LIME and SHAP) have led to the rise of model-precise abductive explanations …
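
As background for this and several of the following entries: in the logic-based XAI literature, given a classifier $\kappa$ over feature set $F$ and an instance $\mathbf{v}$ with prediction $c = \kappa(\mathbf{v})$, a set $\mathcal{X} \subseteq F$ is a weak abductive explanation if

$$\forall \mathbf{x}.\ \Bigl(\bigwedge_{i \in \mathcal{X}} x_i = v_i\Bigr) \rightarrow \kappa(\mathbf{x}) = c,$$

and an abductive explanation (AXp) is a subset-minimal weak one. This is the standard definition, paraphrased here for context; it is not quoted from any of the abstracts listed.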

No silver bullet: interpretable ML models must be explained

J Marques-Silva, A Ignatiev - Frontiers in artificial intelligence, 2023 - frontiersin.org
Recent years have witnessed a number of proposals for the use of so-called interpretable
models in specific application domains. These include high-risk, but also safety-critical …

On the failings of Shapley values for explainability

X Huang, J Marques-Silva - International Journal of Approximate …, 2024 - Elsevier
Explainable Artificial Intelligence (XAI) is widely considered to be critical for building
trust into the deployment of systems that integrate the use of machine learning (ML) models …
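
For context, the attribution scheme criticised here is the Shapley value; in its usual XAI instantiation (the textbook definition, not text from this paper), feature $i$ of the explained instance receives

$$\phi_i \;=\; \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F|-|S|-1)!}{|F|!}\,\bigl(\nu(S \cup \{i\}) - \nu(S)\bigr),$$

where $\nu(S)$ is a characteristic function, e.g. the expected model output when the features in $S$ are fixed to their values in the instance.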

Computing abductive explanations for boosted trees

G Audemard, JM Lagniez, P Marquis… - International …, 2023 - proceedings.mlr.press
Boosted trees are a dominant ML model, exhibiting high accuracy. However, boosted trees
are hardly intelligible, and this is a problem whenever they are used in safety-critical …
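
The snippet above does not describe the paper's algorithm, but a minimal sketch of how abductive explanations are commonly computed with an oracle is given below. The `entails` callback is hypothetical and stands in for some encoding of the model (e.g. SAT/SMT/MILP for boosted trees); this is a generic sketch, not the method of the paper above.

    from typing import Callable, Iterable, Set

    def compute_axp(features: Iterable[int],
                    entails: Callable[[Set[int]], bool]) -> Set[int]:
        """Deletion-based extraction of one abductive explanation (AXp).

        `entails(fixed)` must return True iff fixing the features in `fixed`
        to their values in the explained instance forces the classifier's
        prediction on every completion of the remaining features.
        """
        candidate = set(features)
        assert entails(candidate), "the full instance must entail its prediction"
        for f in sorted(candidate):
            trial = candidate - {f}
            if entails(trial):      # f is not needed to force the prediction
                candidate = trial   # drop it and keep shrinking
        return candidate            # subset-minimal by construction

The loop makes one oracle call per feature, the linear bound typically cited in this line of work.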

Logic-based explainability: past, present and future

J Marques-Silva - International Symposium on Leveraging Applications of …, 2024 - Springer
In recent years, the impact of machine learning (ML) and artificial intelligence (AI) in society
has been absolutely remarkable. This impact is expected to continue in the foreseeable …

A uniform language to explain decision trees

M Arenas, P Barceló, D Bustamante… - Proceedings of the …, 2024 - proceedings.kr.org
The formal XAI community has studied a plethora of interpretability queries aiming to
understand the classifications made by decision trees. However, a more uniform …

Abductive explanations of classifiers under constraints: Complexity and properties

M Cooper, L Amgoud - arXiv preprint arXiv:2409.12154, 2024 - arxiv.org
Abductive explanations (AXp's) are widely used for understanding decisions of classifiers.
Existing definitions are suitable when features are independent. However, we show that …
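
One natural reading of "under constraints" (an assumption based on this abstract, not the paper's exact definitions) is to restrict the universal quantification in the AXp condition above to the feasible points of a constraint theory $C$ over the features:

$$\forall \mathbf{x}.\ \Bigl(C(\mathbf{x}) \wedge \bigwedge_{i \in \mathcal{X}} x_i = v_i\Bigr) \rightarrow \kappa(\mathbf{x}) = c,$$

which only differs from the unconstrained definition when the features are not independent.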