Delivering trustworthy AI through formal XAI
J Marques-Silva, A Ignatiev - Proceedings of the AAAI Conference on …, 2022 - ojs.aaai.org
The deployment of systems of artificial intelligence (AI) in high-risk settings warrants the
need for trustworthy AI. This crucial requirement is highlighted by recent EU guidelines and …
On tackling explanation redundancy in decision trees
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models.
The interpretability of decision trees motivates explainability approaches by so-called …
Logic-based explainability in machine learning
J Marques-Silva - … Knowledge: 18th International Summer School 2022 …, 2023 - Springer
The last decade witnessed an ever-increasing stream of successes in Machine Learning
(ML). These successes offer clear evidence that ML is bound to become pervasive in a wide …
On computing probabilistic explanations for decision trees
Formal XAI (explainable AI) is a growing area that focuses on computing explanations with
mathematical guarantees for the decisions made by ML models. Inside formal XAI, one of …
Explanations for Monotonic Classifiers.
J Marques-Silva, T Gerspacher… - International …, 2021 - proceedings.mlr.press
In many classification tasks there is a requirement of monotonicity. Concretely, if all else
remains constant, increasing (resp. decreasing) the value of one or more features must not …
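The monotonicity requirement described in this snippet can be made concrete with a toy classifier. A minimal sketch, assuming a simple non-negative-weight threshold rule (all names and values here are illustrative, not from the paper):

```python
# Toy monotonic classifier: a threshold rule with non-negative weights,
# so increasing any feature value can never flip the prediction from 1 to 0.

def monotone_clf(x1: float, x2: float) -> int:
    """Predict 1 iff a weighted sum clears a threshold (weights >= 0)."""
    return int(0.7 * x1 + 0.3 * x2 >= 0.5)

def check_monotone_in_feature(clf, point, idx, bumps=(0.1, 0.5, 1.0)):
    """Empirically check: increasing feature `idx` must not decrease the output."""
    base = clf(*point)
    for b in bumps:
        bumped = list(point)
        bumped[idx] += b
        if clf(*bumped) < base:
            return False
    return True

print(check_monotone_in_feature(monotone_clf, (0.4, 0.6), 0))  # → True
```

For this linear rule with non-negative weights the check passes for every feature; a classifier violating the requirement would return False for some bump.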
Solving explainability queries with quantification: The case of feature relevancy
Trustable explanations of machine learning (ML) models are vital in high-risk uses of
artificial intelligence (AI). Apart from the computation of trustable explanations, a number of …
The inadequacy of Shapley values for explainability
X Huang, J Marques-Silva - arXiv preprint arXiv:2302.08160, 2023 - arxiv.org
This paper develops a rigorous argument for why the use of Shapley values in explainable
AI (XAI) will necessarily yield provably misleading information about the relative importance …
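For context on what this paper critiques: feature-attribution Shapley values can be computed exactly on toy Boolean classifiers by brute force. A sketch under the common assumption that unfixed features are drawn uniformly from {0,1} (the classifier and instance below are illustrative, not the paper's examples):

```python
from itertools import combinations, product
from math import factorial

def f(x):
    # Toy Boolean classifier: f = x0 OR (x1 AND x2)
    return int(x[0] or (x[1] and x[2]))

def value(S, instance, n=3):
    """Expected prediction with features in S fixed to the instance's
    values and the remaining features drawn uniformly from {0,1}."""
    free = [i for i in range(n) if i not in S]
    total = 0
    for bits in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, b in zip(free, bits):
            x[i] = b
        total += f(x)
    return total / (2 ** len(free))

def shapley(i, instance, n=3):
    """Exact Shapley value of feature i via the subset-weighted formula."""
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi += w * (value(set(S) | {i}, instance) - value(set(S), instance))
    return phi

print([round(shapley(i, [1, 0, 0]), 3) for i in range(3)])  # → [0.458, -0.042, -0.042]
```

Note that for the instance (1, 0, 0) the prediction is fully determined by x0 alone, yet x1 and x2 still receive nonzero (negative) attributions; discrepancies of this kind are the sort of behavior the paper analyzes formally.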
Tractable explanations for d-DNNF classifiers
Compilation into propositional languages finds a growing number of practical uses,
including in constraint programming, diagnosis and machine learning (ML), among others …
On efficiently explaining graph-based classifiers
Recent work has shown that decision trees (DTs) may not be interpretable, and proposed a polynomial-time algorithm for computing one PI-explanation of a DT. This paper …
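A PI-explanation (prime-implicant explanation) is a subset-minimal set of feature literals which, fixed to their values in the given instance, entail the prediction. A generic deletion-based sketch of this notion over a toy Boolean classifier (this is a brute-force illustration, not the polynomial-time DT algorithm the snippet refers to; classifier and instance are assumptions for the example):

```python
from itertools import product

def predict(x):
    # Toy Boolean classifier standing in for a DT: f = x0 AND (x1 OR x2)
    return int(x[0] and (x[1] or x[2]))

def entails(fixed, target, n=3):
    """Do all completions of the fixed literals yield the target prediction?"""
    free = [i for i in range(n) if i not in fixed]
    for bits in product([0, 1], repeat=len(free)):
        x = dict(fixed)
        x.update(zip(free, bits))
        if predict([x[i] for i in range(n)]) != target:
            return False
    return True

def pi_explanation(instance):
    """Deletion-based computation of one subset-minimal PI-explanation."""
    target = predict(instance)
    expl = dict(enumerate(instance))      # start with every feature fixed
    for i in list(expl):
        trial = {j: v for j, v in expl.items() if j != i}
        if entails(trial, target):        # feature i is not needed
            expl = trial
    return expl

print(pi_explanation([1, 1, 0]))  # → {0: 1, 1: 1}
```

Here x2 is dropped because x0 = 1 and x1 = 1 already force the prediction to 1 regardless of x2; the remaining literals form one PI-explanation.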
On the explanatory power of Boolean decision trees
Decision trees have long been recognized as models of choice in sensitive applications
where interpretability is of paramount importance. In this paper, we examine the …