Authors
Gabriel Laberge, Yann Batiste Pequignot, Mario Marchand, Foutse Khomh
Publication date
2024/4/18
Conference
International Conference on Artificial Intelligence and Statistics
Pages
2017-2025
Publisher
PMLR
Description
The XAI Disagreement Problem concerns the fact that various explainability methods yield different local/global insights into model behavior. Thus, given the lack of ground truth in explainability, practitioners are left wondering "Which explanation should I believe?". In this work, we approach the Disagreement Problem from the point of view of Functional Decomposition (FD). First, we demonstrate that many XAI techniques disagree because they handle feature interactions differently. Second, we reduce interactions locally by fitting a so-called FD-Tree, which partitions the input space into regions where the model is approximately additive. Thus, instead of providing global explanations aggregated over the whole dataset, we advocate reporting the FD-Tree structure as well as the regional explanations extracted from its leaves. The beneficial effects of FD-Trees on the Disagreement Problem are demonstrated on toy and real datasets.