Keep your friends close and your counterfactuals closer: Improved learning from closest rather than plausible counterfactual explanations in an abstract setting

U Kuhl, A Artelt, B Hammer - Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022 - dl.acm.org
Counterfactual explanations (CFEs) highlight changes to a model’s input that alter its prediction in a particular way. CFEs have gained considerable traction as a psychologically grounded solution for explainable artificial intelligence (XAI). Recent innovations introduce the notion of plausibility for automatically generated CFEs, enhancing their robustness by exclusively creating plausible explanations. However, the practical benefits of this constraint for user experience remain unclear. In this study, we evaluate the objective and subjective usability of plausible CFEs in an iterative learning task. We rely on a game-like experimental design revolving around an abstract scenario. Our results show that novice users benefit less from receiving plausible CFEs than from receiving closest CFEs, which induce minimal changes leading to the desired outcome. Responses in a post-game survey reveal no differences in subjective usability between the two groups. Following the view of psychological plausibility as comparative similarity, users in the closest condition may experience their CFEs as more psychologically plausible than the computationally plausible counterpart. In sum, our work highlights a little-considered divergence between definitions of computational plausibility and psychological plausibility, critically confirming the need to incorporate human behavior, preferences, and mental models already at the design stages of XAI. All source code and data of the current study are available at: https://github.com/ukuhl/PlausibleAlienZoo
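
The abstract's distinction between "closest" and plausible CFEs can be made concrete with a minimal sketch (this is not the paper's implementation or the AlienZoo code): for a linear classifier under L2 distance, the closest CFE is the input projected onto the decision hyperplane and nudged just past it, i.e. the smallest possible change that flips the prediction. The toy data, the function name `closest_cfe`, and the `margin` parameter below are illustrative assumptions.

```python
# Minimal sketch of a "closest" counterfactual explanation for a linear model.
# For a decision boundary w.x + b = 0, the smallest L2 change that flips the
# prediction is the projection of x onto the hyperplane, stepped slightly past it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           random_state=0)
clf = LogisticRegression().fit(X, y)

def closest_cfe(x, clf, margin=1e-3):
    """Return the minimally changed input that crosses the decision boundary."""
    w, b = clf.coef_[0], clf.intercept_[0]
    signed_dist = (w @ x + b) / (w @ w)   # signed distance scaled by ||w||^2
    return x - (1 + margin) * signed_dist * w  # cross the boundary by `margin`

x = X[0]
x_cf = closest_cfe(x, clf)
print("original prediction:      ", clf.predict([x])[0])
print("counterfactual prediction:", clf.predict([x_cf])[0])
print("L2 change:                ", np.linalg.norm(x_cf - x))
```

A plausible CFE, by contrast, would add a constraint keeping the counterfactual on the data manifold (e.g. near high-density regions of the training data), typically at the cost of a larger change than the projection above; the paper's finding is that this extra computational plausibility did not translate into better learning for novice users.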