Towards explainable evaluation metrics for machine translation

C Leiter, P Lertvittayakumjorn, M Fomicheva… - Journal of Machine …, 2024 - jmlr.org
Unlike classical lexical overlap metrics such as BLEU, most current evaluation metrics for
machine translation (for example, COMET or BERTScore) are based on black-box large …

Saliency map verbalization: Comparing feature importance representations from model-free and instruction-based methods

N Feldhus, L Hennig, MD Nasert, C Ebert… - arXiv preprint arXiv …, 2022 - arxiv.org
Saliency maps can explain a neural model's predictions by identifying important input
features. They are difficult to interpret for laypeople, especially for instances with many …

Attribution-based explanations that provide recourse cannot be robust

H Fokkema, R De Heide, T Van Erven - Journal of Machine Learning …, 2023 - jmlr.org
Different users of machine learning methods require different explanations, depending on
their goals. To make machine learning accountable to society, one important goal is to get …

Why we do need explainable AI for healthcare

G Cinà, T Röber, R Goedhart, I Birbil - arXiv preprint arXiv:2206.15363, 2022 - arxiv.org
The recent spike in certified Artificial Intelligence (AI) tools for healthcare has renewed the
debate around adoption of this technology. One thread of such debate concerns Explainable …

Mediators: Conversational agents explaining nlp model behavior

N Feldhus, AM Ravichandran, S Möller - arXiv preprint arXiv:2206.06029, 2022 - arxiv.org
The human-centric explainable artificial intelligence (HCXAI) community has raised the
need for framing the explanation process as a conversation between human and machine …

Rather a Nurse than a Physician – Contrastive Explanations under Investigation

O Eberle, I Chalkidis, L Cabello, S Brandl - arXiv preprint arXiv …, 2023 - arxiv.org
Contrastive explanations, where one decision is explained in contrast to another, are
supposed to be closer to how humans explain a decision than non-contrastive explanations …

InterroLang: Exploring NLP models and datasets through dialogue-based explanations

N Feldhus, Q Wang, T Anikina, S Chopra… - arXiv preprint arXiv …, 2023 - arxiv.org
While recently developed NLP explainability methods let us open the black box in various
ways (Madsen et al., 2022), a missing ingredient in this endeavor is an interactive tool …

XAINES: Explaining AI with narratives

M Hartmann, H Du, N Feldhus, I Kruijff-Korbayová… - KI-Künstliche …, 2022 - Springer
Artificial Intelligence (AI) systems are increasingly pervasive: Internet of Things, in-car
intelligent devices, robots, and virtual assistants, and their large-scale adoption makes it …

LLMCheckup: Conversational examination of large language models via interpretability tools

Q Wang, T Anikina, N Feldhus, J van Genabith… - arXiv preprint arXiv …, 2024 - arxiv.org
Interpretability tools that offer explanations in the form of a dialogue have demonstrated their
efficacy in enhancing users' understanding, as one-off explanations may occasionally fall …

Walking on Eggshells: Using Analogies to Promote Appropriate Reliance in Human-AI Decision Making

G He, U Gadiraju - Proceedings of the Workshop on Trust and …, 2022 - ujwalgadiraju.com
Although AI systems have proved to be powerful in supporting decision making in critical
domains, the underlying complexity and their poor explainability pose great challenges for …