Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness?

A Jacovi, Y Goldberg - arXiv preprint arXiv:2004.03685, 2020 - arxiv.org
With the growing popularity of deep-learning based NLP models, comes a need for
interpretable systems. But what is interpretability, and what constitutes a high-quality …

[HTML] Explainable Artificial Intelligence-Based Decision Support Systems: A Recent Review

G Kostopoulos, G Davrazos, S Kotsiantis - Electronics, 2024 - mdpi.com
This survey article provides a comprehensive overview of the evolving landscape of
Explainable Artificial Intelligence (XAI) in Decision Support Systems (DSSs). As Artificial …

Explainable AI for text classification: Lessons from a comprehensive evaluation of post hoc methods

M Cesarini, L Malandri, F Pallucchini, A Seveso… - Cognitive …, 2024 - Springer
This paper addresses the notable gap in evaluating eXplainable Artificial Intelligence (XAI)
methods for text classification. While existing frameworks focus on assessing XAI in areas …

Analyzing and interpreting neural networks for NLP: A report on the first BlackboxNLP workshop

A Alishahi, G Chrupała, T Linzen - Natural Language Engineering, 2019 - cambridge.org
The Empirical Methods in Natural Language Processing (EMNLP) 2018 workshop
BlackboxNLP was dedicated to resources and techniques specifically developed for …

Global reconstruction of language models with linguistic rules–Explainable AI for online consumer reviews

M Binder, B Heinrich, M Hopf, A Schiller - Electronic Markets, 2022 - Springer
Analyzing textual data by means of AI models has been recognized as highly relevant in
information systems research and practice, since a vast amount of data on eCommerce …

BrightBox—a rough set based technology for diagnosing mistakes of machine learning models

A Janusz, A Zalewska, Ł Wawrowski, P Biczyk… - Applied Soft …, 2023 - Elsevier
The paper presents a novel approach to investigating mistakes in machine learning model
operations. The considered approach is the basis for BrightBox–a diagnostic technology that …

Towards explainable evaluation metrics for natural language generation

C Leiter, P Lertvittayakumjorn, M Fomicheva… - arXiv preprint arXiv …, 2022 - arxiv.org
Unlike classical lexical overlap metrics such as BLEU, most current evaluation metrics (such
as BERTScore or MoverScore) are based on black-box language models such as BERT or …

[BOOK] Explainable natural language processing

A Søgaard - 2021 - books.google.com
This book presents a taxonomy framework and survey of methods relevant to explaining the
decisions and analyzing the inner workings of Natural Language Processing (NLP) models …

Outlier Summarization via Human Interpretable Rules

Y Deng, Y Wang, L Cao, L Qiao, Y Wang, J Xu… - Proceedings of the …, 2024 - dl.acm.org
Outlier detection is crucial for preventing financial fraud, network intrusions, and device
failures. Users often expect systems to automatically summarize and interpret outlier …

Can metafeatures help improve explanations of prediction models when using behavioral and textual data?

Y Ramon, D Martens, T Evgeniou, S Praet - Machine Learning, 2024 - Springer
Abstract Machine learning models built on behavioral and textual data can result in highly
accurate prediction models, but are often very difficult to interpret. Linear models require …