Explainability in supply chain operational risk management: A systematic literature review

SF Nimmy, OK Hussain, RK Chakrabortty… - Knowledge-Based …, 2022 - Elsevier
It is important to manage operational disruptions to ensure the success of supply chain
operations. To achieve this aim, researchers have developed techniques that determine the …

AI fairness in data management and analytics: A review on challenges, methodologies and applications

P Chen, L Wu, L Wang - Applied Sciences, 2023 - mdpi.com
This article provides a comprehensive overview of the fairness issues in artificial intelligence
(AI) systems, delving into its background, definition, and development process. The article …

Causal machine learning: A survey and open problems

J Kaddour, A Lynch, Q Liu, MJ Kusner… - arXiv preprint arXiv …, 2022 - arxiv.org
Causal Machine Learning (CausalML) is an umbrella term for machine learning methods
that formalize the data-generation process as a structural causal model (SCM). This …

Towards faithful model explanation in NLP: A survey

Q Lyu, M Apidianaki, C Callison-Burch - Computational Linguistics, 2024 - direct.mit.edu
End-to-end neural Natural Language Processing (NLP) models are notoriously difficult to
understand. This has given rise to numerous efforts towards model explainability in recent …

Explaining NLP models via minimal contrastive editing (MiCE)

A Ross, A Marasović, ME Peters - arXiv preprint arXiv:2012.13985, 2020 - arxiv.org
Humans have been shown to give contrastive explanations, which explain why an observed
event happened rather than some other counterfactual event (the contrast case). Despite the …

Do models explain themselves? Counterfactual simulatability of natural language explanations

Y Chen, R Zhong, N Ri, C Zhao, H He… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) are trained to imitate humans to explain human decisions.
However, do LLMs explain themselves? Can they help humans build mental models of how …

Faithfulness tests for natural language explanations

P Atanasova, OM Camburu, C Lioma… - arXiv preprint arXiv …, 2023 - arxiv.org
Explanations of neural models aim to reveal a model's decision-making process for its
predictions. However, recent work shows that current methods giving explanations such as …

Identifying and mitigating spurious correlations for improving robustness in NLP models

T Wang, R Sridhar, D Yang, X Wang - arXiv preprint arXiv:2110.07736, 2021 - arxiv.org
Recently, NLP models have achieved remarkable progress across a variety of tasks;
however, they have also been criticized for being not robust. Many robustness problems can …

Interpreting language models with contrastive explanations

K Yin, G Neubig - arXiv preprint arXiv:2202.10419, 2022 - arxiv.org
Model interpretability methods are often used to explain NLP model decisions on tasks such
as text classification, where the output space is relatively small. However, when applied to …

Contrastive data and learning for natural language processing

R Zhang, Y Ji, Y Zhang… - Proceedings of the 2022 …, 2022 - aclanthology.org
Current NLP models heavily rely on effective representation learning algorithms. Contrastive
learning is one such technique to learn an embedding space such that similar data samples …