Machine learning interpretability: A survey on methods and metrics

DV Carvalho, EM Pereira, JS Cardoso - Electronics, 2019 - mdpi.com
Machine learning systems are becoming increasingly ubiquitous. These systems' adoption
has been expanding, accelerating the shift towards a more algorithmic society, meaning that …

Does the whole exceed its parts? The effect of AI explanations on complementary team performance

G Bansal, T Wu, J Zhou, R Fok, B Nushi… - Proceedings of the …, 2021 - dl.acm.org
Many researchers motivate explainable AI with studies showing that human-AI team
performance on decision-making tasks improves when the AI explains its recommendations …

Peeking inside the black-box: a survey on explainable artificial intelligence (XAI)

A Adadi, M Berrada - IEEE Access, 2018 - ieeexplore.ieee.org
At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread
adoption of artificial intelligence (AI) in our daily life, which contributes to accelerating the …

Is there a trade-off between fairness and accuracy? a perspective using mismatched hypothesis testing

S Dutta, D Wei, H Yueksel, PY Chen… - International …, 2020 - proceedings.mlr.press
A trade-off between accuracy and fairness is almost taken as a given in the existing literature
on fairness in machine learning. Yet, it is not preordained that accuracy should decrease …

Is Our Continual Learner Reliable? Investigating Its Decision Attribution Stability through SHAP Value Consistency

Y Cai, S Ling, L Zhang, L Pan… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
In this work, we identify continual learning (CL) methods' inherent differences in sequential
decision attribution. In the sequential learning process, inconsistent decision attribution may …

Globally optimal score-based learning of directed acyclic graphs in high-dimensions

B Aragam, A Amini, Q Zhou - Advances in Neural …, 2019 - proceedings.neurips.cc
We prove that $\Omega(s \log p)$ samples suffice to learn a sparse Gaussian directed
acyclic graph (DAG) from data, where $s$ is the maximum Markov blanket size. This …

A study on labeling network hostile behavior with intelligent interactive tools

JL Guerra, E Veas, CA Catania - 2019 IEEE Symposium on …, 2019 - ieeexplore.ieee.org
Labeling a real network dataset is especially expensive in computer security, as an expert
has to ponder several factors before assigning each label. This paper describes an …

On mismatched detection and safe, trustworthy machine learning

KR Varshney - 2020 54th Annual Conference on Information …, 2020 - ieeexplore.ieee.org
Instilling trust in high-stakes applications of machine learning is becoming essential. Trust
may be decomposed into four dimensions: basic accuracy, reliability, human interaction, and …

Enhancing simple models by exploiting what they already know

A Dhurandhar, K Shanmugam… - … Conference on Machine …, 2020 - proceedings.mlr.press
There has been recent interest in improving performance of simple models for multiple
reasons such as interpretability, robust learning from small data, deployment in memory …

Interpretable deep learning for monitoring combustion instability

T Gangopadhyay, SY Tan, A LoCurto, JB Michael… - IFAC-PapersOnLine, 2020 - Elsevier
Transitions from stable to unstable states occurring in dynamical systems can be sudden,
leading to catastrophic failure and huge revenue loss. For detecting these transitions during …