Adversarial attacks and defenses in explainable artificial intelligence: A survey

H Baniecki, P Biecek - Information Fusion, 2024 - Elsevier
Explainable artificial intelligence (XAI) methods are portrayed as a remedy for debugging
and trusting statistical and deep learning models, as well as interpreting their predictions …

What does a platypus look like? Generating customized prompts for zero-shot image classification

S Pratt, I Covert, R Liu… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Open-vocabulary models are a promising new paradigm for image classification. Unlike
traditional classification models, open-vocabulary models classify among any arbitrary set of …
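As a concrete illustration of the open-vocabulary setup (not the paper's pipeline, which generates class-specific prompts with a large language model), the sketch below performs zero-shot classification with CLIP via HuggingFace transformers; the checkpoint name, labels, image path, and hand-written prompt template are illustrative assumptions:

```python
# Minimal zero-shot classification sketch with an open-vocabulary model.
# The prompt ensemble here is hand-written; the paper instead generates
# class-specific prompts with an LLM.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["platypus", "beaver", "otter"]  # any arbitrary label set
prompts = [f"a photo of a {c}, an animal" for c in labels]

image = Image.open("example.jpg")  # illustrative path
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
# One similarity score per (image, prompt) pair; softmax over prompts.
probs = outputs.logits_per_image.softmax(dim=-1)
print(labels[probs.argmax().item()])
```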

Explainability for large language models: A survey

H Zhao, H Chen, F Yang, N Liu, H Deng, H Cai… - ACM Transactions on …, 2024 - dl.acm.org
Large language models (LLMs) have demonstrated impressive capabilities in natural
language processing. However, their internal mechanisms are still unclear, and this lack of …

Data-driven insight into the reductive stability of ion–solvent complexes in lithium battery electrolytes

YC Gao, N Yao, X Chen, L Yu, R Zhang… - Journal of the …, 2023 - ACS Publications
Lithium (Li) metal batteries (LMBs) are regarded as one of the most promising energy
storage systems due to their ultrahigh theoretical energy density. However, the high …

Rethinking interpretability in the era of large language models

C Singh, JP Inala, M Galley, R Caruana… - arXiv preprint arXiv …, 2024 - arxiv.org
Interpretable machine learning has exploded as an area of interest over the last decade,
sparked by the rise of increasingly large datasets and deep neural networks …

Transitioning From Federated Learning to Quantum Federated Learning in Internet of Things: A Comprehensive Survey

C Qiao, M Li, Y Liu, Z Tian - IEEE Communications Surveys & …, 2024 - ieeexplore.ieee.org
Quantum Federated Learning (QFL) has recently emerged as a promising approach with the
potential to revolutionize Machine Learning (ML). It merges the established strengths of …

Learning to estimate Shapley values with vision transformers

I Covert, C Kim, SI Lee - arXiv preprint arXiv:2206.05282, 2022 - arxiv.org
Transformers have become a default architecture in computer vision, but understanding
what drives their predictions remains a challenging problem. Current explanation …
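For contrast with the learned (amortized) estimator proposed in the paper, a generic Monte Carlo baseline for Shapley value estimation looks like the sketch below; `value_fn`, which scores a boolean mask of retained features, and the sample count are illustrative assumptions:

```python
# Generic Monte Carlo Shapley value estimator for a single input.
# Illustrative baseline only, not the paper's learned estimator.
import numpy as np

def shapley_values(value_fn, n_features, n_samples=1000, rng=None):
    """Average marginal contributions over random feature orderings.
    value_fn maps a boolean mask of present features to a scalar output."""
    rng = np.random.default_rng(rng)
    phi = np.zeros(n_features)
    for _ in range(n_samples):
        order = rng.permutation(n_features)
        mask = np.zeros(n_features, dtype=bool)
        prev = value_fn(mask)
        for i in order:
            mask[i] = True
            curr = value_fn(mask)
            phi[i] += curr - prev  # marginal contribution of feature i
            prev = curr
    return phi / n_samples
```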

Explainability meets uncertainty quantification: Insights from feature-based model fusion on multimodal time series

D Folgado, M Barandas, L Famiglini, R Santos… - Information …, 2023 - Elsevier
Feature importance evaluation is one of the prevalent approaches to interpreting Machine
Learning (ML) models. A drawback of using these methods for high-dimensional datasets is …
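One simple way such feature-importance scores can be paired with an uncertainty estimate (a sketch under assumed data and model choices, not the paper's method) is permutation importance with repeated shuffles, reporting the spread across repeats:

```python
# Permutation feature importance with a spread estimate across repeats.
# Synthetic data, model, and repeat count are illustrative choices.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"± {result.importances_std[i]:.3f}")
```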

SHAP-IQ: Unified approximation of any-order Shapley interactions

F Fumagalli, M Muschalik, P Kolpaczki… - Advances in …, 2024 - proceedings.neurips.cc
Predominantly in explainable artificial intelligence (XAI) research, the Shapley value (SV) is
applied to determine feature attributions for any black box model. Shapley interaction …
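For reference, the quantities being approximated here are standard; assuming the usual Grabisch-Roubens formulation, the Shapley value and the pairwise Shapley interaction index for a set function $v$ on $N = \{1, \dots, n\}$ are:

```latex
% Shapley value of feature i
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
    \frac{|S|!\,(n-|S|-1)!}{n!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr)

% Pairwise Shapley interaction index for features i and j
I_{ij}(v) = \sum_{S \subseteq N \setminus \{i,j\}}
    \frac{|S|!\,(n-|S|-2)!}{(n-1)!}\,
    \bigl(v(S \cup \{i,j\}) - v(S \cup \{i\}) - v(S \cup \{j\}) + v(S)\bigr)
```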

On the robustness of removal-based feature attributions

C Lin, I Covert, SI Lee - Advances in Neural Information …, 2024 - proceedings.neurips.cc
To explain predictions made by complex machine learning models, many feature attribution
methods have been developed that assign importance scores to input features. Some recent …
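A minimal example of the removal-based family studied here is leave-one-out occlusion with a fixed baseline; the function names and baseline choice below are illustrative assumptions, not the paper's setup:

```python
# Leave-one-out occlusion: score each feature by the output drop when
# it is replaced with a baseline value. Illustrative sketch only.
import numpy as np

def occlusion_attributions(model_fn, x, baseline):
    """model_fn maps a 1-D feature array to a scalar prediction;
    baseline supplies the replacement value for a removed feature."""
    full = model_fn(x)
    scores = np.empty(len(x), dtype=float)
    for i in range(len(x)):
        x_removed = x.copy()
        x_removed[i] = baseline[i]  # "remove" feature i
        scores[i] = full - model_fn(x_removed)
    return scores
```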