Adversarial attacks and defenses in explainable artificial intelligence: A survey
H Baniecki, P Biecek - Information Fusion, 2024 - Elsevier
Explainable artificial intelligence (XAI) methods are portrayed as a remedy for debugging
and trusting statistical and deep learning models, as well as interpreting their predictions …
What does a platypus look like? Generating customized prompts for zero-shot image classification
Open-vocabulary models are a promising new paradigm for image classification. Unlike
traditional classification models, open-vocabulary models classify among any arbitrary set of …
Explainability for large language models: A survey
Large language models (LLMs) have demonstrated impressive capabilities in natural
language processing. However, their internal mechanisms are still unclear and this lack of …
Data-driven insight into the reductive stability of ion–solvent complexes in lithium battery electrolytes
Lithium (Li) metal batteries (LMBs) are regarded as one of the most promising energy
storage systems due to their ultrahigh theoretical energy density. However, the high …
Rethinking interpretability in the era of large language models
Interpretable machine learning has exploded as an area of interest over the last decade,
sparked by the rise of increasingly large datasets and deep neural networks …
Transitioning From Federated Learning to Quantum Federated Learning in Internet of Things: A Comprehensive Survey
Quantum Federated Learning (QFL) has recently emerged as a promising approach with the
potential to revolutionize Machine Learning (ML). It merges the established strengths of …
Learning to estimate Shapley values with vision transformers
Transformers have become a default architecture in computer vision, but understanding
what drives their predictions remains a challenging problem. Current explanation …
Explainability meets uncertainty quantification: Insights from feature-based model fusion on multimodal time series
Feature importance evaluation is one of the prevalent approaches to interpreting Machine
Learning (ML) models. A drawback of using these methods for high-dimensional datasets is …
SHAP-IQ: Unified approximation of any-order Shapley interactions
Predominantly in explainable artificial intelligence (XAI) research, the Shapley value (SV) is
applied to determine feature attributions for any black box model. Shapley interaction …
On the robustness of removal-based feature attributions
To explain predictions made by complex machine learning models, many feature attribution
methods have been developed that assign importance scores to input features. Some recent …