Increasing transparency in machine learning through bootstrap simulation and shapely additive explanations
AA Huang, SY Huang - PLoS One, 2023 - journals.plos.org
Machine learning methods are widely used within the medical field. However, the reliability
and efficacy of these models are difficult to assess, making it hard for researchers to identify …
Generalized SHAP: Generating multiple types of explanations in machine learning
Many important questions about a model cannot be answered just by explaining how much
each feature contributes to its output. To answer a broader set of questions, we generalize a …
Investigating the impact of calibration on the quality of explanations
Predictive models used in Decision Support Systems (DSS) are often requested to explain
the reasoning to users. Explanations of instances consist of two parts: the predicted label …
Computation of the distribution of model accuracy statistics in machine learning: comparison between analytically derived distributions and simulation‐based methods
AA Huang, SY Huang - Health science reports, 2023 - Wiley Online Library
Abstract Background and Aims All fields have seen an increase in machine‐learning
techniques. To accurately evaluate the efficacy of novel modeling methods, it is necessary to …
Individual explanations in machine learning models: A survey for practitioners
A Carrillo, LF Cantú, A Noriega - arXiv preprint arXiv:2104.04144, 2021 - arxiv.org
In recent years, the use of sophisticated statistical models that influence decisions in
domains of high societal relevance is on the rise. Although these models can often bring …
Can local explanation techniques explain linear additive models?
Local model-agnostic additive explanation techniques decompose the predicted output of a
black-box model into additive feature importance scores. Questions have been raised about …
Measurable counterfactual local explanations for any classifier
A White, A d'Avila Garcez - ECAI 2020, 2020 - ebooks.iospress.nl
We propose a novel method for explaining the predictions of any classifier. In our approach,
local explanations are expected to explain both the outcome of a prediction and how that …
Evaluating and aggregating feature-based model explanations
A feature-based model explanation denotes how much each input feature contributes to a
model's output for a given data point. As the number of proposed explanation functions …
An empirical study of the effect of background data size on the stability of SHapley Additive exPlanations (SHAP) for deep learning models
Nowadays, the interpretation of why a machine learning (ML) model makes certain
inferences is as crucial as the accuracy of such inferences. Some ML models like the …
Shapely additive values can effectively visualize pertinent covariates in machine learning when predicting hypertension
AA Huang, SY Huang - The Journal of Clinical Hypertension, 2023 - Wiley Online Library
Abstract Machine learning methods are widely used within the medical field to enhance
prediction. However, little is known about the reliability and efficacy of these models to …