Uncovering expression signatures of synergistic drug responses via ensembles of explainable machine-learning models
Machine learning may aid the choice of optimal combinations of anticancer drugs
by explaining the molecular basis of their synergy. By combining accurate models with …
Faith-Shap: the faithful Shapley interaction index
Shapley values, which were originally designed to assign attributions to individual players in
coalition games, have become a commonly used approach in explainable machine learning …
Counterfactual Shapley additive explanations
Feature attributions are a common paradigm for model explanations due to their simplicity in
assigning a single numeric score for each input feature to a model. In the actionable …
Relating the partial dependence plot and permutation feature importance to the data generating process
Scientists and practitioners increasingly rely on machine learning to model data and draw
conclusions. Compared to statistical modeling approaches, machine learning makes fewer …
Synthetic benchmarks for scientific research in explainable machine learning
As machine learning models grow more complex and their applications become more high-
stakes, tools for explaining model predictions have become increasingly important. This has …
Revealing drivers and risks for power grid frequency stability with explainable AI
Stable operation of an electric power system requires strict operational limits for the grid
frequency. Fluctuations and external impacts can cause large frequency deviations and …
Model-agnostic feature importance and effects with dependent features: a conditional subgroup approach
The interpretation of feature importance in machine learning models is challenging when
features are dependent. Permutation feature importance (PFI) ignores such dependencies …
From Shapley values to generalized additive models and back
S Bordt, U von Luxburg - International Conference on …, 2023 - proceedings.mlr.press
In explainable machine learning, local post-hoc explanation algorithms and inherently
interpretable models are often seen as competing approaches. This work offers a partial …
Understanding electricity prices beyond the merit order principle using explainable AI
Electricity prices in liberalized markets are determined by the supply and demand for electric
power, which are in turn driven by various external influences that vary strongly in time. In …
Fast TreeSHAP: accelerating SHAP value computation for trees
J Yang - arXiv preprint arXiv:2109.09847, 2021 - arxiv.org
SHAP (SHapley Additive exPlanation) values are one of the leading tools for interpreting
machine learning models, with strong theoretical guarantees (consistency, local accuracy) …