Uncovering expression signatures of synergistic drug responses via ensembles of explainable machine-learning models

JD Janizek, AB Dincer, S Celik, H Chen… - Nature biomedical …, 2023 - nature.com
Abstract Machine learning may aid the choice of optimal combinations of anticancer drugs
by explaining the molecular basis of their synergy. By combining accurate models with …

Faith-Shap: The faithful Shapley interaction index

CP Tsai, CK Yeh, P Ravikumar - Journal of Machine Learning Research, 2023 - jmlr.org
Shapley values, which were originally designed to assign attributions to individual players in
coalition games, have become a commonly used approach in explainable machine learning …

Counterfactual shapley additive explanations

E Albini, J Long, D Dervovic, D Magazzeni - Proceedings of the 2022 …, 2022 - dl.acm.org
Feature attributions are a common paradigm for model explanations due to their simplicity in
assigning a single numeric score for each input feature to a model. In the actionable …

Relating the partial dependence plot and permutation feature importance to the data generating process

C Molnar, T Freiesleben, G König, J Herbinger… - World Conference on …, 2023 - Springer
Scientists and practitioners increasingly rely on machine learning to model data and draw
conclusions. Compared to statistical modeling approaches, machine learning makes fewer …

Synthetic benchmarks for scientific research in explainable machine learning

Y Liu, S Khandagale, C White… - arXiv preprint arXiv …, 2021 - arxiv.org
As machine learning models grow more complex and their applications become more high-
stakes, tools for explaining model predictions have become increasingly important. This has …

Revealing drivers and risks for power grid frequency stability with explainable AI

J Kruse, B Schäfer, D Witthaut - Patterns, 2021 - cell.com
Stable operation of an electric power system requires strict operational limits for the grid
frequency. Fluctuations and external impacts can cause large frequency deviations and …

Model-agnostic feature importance and effects with dependent features: a conditional subgroup approach

C Molnar, G König, B Bischl, G Casalicchio - Data Mining and Knowledge …, 2023 - Springer
The interpretation of feature importance in machine learning models is challenging when
features are dependent. Permutation feature importance (PFI) ignores such dependencies …

From Shapley values to generalized additive models and back

S Bordt, U von Luxburg - International Conference on …, 2023 - proceedings.mlr.press
In explainable machine learning, local post-hoc explanation algorithms and inherently
interpretable models are often seen as competing approaches. This work offers a partial …

Understanding electricity prices beyond the merit order principle using explainable AI

J Trebbien, LR Gorjão, A Praktiknjo, B Schäfer… - Energy and AI, 2023 - Elsevier
Electricity prices in liberalized markets are determined by the supply and demand for electric
power, which are in turn driven by various external influences that vary strongly in time. In …

Fast TreeSHAP: Accelerating SHAP value computation for trees

J Yang - arXiv preprint arXiv:2109.09847, 2021 - arxiv.org
SHAP (SHapley Additive exPlanation) values are one of the leading tools for interpreting
machine learning models, with strong theoretical guarantees (consistency, local accuracy) …
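Several of the entries above concern Shapley-value attributions and their guarantees, including the "local accuracy" property mentioned in the last snippet. As a minimal, self-contained sketch (not the method of any one cited paper), the exact Shapley values of a toy model can be computed by brute-force coalition enumeration; the helper names and the baseline convention here are illustrative assumptions:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x by enumerating all
    feature coalitions. Features outside a coalition are replaced by
    their baseline value (a simple interventional convention)."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):  # coalition sizes 0 .. n-1
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                with_i = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in S else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy model with an interaction term between features 0 and 1.
f = lambda z: 2 * z[0] + z[1] + z[0] * z[1]
phi = shapley_values(f, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
# Local accuracy: the attributions sum to f(x) - f(baseline).
```

For this toy model the interaction term is split evenly between features 0 and 1, so phi is [2.5, 1.5, 0.0] and the attributions sum to f(x) - f(baseline) = 4.0. The enumeration is exponential in the number of features, which is exactly the cost that tree-specific algorithms such as TreeSHAP avoid.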