General pitfalls of model-agnostic interpretation methods for machine learning models C Molnar, G König, J Herbinger, T Freiesleben, S Dandl, CA Scholbeck, ... Lecture Notes in Computer Science, 2022 | 223* | 2022 |
The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples T Freiesleben Minds and Machines, 2021 | 79* | 2021 |
Relating the partial dependence plot and permutation feature importance to the data generating process T Freiesleben*, C Molnar*, G König*, J Herbinger, T Reisinger, ... World Conference on Explainable Artificial Intelligence, 456-479, 2023 | 62 | 2023 |
Beyond generalization: a theory of robustness in machine learning T Freiesleben, T Grote Synthese, 2023 | 22 | 2023 |
A causal perspective on meaningful and robust algorithmic recourse G König, T Freiesleben, M Grosse-Wentrup ICML Workshop, 2021 | 22 | 2021 |
Improvement-focused causal recourse (ICR) G König, T Freiesleben, M Grosse-Wentrup AAAI Conference on Artificial Intelligence, 2023 | 18 | 2023 |
Scientific Inference with Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena T Freiesleben, G König, C Molnar, Á Tejero-Cantero Minds and Machines, 2024 | 17 | 2024 |
Dear XAI Community, We Need to Talk! Fundamental Misconceptions in Current XAI Research T Freiesleben, G König World Conference on Explainable Artificial Intelligence, 2023 | 15 | 2023 |
Artificial Neural Nets and the Representation of Human Concepts T Freiesleben arXiv preprint arXiv:2312.05337, 2023 | 1 | 2023 |
CountARFactuals – Generating plausible model-agnostic counterfactual explanations with adversarial random forests T Freiesleben*, S Dandl*, K Blesch*, G König*, J Kapar, B Bischl, ... World Conference on Explainable Artificial Intelligence, 2024 | | 2024 |
Supervised Machine Learning for Science: How to stop worrying and love your black box T Freiesleben*, C Molnar* https://ml-science-book.com/, 2024 | | 2024 |
What Does Explainable AI Explain? T Freiesleben Dissertation, LMU Munich, 2023 | | 2023 |