Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models
Advances in Neural Information Processing Systems, 2020 · proceedings.neurips.cc
Abstract
Shapley values underlie one of the most popular model-agnostic methods within explainable artificial intelligence. These values are designed to attribute the difference between a model's prediction and an average baseline to the different features used as input to the model. Being based on solid game-theoretic principles, Shapley values uniquely satisfy several desirable properties, which is why they are increasingly used to explain the predictions of possibly complex and highly non-linear machine learning models. Shapley values are well calibrated to a user’s intuition when features are independent, but may lead to undesirable, counterintuitive explanations when the independence assumption is violated.
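To make the attribution idea concrete, the following is a minimal sketch of standard (marginal) Shapley values for a hypothetical toy model, computed exactly by enumerating all feature coalitions. Absent features are replaced by a fixed baseline, which is precisely the independence-style value function whose counterintuitive behavior under dependent features motivates the causal variant proposed in the paper; the model, function names, and baseline here are illustrative, not the paper's implementation.

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical non-linear model of three features.
    return x[0] + 2 * x[1] * x[2]

def shapley_values(model, x, baseline):
    """Exact Shapley values via enumeration of all coalitions.

    Features outside a coalition S are set to their baseline value,
    i.e. the 'marginal' value function that implicitly assumes
    feature independence.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)

# Efficiency property: attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (model(x) - model(baseline))) < 1e-9
```

Note how the interaction term `2 * x[1] * x[2]` is split evenly between features 1 and 2 by symmetry; the causal approach described in the paper would instead replace the baseline substitution with interventional expectations derived from a causal model of the features.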