A path to simpler models starts with noise
L Semenova, H Chen, R Parr… - Advances in neural …, 2024 - proceedings.neurips.cc
The Rashomon set is the set of models that perform approximately equally well on a given
dataset, and the Rashomon ratio is the fraction of all models in a given hypothesis space …
Beyond Average Performance – exploring regions of deviating performance for black box classification models
Machine learning models are becoming increasingly popular in different types of settings.
This is mainly caused by their ability to achieve a level of predictive performance that is hard …
On the existence of simpler machine learning models
It is almost always easier to find an accurate-but-complex model than an accurate-yet-simple
model. Finding optimal, sparse, accurate models of various forms (linear models with integer …
Exploring the whole rashomon set of sparse decision trees
In any given machine learning problem, there may be many models that could explain the
data almost equally well. However, most learning algorithms return only one of these …
Position: Amazing Things Come From Having Many Good Models
The *Rashomon Effect*, coined by Leo Breiman, describes the phenomenon that there exist
many equally good predictive models for the same dataset. This phenomenon happens for …
Exploration of Rashomon set assists explanations for medical data
K Kobylińska, M Krzyziński, R Machowicz… - arXiv preprint arXiv …, 2023 - arxiv.org
The machine learning modeling process conventionally culminates in selecting a single
model that maximizes a selected performance metric. However, this approach leads to …
Enhancing simple models by exploiting what they already know
A Dhurandhar, K Shanmugam… - … Conference on Machine …, 2020 - proceedings.mlr.press
There has been recent interest in improving performance of simple models for multiple
reasons such as interpretability, robust learning from small data, deployment in memory …
xgems: Generating examplars to explain black-box models
This work proposes xGEMs or manifold guided exemplars, a framework to understand black-
box classifier behavior by exploring the landscape of the underlying data manifold as data …
Neural Networks Are Implicit Decision Trees: The Hierarchical Simplicity Bias
Z Du - arXiv preprint arXiv:2311.02622, 2023 - arxiv.org
Neural networks exhibit simplicity bias; they rely on simpler features while ignoring equally
predictive but more complex features. In this work, we introduce a novel approach termed …
NLS: an accurate and yet easy-to-interpret regression method
An important feature of successful supervised machine learning applications is to be able to
explain the predictions given by the regression or classification model being used. However …