A path to simpler models starts with noise

L Semenova, H Chen, R Parr… - Advances in neural …, 2024 - proceedings.neurips.cc
The Rashomon set is the set of models that perform approximately equally well on a given
dataset, and the Rashomon ratio is the fraction of all models in a given hypothesis space …
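
A quick formal sketch of the two quantities named in this snippet, in notation assumed here rather than taken verbatim from the paper: given a hypothesis space \mathcal{F}, an empirical loss \hat{L}, a reference (e.g. empirically best) model \hat{f}, and a tolerance \varepsilon > 0,

\[
R_{\varepsilon} = \{\, f \in \mathcal{F} : \hat{L}(f) \le \hat{L}(\hat{f}) + \varepsilon \,\},
\qquad
\text{Rashomon ratio} = \frac{|R_{\varepsilon}|}{|\mathcal{F}|},
\]

where |\cdot| is a count for finite hypothesis spaces and is read as a volume otherwise.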

Beyond Average Performance: exploring regions of deviating performance for black box classification models

L Torgo, P Azevedo, I Areosa - arXiv preprint arXiv:2109.08216, 2021 - arxiv.org
Machine learning models are becoming increasingly popular in different types of settings.
This is mainly caused by their ability to achieve a level of predictive performance that is hard …

On the existence of simpler machine learning models

L Semenova, C Rudin, R Parr - … of the 2022 ACM Conference on …, 2022 - dl.acm.org
It is almost always easier to find an accurate-but-complex model than an accurate-yet-simple
model. Finding optimal, sparse, accurate models of various forms (linear models with integer …

Exploring the whole rashomon set of sparse decision trees

R Xin, C Zhong, Z Chen, T Takagi… - Advances in neural …, 2022 - proceedings.neurips.cc
In any given machine learning problem, there may be many models that could explain the
data almost equally well. However, most learning algorithms return only one of these …

Position: Amazing Things Come From Having Many Good Models

C Rudin, C Zhong, L Semenova, M Seltzer… - Forty-first International … - openreview.net
The Rashomon Effect, coined by Leo Breiman, describes the phenomenon that there exist
many equally good predictive models for the same dataset. This phenomenon happens for …

Exploration of Rashomon set assists explanations for medical data

K Kobylińska, M Krzyziński, R Machowicz… - arXiv preprint arXiv …, 2023 - arxiv.org
The machine learning modeling process conventionally culminates in selecting a single
model that maximizes a selected performance metric. However, this approach leads to …

Enhancing simple models by exploiting what they already know

A Dhurandhar, K Shanmugam… - … Conference on Machine …, 2020 - proceedings.mlr.press
There has been recent interest in improving performance of simple models for multiple
reasons such as interpretability, robust learning from small data, deployment in memory …

xGEMs: Generating examplars to explain black-box models

S Joshi, O Koyejo, B Kim, J Ghosh - arXiv preprint arXiv:1806.08867, 2018 - arxiv.org
This work proposes xGEMs or manifold guided exemplars, a framework to understand black-
box classifier behavior by exploring the landscape of the underlying data manifold as data …

Neural Networks Are Implicit Decision Trees: The Hierarchical Simplicity Bias

Z Du - arXiv preprint arXiv:2311.02622, 2023 - arxiv.org
Neural networks exhibit simplicity bias; they rely on simpler features while ignoring equally
predictive but more complex features. In this work, we introduce a novel approach termed …

NLS: an accurate and yet easy-to-interpret regression method

V Coscrato, MHA Inácio, T Botari, R Izbicki - arXiv preprint arXiv …, 2019 - arxiv.org
An important feature of successful supervised machine learning applications is to be able to
explain the predictions given by the regression or classification model being used. However …