Learning with Fenchel-Young losses
Over the past decades, numerous loss functions have been proposed for a variety of
supervised learning tasks, including regression, classification, ranking, and more generally …
Loss factorization, weakly supervised learning and label noise robustness
We prove that the empirical risk of most well-known loss functions factors into a linear term
aggregating all labels plus a label-free term, and can further be expressed as sums of …
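For intuition, the kind of identity involved can be sketched for binary labels (the notation here is illustrative, not necessarily the paper's): if the loss \(\ell\) is linear-odd, i.e. \(\ell(v) - \ell(-v) = -v\), as holds for the logistic loss, then

\[
\ell(y v) = \tfrac{1}{2}\bigl[\ell(v) + \ell(-v)\bigr] - \tfrac{1}{2}\, y v,
\qquad y \in \{-1, +1\},
\]

so averaging over the sample leaves a label-free even part plus a linear term that sees the labels only through the mean \(\frac{1}{n}\sum_i y_i x_i\).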
RBoost: Label noise-robust boosting algorithm based on a nonconvex loss function and the numerically stable base learners
AdaBoost has attracted much attention in the machine learning community because of its
excellent performance in combining weak classifiers into strong classifiers. However …
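For background (a standard fact about AdaBoost rather than a claim from this paper): AdaBoost performs stagewise minimization of the exponential loss

\[
\ell_{\exp}\bigl(y f(x)\bigr) = e^{-y f(x)},
\]

which grows without bound on badly misclassified points; this is the usual explanation for its sensitivity to label noise and the motivation for nonconvex alternatives such as the one proposed here.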
Learning classifiers with Fenchel-Young losses: Generalized entropies, margins, and algorithms
This paper studies Fenchel-Young losses, a generic way to construct convex loss
functions from a regularization function. We analyze their properties in depth, showing that …
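For reference, the Fenchel-Young loss generated by a regularizer \(\Omega\) is built from its convex conjugate \(\Omega^*\):

\[
L_\Omega(\theta; y) = \Omega^*(\theta) + \Omega(y) - \langle \theta, y \rangle,
\qquad
\Omega^*(\theta) = \sup_{\mu}\, \langle \theta, \mu \rangle - \Omega(\mu).
\]

By the Fenchel-Young inequality the loss is nonnegative; it is convex in \(\theta\), and choosing \(\Omega\) as the negative Shannon entropy over the probability simplex recovers the multinomial logistic loss.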
Two-temperature logistic regression based on the Tsallis divergence
E Amid, MK Warmuth… - The 22nd International …, 2019 - proceedings.mlr.press
We develop a variant of multiclass logistic regression that is significantly more robust to
noise. The algorithm has one weight vector per class and the surrogate loss is a function of …
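A minimal sketch, in Python, of the tempered logarithm and exponential that underlie Tsallis-divergence-based losses of this kind (function names are illustrative; the paper's loss composes two different temperatures):

import numpy as np

def log_t(x, t):
    # Tempered logarithm: reduces to np.log(x) as t -> 1.
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    # Tempered exponential, the inverse of log_t; np.exp(x) as t -> 1.
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

For t < 1, log_t is bounded below by -1/(1 - t), so a loss built on it saturates on grossly misclassified (plausibly mislabeled) points instead of growing without bound, which is the source of the claimed noise robustness.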
Bias-variance decompositions for margin losses
We introduce a novel bias-variance decomposition for a range of strictly convex margin
losses, including the logistic loss (minimized by the classic LogitBoost algorithm) as well as …
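For orientation, a margin loss scores a prediction only through the margin \(v = y f(x)\); the logistic loss named in the snippet is

\[
\ell_{\log}(v) = \log\bigl(1 + e^{-v}\bigr),
\]

and it is strict convexity in \(v\) that decompositions of this kind rely on.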
Bayesian Inference
E Souza de Cursi - Uncertainty Quantification with R: Bayesian Methods, 2024 - Springer
This chapter presents the Bayesian approach for practical tasks, such as estimation,
hypothesis testing, model or variable selection, and regression. The choice of priors is …
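All of these tasks rest on Bayes' theorem, which updates a prior \(p(\theta)\) with the likelihood of the observed data \(x\):

\[
p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{\int p(x \mid \theta')\, p(\theta')\, d\theta'}.
\]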
Uncertainty Quantification with R
ES de Cursi - International Series in Operations Research and …, 2024 - Springer
This book is an independent companion volume to Uncertainty Quantification with R,
complementing some of its topics and taking up others from a different angle: the Bayesian …
Sequential Bayesian Estimation
E Souza de Cursi - Uncertainty Quantification with R: Bayesian Methods, 2024 - Springer
This chapter presents Markov chain Monte Carlo methods and connected topics,
namely Importance Sampling, the Metropolis-Hastings Algorithm, Kalman Filtering, Particle …
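As a concrete illustration of the Metropolis-Hastings algorithm named in the snippet, here is a minimal random-walk sampler in Python (the target density and step size are illustrative assumptions, not the book's example):

import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step=1.0, rng=None):
    # Random-walk proposal x' = x + step * N(0, 1); with a symmetric
    # proposal, the acceptance ratio is just target(x') / target(x).
    rng = np.random.default_rng() if rng is None else rng
    samples = np.empty(n_steps)
    x, logp = x0, log_target(x0)
    for i in range(n_steps):
        x_new = x + step * rng.standard_normal()
        logp_new = log_target(x_new)
        if np.log(rng.uniform()) < logp_new - logp:  # accept
            x, logp = x_new, logp_new
        samples[i] = x  # on rejection, the current state repeats
    return samples

# Illustrative target: standard normal, log-density up to a constant.
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=5000)
print(draws.mean(), draws.std())  # roughly 0 and 1 after burn-in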
A motion classification model with improved robustness through deformation code integration
L Xia, J Lv, D Liu - Neural Computing and Applications, 2019 - Springer
During data acquisition, samples in a time series may contain noise, such as inconsistent
data ranges, inconsistent values, and incomplete records. Therefore, the classification model …