Piecewise linear neural networks and deep learning

Q Tao, L Li, X Huang, X Xi, S Wang… - Nature Reviews Methods …, 2022 - nature.com
As a powerful modelling method, piecewise linear neural networks (PWLNNs) have proven
successful in various fields, most recently in deep learning. To apply PWLNN methods, both …
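
As a toy illustration of the piecewise linear structure (the network, weights, and names below are ours, not the paper's): a two-layer ReLU network is affine on each region where the hidden-unit activation pattern is fixed, so a finite difference taken entirely inside one region recovers the exact local slope.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 2)), rng.standard_normal(4)
w2 = rng.standard_normal(4)

def f(x):
    """Two-layer ReLU network: f(x) = w2 . relu(W1 x + b1)."""
    return w2 @ np.maximum(W1 @ x + b1, 0.0)

def pattern(x):
    """Activation pattern: which hidden units are on at x."""
    return tuple(W1 @ x + b1 > 0)

# Within a fixed activation pattern the map is affine, so the finite
# difference is exact rather than an approximation.
x = np.array([0.3, -0.2])
u = np.array([1e-4, 2e-4])
if pattern(x) == pattern(x + u):
    slope = (f(x + u) - f(x)) / np.linalg.norm(u)
    print("directional slope inside one linear region:", slope)
```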

When deep learning meets polyhedral theory: A survey

J Huchette, G Muñoz, T Serra, C Tsay - arXiv preprint arXiv:2305.00241, 2023 - arxiv.org
In the past decade, deep learning has become the prevalent methodology for predictive
modeling, thanks to the remarkable accuracy of deep neural networks in tasks such as …

Unraveling attention via convex duality: Analysis and interpretations of vision transformers

A Sahiner, T Ergen, B Ozturkler… - International …, 2022 - proceedings.mlr.press
Vision transformers using self-attention or its proposed alternatives have demonstrated
promising results in many image-related tasks. However, the underpinning inductive bias of …
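
For context, the operation being analyzed via convex duality is standard single-head scaled dot-product self-attention; a minimal NumPy sketch (shapes and names are illustrative, not from the paper):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head attention: softmax(Q K^T / sqrt(dk)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)        # row-wise softmax
    return A @ V

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))              # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.standard_normal((8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (5, 4)
```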

Fast convex optimization for two-layer ReLU networks: Equivalent model classes and cone decompositions

A Mishkin, A Sahiner, M Pilanci - … Conference on Machine …, 2022 - proceedings.mlr.press
We develop fast algorithms and robust software for convex optimization of two-layer neural
networks with ReLU activation functions. Our work leverages a convex reformulation of the …
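
The convex reformulation referenced here, following the Pilanci–Ergen construction with a sampled subset of activation patterns, can be sketched with CVXPY (variable names, problem sizes, and the solver choice are ours):

```python
import numpy as np
import cvxpy as cp

np.random.seed(0)
n, d, P = 20, 3, 8           # samples, features, sampled activation patterns
X = np.random.randn(n, d)
y = np.random.randn(n)
beta = 1e-3                  # weight-decay / group-lasso strength

# Sample candidate ReLU activation patterns: column i of D is diag(D_i).
G = np.random.randn(d, P)
D = (X @ G >= 0).astype(float)               # shape (n, P)

V = cp.Variable((d, P))
W = cp.Variable((d, P))

# Prediction: sum_i D_i X (v_i - w_i), computed columnwise.
pred = cp.sum(cp.multiply(D, X @ (V - W)), axis=1)

# Group-l1 regularizer: sum of Euclidean norms of the columns.
reg = cp.sum(cp.norm(V, 2, axis=0) + cp.norm(W, 2, axis=0))

# Cone constraints keeping each pattern consistent: (2 D_i - I) X u_i >= 0.
cons = [cp.multiply(2 * D - 1, X @ V) >= 0,
        cp.multiply(2 * D - 1, X @ W) >= 0]

prob = cp.Problem(cp.Minimize(cp.sum_squares(pred - y) + beta * reg), cons)
prob.solve()
print("optimal value:", prob.value)
```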

Synchronization-enhanced deep learning early flood risk predictions: The core of data-driven city digital twins for climate resilience planning

M Ghaith, A Yosri, W El-Dakhakhni - Water, 2022 - mdpi.com
Floods have been among the costliest hydrometeorological hazards across the globe for
decades, and are expected to become even more frequent and cause larger devastating …

Vector-output ReLU neural network problems are copositive programs: Convex analysis of two-layer networks and polynomial-time algorithms

A Sahiner, T Ergen, J Pauly, M Pilanci - arXiv preprint arXiv:2012.13329, 2020 - arxiv.org
We describe the convex semi-infinite dual of the two-layer vector-output ReLU neural
network training problem. This semi-infinite dual admits a finite-dimensional representation …

Demystifying batch normalization in ReLU networks: Equivalent convex optimization models and implicit regularization

T Ergen, A Sahiner, B Ozturkler, J Pauly… - arXiv preprint arXiv …, 2021 - arxiv.org
Batch Normalization (BN) is a commonly used technique to accelerate and stabilize training
of deep neural networks. Despite its empirical success, a full theoretical understanding of …
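
For reference, the BN transform being analyzed is the usual normalize-then-affine map over the batch axis (training-mode statistics; this is a generic sketch, not the paper's model):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization: standardize each feature, then scale and shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)    # zero mean, unit variance per feature
    return gamma * x_hat + beta              # learned affine parameters

x = np.random.default_rng(0).standard_normal((32, 4))
out = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0).round(6), out.std(axis=0).round(3))   # ~0 and ~1
```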

Deep Learning meets Nonparametric Regression: Are Weight-Decayed DNNs Locally Adaptive?

K Zhang, YX Wang - arXiv preprint arXiv:2204.09664, 2022 - arxiv.org
We study the theory of neural networks (NNs) through the lens of classical nonparametric
regression problems, with a focus on NNs' ability to adaptively estimate functions with …

Efficient global optimization of two-layer ReLU networks: Quadratic-time algorithms and adversarial training

Y Bai, T Gautam, S Sojoudi - SIAM Journal on Mathematics of Data Science, 2023 - SIAM
The nonconvexity of the artificial neural network (ANN) training landscape brings
optimization difficulties. While the traditional back-propagation stochastic gradient descent …
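
Adversarial training wraps each SGD step around a worst-case perturbation of the input. A toy FGSM-style perturbation for a linear model with squared loss (the linear model and all names are illustrative, not the paper's algorithm):

```python
import numpy as np

def fgsm_perturb(x, y, w, eps=0.1):
    """FGSM step: move x along the sign of the input-gradient of the loss.
    For squared loss (w.x - y)^2, grad_x = 2 * (w.x - y) * w."""
    grad_x = 2.0 * (w @ x - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0])
x, y = np.array([0.5, 0.3]), 1.0
x_adv = fgsm_perturb(x, y, w)
print(x_adv)                      # perturbed input fed into the training loss
```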

Optimal sets and solution paths of ReLU networks

A Mishkin, M Pilanci - International Conference on Machine …, 2023 - proceedings.mlr.press
We develop an analytical framework to characterize the set of optimal ReLU neural networks
by reformulating the non-convex training problem as a convex program. We show that the …
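
The nonconvex problem being reformulated is the standard weight-decay-penalized two-layer ReLU objective; in our notation (scalar output, m hidden units), it reads:

```latex
\min_{\{u_j,\,\alpha_j\}_{j=1}^{m}}\;
\frac{1}{2}\Bigl\|\sum_{j=1}^{m} (X u_j)_+\,\alpha_j - y\Bigr\|_2^2
\;+\;\frac{\beta}{2}\sum_{j=1}^{m}\bigl(\|u_j\|_2^2 + \alpha_j^2\bigr)
```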