Unsupervised anomaly detection with LSTM neural networks. T. Ergen, S. S. Kozat. IEEE Transactions on Neural Networks and Learning Systems, 31(8), 3127–3141, 2019. (Cited by 334)
Online training of LSTM networks in distributed systems for variable length data sequences. T. Ergen, S. S. Kozat. IEEE Transactions on Neural Networks and Learning Systems, 29(10), 5159–5165, 2017. (Cited by 110)
Efficient online learning algorithms based on LSTM neural networks. T. Ergen, S. S. Kozat. IEEE Transactions on Neural Networks and Learning Systems, 29(8), 3772–3783, 2017. (Cited by 107)
Neural networks are convex regularizers: Exact polynomial-time convex optimization formulations for two-layer networks. M. Pilanci, T. Ergen. International Conference on Machine Learning, 7695–7705, 2020. (Cited by 103)
Revealing the structure of deep neural networks via convex duality. T. Ergen, M. Pilanci. arXiv preprint arXiv:2002.09773, 2020. (Cited by 84*)
Convex geometry and duality of over-parameterized neural networks. T. Ergen, M. Pilanci. Journal of Machine Learning Research, 22(212), 1–63, 2021. (Cited by 56)
Implicit convex regularizers of CNN architectures: Convex optimization of two- and three-layer networks in polynomial time. T. Ergen, M. Pilanci. arXiv preprint arXiv:2006.14798, 2020. (Cited by 45)
Vector-output ReLU neural network problems are copositive programs: Convex analysis of two-layer networks and polynomial-time algorithms. A. Sahiner, T. Ergen, J. Pauly, M. Pilanci. arXiv preprint arXiv:2012.13329, 2020. (Cited by 40)
Global optimality beyond two layers: Training deep ReLU networks via convex programs. T. Ergen, M. Pilanci. International Conference on Machine Learning, 2993–3003, 2021. (Cited by 35)
Demystifying batch normalization in ReLU networks: Equivalent convex optimization models and implicit regularization. T. Ergen, A. Sahiner, B. Ozturkler, J. Pauly, M. Mardani, M. Pilanci. arXiv preprint arXiv:2103.01499, 2021. (Cited by 31)
Convex geometry of two-layer ReLU networks: Implicit autoencoding and interpretable models. T. Ergen, M. Pilanci. International Conference on Artificial Intelligence and Statistics, 4024–4033, 2020. (Cited by 31)
Unraveling attention via convex duality: Analysis and interpretations of vision transformers. A. Sahiner, T. Ergen, B. Ozturkler, J. Pauly, M. Mardani, M. Pilanci. International Conference on Machine Learning, 19050–19088, 2022. (Cited by 27)
Energy-efficient LSTM networks for online learning. T. Ergen, A. H. Mirza, S. S. Kozat. IEEE Transactions on Neural Networks and Learning Systems, 31(8), 3114–3126, 2019. (Cited by 23)
Hidden convexity of Wasserstein GANs: Interpretable generative models with closed-form solutions. A. Sahiner, T. Ergen, B. Ozturkler, B. Bartan, J. Pauly, M. Mardani, M. Pilanci. arXiv preprint arXiv:2107.05680, 2021. (Cited by 20)
Convex optimization for shallow neural networks. T. Ergen, M. Pilanci. 2019 57th Annual Allerton Conference on Communication, Control, and …, 2019. (Cited by 18)
Path regularization: A convexity and sparsity inducing regularization for parallel ReLU networks. T. Ergen, M. Pilanci. Advances in Neural Information Processing Systems, 36, 2024. (Cited by 16)
Convex neural autoregressive models: Towards tractable, expressive, and theoretically-backed models for sequential forecasting and generation. V. Gupta, B. Bartan, T. Ergen, M. Pilanci. ICASSP 2021 IEEE International Conference on Acoustics, Speech and …, 2021. (Cited by 14*)
Parallel deep neural networks have zero duality gap. Y. Wang, T. Ergen, M. Pilanci. arXiv preprint arXiv:2110.06482, 2021. (Cited by 11)
A novel distributed anomaly detection algorithm based on support vector machines. T. Ergen, S. S. Kozat. Digital Signal Processing, 99, 102657, 2020. (Cited by 11)
Globally optimal training of neural networks with threshold activation functions. T. Ergen, H. I. Gulluk, J. Lacotte, M. Pilanci. arXiv preprint arXiv:2303.03382, 2023. (Cited by 10)