Machine learning–accelerated computational fluid dynamics

D Kochkov, JA Smith, A Alieva… - Proceedings of the …, 2021 - National Acad Sciences
Numerical simulation of fluids plays an essential role in modeling many physical
phenomena, such as weather, climate, aerodynamics, and plasma physics. Fluids are well …

Heavy ball neural ordinary differential equations

H Xia, V Suliafu, H Ji, T Nguyen… - Advances in …, 2021 - proceedings.neurips.cc
We propose heavy ball neural ordinary differential equations (HBNODEs), leveraging the
continuous limit of classical momentum-accelerated gradient descent, to improve neural …
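The heavy ball ODE alluded to above is the continuous-time limit of momentum gradient descent, x'' + gamma * x' = f(x). A minimal sketch of integrating it as a first-order system in (x, v) — function names, the integrator, and all constants here are illustrative, not the paper's implementation:

```python
import numpy as np

def heavy_ball_flow(f, x0, gamma=0.5, dt=0.01, steps=1000):
    """Integrate the heavy ball ODE  x'' + gamma * x' = f(x)
    as the first-order system (x, v) with semi-implicit Euler."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = v + dt * (f(x) - gamma * v)  # v' = f(x) - gamma * v
        x = x + dt * v                   # x' = v
    return x

# With f(x) = -x (gradient flow of 0.5 * ||x||^2), trajectories decay to 0.
x_final = heavy_ball_flow(lambda x: -x, [1.0, -2.0])
```

The damping term gamma * x' is what distinguishes the heavy ball flow from plain gradient flow: the state carries velocity, so it can accelerate through shallow regions.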

Lipschitz recurrent neural networks

NB Erichson, O Azencot, A Queiruga… - arXiv preprint arXiv …, 2020 - arxiv.org
Viewing recurrent neural networks (RNNs) as continuous-time dynamical systems, we
propose a recurrent unit that describes the hidden state's evolution with two parts: a well …
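The two-part hidden-state evolution described in the snippet has the general shape h' = A h + tanh(W h + U x + b). One forward-Euler step under an assumed parameterization of A — the symmetric/skew split and all constants below are illustrative, not taken verbatim from the paper:

```python
import numpy as np

def lipschitz_rnn_step(h, x, M, W, U, b, beta=0.75, gamma=0.001, dt=0.1):
    """One forward-Euler step of  h' = A h + tanh(W h + U x + b),
    with A built as a convex mix of the symmetric and skew-symmetric
    parts of M, minus a small diagonal shift to control the dynamics."""
    A = ((1 - beta) * (M + M.T) + beta * (M - M.T)) / 2 - gamma * np.eye(len(h))
    return h + dt * (A @ h + np.tanh(W @ h + U @ x + b))

rng = np.random.default_rng(0)
n, m = 4, 3  # hidden size, input size
h_next = lipschitz_rnn_step(rng.standard_normal(n),
                            rng.standard_normal(m),
                            rng.standard_normal((n, n)),
                            rng.standard_normal((n, n)),
                            rng.standard_normal((n, m)),
                            rng.standard_normal(n))
```

Constraining A this way bounds how fast the linear part can stretch the hidden state, which is the sense in which such units aim for a controlled Lipschitz constant.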

Pyramid convolutional RNN for MRI image reconstruction

EZ Chen, P Wang, X Chen, T Chen… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Fast and accurate MRI image reconstruction from undersampled data is crucial in clinical
practice. Deep learning based reconstruction methods have shown promising advances in …

Implicit graph neural networks: a monotone operator viewpoint

J Baker, Q Wang, CD Hauck… - … Conference on Machine …, 2023 - proceedings.mlr.press
Implicit graph neural networks (IGNNs), which solve a fixed-point equilibrium equation using
Picard iteration for representation learning, have shown remarkable performance in learning …
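The fixed-point equilibrium equation in such models has the flavor of Z = sigma(W Z A + b), solved by repeatedly applying the map (Picard iteration). A hedged sketch under a contraction assumption — the activation, scales, and names below are illustrative, not the paper's formulation:

```python
import numpy as np

def picard_fixed_point(W, A, b, tol=1e-8, max_iter=500):
    """Solve Z = tanh(W @ Z @ A + b) by Picard iteration.
    Converges when the map is a contraction (small enough ||W||, ||A||)."""
    Z = np.zeros_like(b)
    for _ in range(max_iter):
        Z_next = np.tanh(W @ Z @ A + b)
        if np.linalg.norm(Z_next - Z) < tol:
            return Z_next
        Z = Z_next
    return Z

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))  # small norms -> contraction
A = 0.1 * rng.standard_normal((3, 3))
b = rng.standard_normal((4, 3))
Z_star = picard_fixed_point(W, A, b)   # equilibrium representation
```

The monotone-operator viewpoint in the paper generalizes exactly this setup: it asks when such an equilibrium exists and is unique without the brute-force contraction assumption used here.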

AdamR-GRUs: Adaptive momentum-based Regularized GRU for HMER problems

A Pal, KP Singh - Applied Soft Computing, 2023 - Elsevier
Handwritten Mathematical Expression Recognition (HMER) is essential to online
education and scientific research. However, discerning the length and characters of …

Improving neural ordinary differential equations with Nesterov's accelerated gradient method

HHN Nguyen, T Nguyen, H Vo… - Advances in Neural …, 2022 - proceedings.neurips.cc
We propose the Nesterov neural ordinary differential equations (NesterovNODEs), whose
layers solve the second-order ordinary differential equations (ODEs) limit of Nesterov's …
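For context, the second-order ODE limit of Nesterov's method is commonly written in the Su–Boyd–Candès form; the paper's exact parameterization may differ:

```latex
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f\big(X(t)\big) = 0,
\qquad X(0) = x_0, \quad \dot{X}(0) = 0.
```

Compared with the heavy ball ODE, the damping coefficient 3/t vanishes over time, which is the mechanism behind Nesterov's accelerated convergence rate on convex problems.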

An automatic learning rate decay strategy for stochastic gradient descent optimization methods in neural networks

K Wang, Y Dou, T Sun, P Qiao… - International Journal of …, 2022 - Wiley Online Library
Stochastic Gradient Descent (SGD)-family optimization methods play a vital role
in training neural networks, attracting growing attention in science and engineering fields of …
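Automatic decay strategies like the one above aim to replace hand-tuned schedules; the common exponential-decay baseline they are measured against can be sketched as follows (parameter names are illustrative, not from the paper):

```python
def exp_decay_lr(lr0, decay_rate, step, decay_steps):
    """Exponential learning-rate decay:
    lr(step) = lr0 * decay_rate ** (step / decay_steps)."""
    return lr0 * decay_rate ** (step / decay_steps)

# Halve the learning rate every 100 steps, starting from 0.1.
lr_at_200 = exp_decay_lr(0.1, 0.5, 200, 100)  # 0.1 * 0.5**2 = 0.025
```

The weakness an automatic strategy targets is visible here: `decay_rate` and `decay_steps` must be chosen before training, with no feedback from the observed loss.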

Improving deep neural networks' training for image classification with nonlinear conjugate gradient-style adaptive momentum

B Wang, Q Ye - IEEE Transactions on Neural Networks and …, 2023 - ieeexplore.ieee.org
Momentum is crucial in stochastic gradient-based optimization algorithms for accelerating or
improving the training of deep neural networks (DNNs). In deep learning practice, the momentum …
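The classical momentum update this snippet refers to can be sketched as follows (a generic textbook heavy ball form, not the paper's conjugate gradient-style adaptive variant):

```python
def sgd_momentum_step(w, v, grad, lr=0.01, beta=0.9):
    """One classical momentum step:
    v <- beta * v + grad;  w <- w - lr * v."""
    v = beta * v + grad
    w = w - lr * v
    return w, v

# Minimizing f(w) = w**2 (gradient 2*w) from w = 5.0:
w, v = 5.0, 0.0
for _ in range(300):
    w, v = sgd_momentum_step(w, v, 2.0 * w)
```

The paper's point of departure is the fixed `beta`: adaptive schemes replace it with a coefficient computed from the current and past gradients, in the style of nonlinear conjugate gradient methods.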

Decentralized concurrent learning with coordinated momentum and restart

DE Ochoa, MU Javed, X Chen, JI Poveda - Systems & Control Letters, 2024 - Elsevier
This paper studies the stability and convergence properties of a class of multi-agent
concurrent learning (CL) algorithms with momentum and restart. Such algorithms can be …
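Momentum with restart, in its simplest single-agent form, resets the velocity whenever it stops being a descent direction — an adaptive-restart sketch in the spirit of O'Donoghue and Candès; the paper's coordinated multi-agent scheme is more involved than this:

```python
def momentum_with_restart(grad_f, w, lr=0.01, beta=0.9, steps=500):
    """Scalar gradient descent with heavy ball momentum and an
    adaptive restart: reset the velocity when it opposes the gradient."""
    v = 0.0
    for _ in range(steps):
        g = grad_f(w)
        v = beta * v + g
        if g * v < 0:      # momentum points uphill: restart it
            v = g
        w = w - lr * v
    return w

# Minimizing f(w) = (w - 3)**2 from w = 0:
w_star = momentum_with_restart(lambda w: 2.0 * (w - 3.0), 0.0)
```

Restarting discards stale momentum after overshoot, which is what recovers stable convergence when plain momentum would oscillate.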