SIMBa: System identification methods leveraging backpropagation
This manuscript details and extends the system identification methods leveraging the
backpropagation (SIMBa) toolbox presented in previous work, which uses well-established …
Unconstrained learning of networked nonlinear systems via free parametrization of stable interconnected operators
This paper characterizes a new parametrization of nonlinear networked incrementally L_2-bounded operators in discrete time. The distinctive novelty is that our parametrization is free …
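Several of the works listed here share one idea: map an arbitrary (unconstrained) parameter matrix to a model that is stable by construction, so that plain gradient descent never leaves the stable set. A minimal generic sketch of that idea, assuming a discrete-time linear system x_{k+1} = A x_k and using spectral-norm normalization (this is an illustrative construction, not the specific parametrization of any of the cited papers; `stable_A` is a hypothetical helper name):

```python
import numpy as np

def stable_A(W, eps=1e-2):
    """Map an arbitrary square matrix W to a Schur-stable A.

    Dividing by (1 + eps + ||W||_2) forces ||A||_2 < 1, so the spectral
    radius of A is < 1 and x_{k+1} = A x_k is asymptotically stable.
    Every W yields a stable A, hence W can be optimized unconstrained.
    """
    s = np.linalg.norm(W, 2)  # largest singular value of W
    return W / (1.0 + eps + s)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 10.0  # arbitrary, possibly unstable
A = stable_A(W)
# spectral radius is bounded by the spectral norm, which is < 1 here
assert np.max(np.abs(np.linalg.eigvals(A))) < 1.0
```

The cited papers develop far richer versions of this mapping (nonlinear, networked, port-Hamiltonian, or passivity-preserving), but the training-time benefit is the same: no projection or constraint-handling step is needed inside the optimization loop.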
Neural Port-Hamiltonian Models for Nonlinear Distributed Control: An Unconstrained Parametrization Approach
M Zakwan, G Ferrari-Trecate - arXiv preprint arXiv:2411.10096, 2024 - arxiv.org
The control of large-scale cyber-physical systems requires optimal distributed policies
relying solely on limited communication with neighboring agents. However, computing …
Neural Distributed Controllers with Port-Hamiltonian Structures
M Zakwan, G Ferrari-Trecate - arXiv preprint arXiv:2403.17785, 2024 - arxiv.org
Controlling large-scale cyber-physical systems necessitates optimal distributed policies,
relying solely on local real-time data and limited communication with neighboring agents …
Learning Stable and Passive Neural Differential Equations
J Cheng, R Wang, IR Manchester - arXiv preprint arXiv:2404.12554, 2024 - arxiv.org
In this paper, we introduce a novel class of neural differential equations, which are
intrinsically Lyapunov stable, exponentially stable or passive. We take a recently proposed …
RobustNeuralNetworks.jl: a package for machine learning and data-driven control with certified robustness
Neural networks are typically sensitive to small input perturbations, leading to unexpected or
brittle behaviour. We present RobustNeuralNetworks.jl: a Julia package for neural network …
Unconstrained Parameterization of Stable LPV Input-Output Models: with Application to System Identification
J Kon, J van de Wijdeven, D Bruijnen, R Tóth… - arXiv preprint arXiv …, 2024 - arxiv.org
Ensuring stability of discrete-time (DT) linear parameter-varying (LPV) input-output (IO)
models estimated via system identification methods is a challenging problem as known …
On Dissipativity of Cross-Entropy Loss in Training ResNets
J Püttschneider, T Faulwasser - arXiv preprint arXiv:2405.19013, 2024 - arxiv.org
The training of ResNets and neural ODEs can be formulated and analyzed from the
perspective of optimal control. This paper proposes a dissipative formulation of the training …
Learning to Boost the Performance of Stable Nonlinear Systems
The growing scale and complexity of safety-critical control systems underscore the need to
evolve current control architectures aiming for the unparalleled performances achievable …
Contractive Dynamical Imitation Policies for Efficient Out-of-Sample Recovery
Imitation learning is a data-driven approach to learning policies from expert behavior, but it
is prone to unreliable outcomes in out-of-sample (OOS) regions. While previous research …