Top-k off-policy correction for a REINFORCE recommender system
Industrial recommender systems deal with extremely large action spaces--many millions of
items to recommend. Moreover, they need to serve billions of users, who are unique at any …
Robustness of LSTM neural networks for multi-step forecasting of chaotic time series
M Sangiorgio, F Dercole - Chaos, Solitons & Fractals, 2020 - Elsevier
Recurrent neurons (and in particular LSTM cells) have been demonstrated to be efficient when used as
basic blocks to build sequence to sequence architectures, which represent the state-of-the …
Forecasting sequential data using consistent koopman autoencoders
Recurrent neural networks are widely used on time series data, yet such models often
ignore the underlying physical structures in such sequences. A new class of physics-based …
Deep learning for hand gesture recognition on skeletal data
G Devineau, F Moutarde, W Xi… - 2018 13th IEEE …, 2018 - ieeexplore.ieee.org
In this paper, we introduce a new 3D hand gesture recognition approach based on a deep
learning model. We propose a new Convolutional Neural Network (CNN) where sequences …
Provable benefit of orthogonal initialization in optimizing deep linear networks
The selection of initial parameter values for gradient-based optimization of deep neural
networks is one of the most impactful hyperparameter choices in deep learning systems …
Coupled oscillatory recurrent neural network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies
Circuits of biological neurons, such as those in the functional parts of the brain, can be modeled as
networks of coupled oscillators. Inspired by the ability of these systems to express a rich set …
Stable recurrent models
Stability is a fundamental property of dynamical systems, yet to date it has had little
bearing on the practice of recurrent neural networks. In this work, we conduct a thorough …
Beyond exploding and vanishing gradients: analysing RNN training using attractors and smoothness
The exploding and vanishing gradient problem has been the major conceptual principle
behind most architecture and training improvements in recurrent neural networks (RNNs) …
Preventing gradient explosions in gated recurrent units
S Kanai, Y Fujiwara, S Iwamura - Advances in neural …, 2017 - proceedings.neurips.cc
A gated recurrent unit (GRU) is a successful recurrent neural network architecture for time-
series data. The GRU is typically trained using a gradient-based method, which is subject to …
Values of user exploration in recommender systems
Reinforcement Learning (RL) has been sought after to bring next-generation recommender
systems that further improve user experience on recommendation platforms. While the …