Learning causal state representations of partially observable environments

A Zhang, ZC Lipton, L Pineda… - arXiv preprint arXiv …, 2019 - arxiv.org
Intelligent agents can cope with sensory-rich environments by learning task-agnostic state
abstractions. In this paper, we propose an algorithm to approximate causal states, which are …

State-regularized recurrent neural networks

C Wang, M Niepert - International Conference on Machine …, 2019 - proceedings.mlr.press
Recurrent neural networks are a widely used class of neural architectures with two
shortcomings. First, it is difficult to understand what exactly they learn. Second, they tend to …
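The snippet is cut off before the mechanism, but the core state-regularization idea can be illustrated: after each recurrent update, the hidden state is (softly) assigned to a small set of learnable centroids, which makes the dynamics nearly finite-state and hence easier to interpret. The sketch below is a minimal, assumed reading of that idea; the class name, the use of a GRU cell as the base update, dot-product similarity, and the temperature value are all illustrative choices, not the authors' exact construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StateRegularizedRNNCell(nn.Module):
    """Minimal sketch of a state-regularized recurrent cell (assumed interface).

    After an ordinary RNN update, the hidden state is softly assigned to k
    learnable centroids; the next hidden state is the resulting convex
    combination. Driving the temperature toward 0 makes the assignment nearly
    discrete, so the cell behaves like a finite-state machine whose states
    are the centroids.
    """

    def __init__(self, input_size: int, hidden_size: int, num_centroids: int,
                 temperature: float = 0.1):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)       # base recurrent update
        self.centroids = nn.Parameter(torch.randn(num_centroids, hidden_size))
        self.temperature = temperature

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        u = self.cell(x, h)                                   # unconstrained update, shape (B, H)
        scores = u @ self.centroids.t()                       # similarity to each centroid, (B, k)
        alpha = F.softmax(scores / self.temperature, dim=-1)  # soft state assignment
        return alpha @ self.centroids                         # convex combination of centroids
```

Taking the argmax over `alpha` at each step yields a discrete state sequence from which transition counts, and hence a finite automaton, can be tabulated.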

State-regularized recurrent neural networks to extract automata and explain predictions

C Wang, C Lawrence, M Niepert - IEEE Transactions on Pattern …, 2022 - ieeexplore.ieee.org
Recurrent neural networks are a widely used class of neural architectures. They have,
however, two shortcomings. First, they are often treated as black-box models and as such it …

[HTML] Explaining Aha! moments in artificial agents through IKE-XAI: Implicit Knowledge Extraction for eXplainable AI


IC Kaadoud, A Bennetot, B Mawhin, V Charisi… - Neural Networks, 2022 - Elsevier
During the learning process, a child develops a mental representation of the task he or she
is learning. A machine learning algorithm likewise develops a latent representation of the task it …

Knowledge extraction from the learning of sequences in a long short term memory (LSTM) architecture

IC Kaadoud, NP Rougier, F Alexandre - Knowledge-Based Systems, 2022 - Elsevier
Transparency and trust in machine learning algorithms have been deemed fundamental, and
yet, from a practical point of view, they remain difficult to implement …

Verification of recurrent neural networks through rule extraction

Q Wang, K Zhang, X Liu, CL Giles - arXiv preprint arXiv:1811.06029, 2018 - arxiv.org
The verification problem for neural networks is that of determining whether a neural network will suffer
from adversarial samples, or of approximating the maximal allowed scale of adversarial …

Property checking with interpretable error characterization for recurrent neural networks

F Mayr, S Yovine, R Visca - Machine Learning and Knowledge Extraction, 2021 - mdpi.com
This paper presents a novel on-the-fly, black-box, property-checking-through-learning
approach as a means of verifying requirements of recurrent neural networks (RNNs) in the …

Extracting automata from recurrent neural networks using queries and counterexamples (extended version)

G Weiss, Y Goldberg, E Yahav - Machine Learning, 2024 - Springer
We consider the problem of extracting a deterministic finite automaton (DFA) from a trained
recurrent neural network (RNN). We present a novel algorithm that uses exact learning and …
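The abstract names exact learning with the RNN as the teacher. The sketch below shows only the oracle interface such an L*-style learner needs; `rnn_accepts` is a hypothetical callable wrapping the trained classifier, and the equivalence query is approximated here by random sampling, whereas the paper finds counterexamples by refining an abstraction of the RNN's state space.

```python
import random
from typing import Callable, Optional

class RNNTeacher:
    """Sketch of the teacher/oracle interface an L*-style exact learner needs,
    with a trained RNN binary classifier standing in as the teacher."""

    def __init__(self, rnn_accepts: Callable[[str], bool], alphabet: str,
                 max_len: int = 20, num_samples: int = 5000, seed: int = 0):
        self.rnn_accepts = rnn_accepts      # hypothetical wrapper around the trained RNN
        self.alphabet = alphabet
        self.max_len = max_len
        self.num_samples = num_samples
        self.rng = random.Random(seed)

    def membership_query(self, word: str) -> bool:
        # Run the RNN on the word and threshold its acceptance score.
        return self.rnn_accepts(word)

    def equivalence_query(self, dfa_accepts: Callable[[str], bool]) -> Optional[str]:
        # Compare the hypothesis DFA against the RNN on random words and
        # return a disagreeing word (a counterexample) if one is found.
        # (Sampling only approximates equivalence; the paper refines an
        # abstraction of the RNN's state space instead.)
        for _ in range(self.num_samples):
            length = self.rng.randint(0, self.max_len)
            word = "".join(self.rng.choice(self.alphabet) for _ in range(length))
            if dfa_accepts(word) != self.rnn_accepts(word):
                return word
        return None
```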

On-the-fly black-box probably approximately correct checking of recurrent neural networks

F Mayr, R Visca, S Yovine - … Learning and Knowledge Extraction: 4th IFIP …, 2020 - Springer
We propose a procedure for checking properties of recurrent neural networks used for
language modeling and sequence classification. Our approach is a case of black-box …
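The snippet is truncated before the construction itself. As a rough illustration of what "probably approximately correct" black-box checking of this kind typically relies on, the sketch below computes the standard Angluin-style sample size for the i-th sampling-based equivalence test; the function name and example parameters are illustrative, not taken from the paper.

```python
import math

def pac_sample_size(epsilon: float, delta: float, round_i: int) -> int:
    """Angluin-style bound: if this many i.i.d. words are drawn at round i and
    no disagreement is found, then with overall probability at least 1 - delta
    the hypothesis errs on at most an epsilon fraction of words under the
    sampling distribution."""
    return math.ceil((math.log(1.0 / delta) + round_i * math.log(2.0)) / epsilon)

# e.g. epsilon = delta = 0.01 at the third refinement round:
# pac_sample_size(0.01, 0.01, 3) -> 669
```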

[BOOK] State abstractions for generalization in reinforcement learning

A Zhang - 2021 - search.proquest.com
The advent of deep learning has shepherded unprecedented progress in supervised
learning through learned representations that generalize. However, good representations in …