Learning causal state representations of partially observable environments
Intelligent agents can cope with sensory-rich environments by learning task-agnostic state
abstractions. In this paper, we propose an algorithm to approximate causal states, which are …
State-regularized recurrent neural networks
Recurrent neural networks are a widely used class of neural architectures with two
shortcomings. First, it is difficult to understand what exactly they learn. Second, they tend to …
State-regularized recurrent neural networks to extract automata and explain predictions
Recurrent neural networks are a widely used class of neural architectures. They have,
however, two shortcomings. First, they are often treated as black-box models and as such it …
Explaining Aha! moments in artificial agents through IKE-XAI: Implicit Knowledge Extraction for eXplainable AI
IC Kaadoud, A Bennetot, B Mawhin, V Charisi… - Neural Networks, 2022 - Elsevier
During the learning process, a child develops a mental representation of the task he or she
is learning. A Machine Learning algorithm also develops a latent representation of the task it …
Knowledge extraction from the learning of sequences in a long short term memory (LSTM) architecture
IC Kaadoud, NP Rougier, F Alexandre - Knowledge-Based Systems, 2022 - Elsevier
Transparency and trust in machine learning algorithms have been deemed to be
fundamental and yet, from a practical point of view, they remain difficult to implement …
Verification of recurrent neural networks through rule extraction
The verification problem for neural networks is verifying whether a neural network will suffer
from adversarial samples, or approximating the maximal allowed scale of adversarial …
Property checking with interpretable error characterization for recurrent neural networks
This paper presents a novel on-the-fly, black-box, property-checking through learning
approach as a means for verifying requirements of recurrent neural networks (RNN) in the …
Extracting automata from recurrent neural networks using queries and counterexamples (extended version)
We consider the problem of extracting a deterministic finite automaton (DFA) from a trained
recurrent neural network (RNN). We present a novel algorithm that uses exact learning and …
On-the-fly black-box probably approximately correct checking of recurrent neural networks
We propose a procedure for checking properties of recurrent neural networks used for
language modeling and sequence classification. Our approach is a case of black-box …
State abstractions for generalization in reinforcement learning
A Zhang - 2021 - search.proquest.com
The advent of deep learning has shepherded unprecedented progress in supervised
learning through learned representations that generalize. However, good representations in …