Knowledge extraction from the learning of sequences in a long short term memory (LSTM) architecture

12/06/2019
by Ikram Chraibi Kaadoud, et al.

We introduce a general method to extract knowledge from a recurrent neural network (Long Short-Term Memory, LSTM) that has learnt to detect whether a given input sequence is valid or not, according to an unknown generative automaton. Based on a clustering of the network's hidden states, we explain how to build and validate an automaton that corresponds to the underlying (unknown) automaton and makes it possible to predict whether a given sequence is valid. The method is illustrated on artificial grammars (variations of Reber's grammar) as well as on a real use case whose underlying grammar is unknown.
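The abstract does not include code, but the pipeline it describes (run sequences through a trained LSTM, cluster the hidden states, and read the clusters off as automaton states) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the LSTM here is untrained and stands in for a trained network, the alphabet and sample strings are from the standard Reber grammar, and the cluster count k is an arbitrary assumption.

import torch
import torch.nn as nn
from sklearn.cluster import KMeans
from collections import defaultdict

ALPHABET = ["B", "T", "P", "S", "X", "V", "E"]    # Reber-grammar symbols
sym2idx = {s: i for i, s in enumerate(ALPHABET)}

# Untrained stand-in for the trained network described in the paper.
lstm = nn.LSTM(input_size=len(ALPHABET), hidden_size=16, batch_first=True)

def hidden_trajectory(seq):
    """Return the sequence of hidden states visited while reading `seq`."""
    x = torch.zeros(1, len(seq), len(ALPHABET))
    for t, s in enumerate(seq):
        x[0, t, sym2idx[s]] = 1.0                 # one-hot encoding
    out, _ = lstm(x)                              # out: (1, T, hidden_size)
    return out.squeeze(0).detach().numpy()

# Sample strings that are valid under the standard Reber grammar.
sequences = ["BTSSXXTVVE", "BPVPXVVE", "BTXXVVE"]
states, transitions = [], []
for seq in sequences:
    traj = hidden_trajectory(seq)
    states.extend(traj)
    base = len(states) - len(traj)                # index of traj[0] in `states`
    # Reading symbol seq[t] moves the network from traj[t-1] to traj[t].
    for t, sym in enumerate(seq[1:], start=1):
        transitions.append((base + t - 1, sym, base + t))

# Cluster the hidden states; each cluster becomes one automaton state.
k = 6                                             # assumed number of states
labels = KMeans(n_clusters=k, n_init=10).fit_predict(states)

# Extracted automaton: (cluster, symbol) -> set of successor clusters.
# For a well-trained network one would validate that each pair maps to
# a single successor, i.e. that the automaton is deterministic.
automaton = defaultdict(set)
for i, sym, j in transitions:
    automaton[(labels[i], sym)].add(labels[j])

for (q, sym), succs in sorted(automaton.items()):
    print(f"state {q} --{sym}--> {sorted(succs)}")

With a trained network and enough sample sequences, the printed transition table can be compared against held-out valid and invalid strings to validate the extracted automaton, as the abstract describes.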


Related research

02/27/2018 · On Extended Long Short-term Memory and Dependent Bidirectional Recurrent Neural Network
In this work, we investigate the memory capability of recurrent neural n...

06/12/2018 · Knowledge Amalgam: Generating Jokes and Quotes Together
Generating humor and quotes are very challenging problems in the field o...

10/03/2018 · Action Model Acquisition using LSTM
In the field of Automated Planning and Scheduling (APS), intelligent age...

12/30/2022 · Deep Recurrent Learning Through Long Short Term Memory and TOPSIS
Enterprise resource planning (ERP) software brings resources, data toget...

07/15/2017 · Knowledge-Guided Recurrent Neural Network Learning for Task-Oriented Action Prediction
This paper aims at task-oriented action prediction, i.e., predicting a s...

08/25/2018 · Neural Task Planning with And-Or Graph Representations
This paper focuses on semantic task planning, i.e., predicting a sequenc...

06/22/2018 · Persistent Hidden States and Nonlinear Transformation for Long Short-Term Memory
Recurrent neural networks (RNNs) have been drawing much attention with g...
