A Formal Language Approach to Explaining RNNs

by Bishwamittra Ghosh, et al.

This paper presents LEXR, a framework for explaining the decision making of recurrent neural networks (RNNs) using a formal description language called Linear Temporal Logic (LTL). LTL is the de facto standard for specifying temporal properties in the context of formal verification, and it features many desirable properties that make the generated explanations easy for humans to interpret: it is a descriptive language, it has a variable-free syntax, and it can easily be translated into plain English. To generate explanations, LEXR follows the principle of counterexample-guided inductive synthesis and combines Valiant's probably approximately correct (PAC) learning with constraint solving. We prove that LEXR's explanations satisfy the PAC guarantee (provided the RNN can be described by LTL) and show empirically that these explanations are more accurate and easier to understand than the ones generated by recent algorithms that extract deterministic finite automata from RNNs.
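To make the counterexample-guided loop concrete, here is a minimal, hypothetical sketch of the idea in Python. Everything below is illustrative, not the paper's implementation: the "RNN" is a hand-written black-box classifier over words on the alphabet {a, b}, the hypothesis space is a tiny hard-coded list of candidate LTL-style formulas with their finite-word semantics (LEXR instead synthesizes LTL formulas via constraint solving), and the equivalence check is a Valiant-style PAC test by random sampling.

```python
import math
import random

random.seed(0)

# Stand-in "RNN": a black-box classifier over words on {a, b}. It accepts
# exactly the words in which no 'a' occurs after a 'b', mirroring the
# finite-trace LTL property G(b -> G !a). (Purely illustrative.)
def rnn(word):
    seen_b = False
    for ch in word:
        if ch == 'b':
            seen_b = True
        elif seen_b:            # an 'a' after some earlier 'b'
            return False
    return True

# Tiny hypothesis space: (formula name, semantics on finite words).
# The real framework enumerates LTL formulas with a constraint solver.
HYPOTHESES = [
    ("G a",          lambda w: all(c == 'a' for c in w)),
    ("F b",          lambda w: 'b' in w),
    ("true",         lambda w: True),
    ("G(b -> G !a)", lambda w: 'a' not in w[w.index('b'):] if 'b' in w else True),
]

def random_word(max_len=8):
    n = random.randrange(max_len + 1)
    return ''.join(random.choice('ab') for _ in range(n))

def explain(epsilon=0.1, delta=0.05, max_rounds=50):
    """CEGIS loop: synthesize a candidate consistent with all constraints,
    then PAC-test it against the black box; failures become constraints."""
    constraints = []                       # (word, rnn label) counterexamples
    for i in range(1, max_rounds + 1):
        # "Synthesizer": first candidate consistent with all constraints.
        name, sem = next(
            (n_, s) for n_, s in HYPOTHESES
            if all(s(w) == lbl for w, lbl in constraints)
        )
        # PAC equivalence check: sample size grows with the round number,
        # in the spirit of Valiant's sample bound.
        m = math.ceil((math.log(1 / delta) + i * math.log(2)) / epsilon)
        counterexample = None
        for _ in range(m):
            w = random_word()
            if sem(w) != rnn(w):
                counterexample = w
                break
        if counterexample is None:
            return name                    # PAC-valid explanation found
        constraints.append((counterexample, rnn(counterexample)))
    raise RuntimeError("no consistent hypothesis within round budget")
```

Each counterexample permanently rules out the candidate that produced it, so the loop refines its hypothesis until one survives the PAC test; with this toy hypothesis space, `explain()` converges to the formula `G(b -> G !a)` that matches the black box.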







Probably Approximately Correct Explanations of Machine Learning Models via Syntax-Guided Synthesis

We propose a novel approach to understanding the decision making of comp...

Verifiable RNN-Based Policies for POMDPs Under Temporal Logic Constraints

Recurrent neural networks (RNNs) have emerged as an effective representa...

Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks

We investigate the internal representations that a recurrent neural netw...

LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation

Recurrent neural networks (RNNs) are temporal networks and cumulative in...

Property-Directed Verification of Recurrent Neural Networks

This paper presents a property-directed approach to verifying recurrent ...

Counterexample-Guided Strategy Improvement for POMDPs Using Recurrent Neural Networks

We study strategy synthesis for partially observable Markov decision pro...

A Formal Hierarchy of RNN Architectures

We develop a formal hierarchy of the expressive capacity of RNN architec...