
A Counting Semantics for Monitoring LTL Specifications over Finite Traces
We consider the problem of monitoring a Linear Time Logic (LTL) specific...

A Temporal Logic for Asynchronous Hyperproperties
Hyperproperties are properties of computational systems that require mor...

The Hierarchy of Hyperlogics
Hyperproperties, which generalize trace properties by relating multiple ...

Connecting What to Say With Where to Look by Modeling Human Attention Traces
We introduce a unified framework to jointly model images, text, and huma...

Steps and Traces
In the theory of coalgebras, trace semantics can be defined in various d...

A Constructive Equivalence between Computation Tree Logic and Failure Trace Testing
The two major systems of formal verification are model checking and alge...

Aggregate-Driven Trace Visualizations for Performance Debugging
Performance issues in cloud systems are hard to debug. Distributed traci...
Teaching Temporal Logics to Neural Networks
We show that a deep neural network can learn the semantics of linear-time temporal logic (LTL). As a challenging task that requires a deep understanding of LTL semantics, we show that our network can solve the trace generation problem for LTL: given a satisfiable LTL formula, find a trace that satisfies the formula. We frame the trace generation problem for LTL as a translation task, i.e., to translate from formulas to satisfying traces, and train an off-the-shelf implementation of the Transformer, a recently introduced deep learning architecture proposed for solving natural language processing tasks. We provide a detailed analysis of our experimental results, comparing multiple hyperparameter settings and formula representations. After training for several hours on a single GPU, the results were surprising: the Transformer returns the syntactically equivalent trace in 89% of the cases; most of the "mispredictions", however (and overall more than 99% of the predicted traces), still satisfy the given LTL formula. In other words, the Transformer generalized from imperfect training data to the semantics of LTL.
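The claim that "mispredicted" traces still satisfy the given formula presupposes a semantic check of each predicted trace. As illustration only (this is not the paper's evaluator, and the paper works with ultimately periodic infinite traces), the following sketch evaluates an LTL formula over a finite trace using finite-trace (LTLf) semantics as a simplification; the tuple-based formula encoding is an assumption made for this example.

```python
# Minimal sketch (not from the paper): check whether a candidate trace
# satisfies an LTL formula, under finite-trace (LTLf) semantics.
# Formulas are nested tuples: ("ap", name), ("not", f), ("and", f, g),
# ("X", f) next, ("F", f) eventually, ("G", f) globally, ("U", f, g) until.
# A trace is a list of sets: the atomic propositions true at each step.

def holds(formula, trace, i=0):
    """Evaluate `formula` at position `i` of `trace`."""
    op = formula[0]
    if op == "ap":
        return i < len(trace) and formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "X":  # next: formula holds at the following step
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "F":  # eventually: formula holds at some later step
        return any(holds(formula[1], trace, k) for k in range(i, len(trace)))
    if op == "G":  # globally: formula holds at every remaining step
        return all(holds(formula[1], trace, k) for k in range(i, len(trace)))
    if op == "U":  # until: right side eventually holds, left side until then
        return any(
            holds(formula[2], trace, k)
            and all(holds(formula[1], trace, j) for j in range(i, k))
            for k in range(i, len(trace))
        )
    raise ValueError(f"unknown operator {op!r}")

# Example: a U b holds on the trace {a}, {a}, {b}, while G a does not.
phi = ("U", ("ap", "a"), ("ap", "b"))
trace = [{"a"}, {"a"}, {"b"}]
print(holds(phi, trace))                 # True
print(holds(("G", ("ap", "a")), trace))  # False
```

A check like this is what makes the 99% figure meaningful: even when the predicted trace differs syntactically from the target, it can be verified semantically against the input formula.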