Little Tricky Logic: Misconceptions in the Understanding of LTL

11/03/2022
by Ben Greenman, et al.

Context: Linear Temporal Logic (LTL) has been used widely in verification. Its importance and popularity have only grown with the revival of temporal logic synthesis, and with new uses of LTL in robotics and planning activities. All these uses demand that the user have a clear understanding of what an LTL specification means.

Inquiry: Despite the growing use of LTL, no studies have investigated the misconceptions users actually have in understanding LTL formulas. This paper addresses that gap with a first study of LTL misconceptions.

Approach: We study researchers' and learners' understanding of LTL in four rounds (three written surveys, one talk-aloud) spread across a two-year timeframe. Concretely, we decompose "understanding LTL" into three questions. A person reading a spec needs to understand what it is saying, so we study the mapping from LTL to English. A person writing a spec needs to go in the other direction, so we study English to LTL. However, misconceptions could arise from two sources: a misunderstanding of LTL's syntax or of its underlying semantics. Therefore, we also study the relationship between formulas and specific traces.

Knowledge: We find several misconceptions that have consequences for learners, tool builders, and designers of new property languages. These findings are already resulting in changes to the Alloy modeling language. We also find that the English to LTL direction was the most common source of errors; unfortunately, this is the critical "authoring" direction in which a subtle mistake can lead to a faulty system. We contribute study instruments that are useful for training learners (whether academic or industrial) who are getting acquainted with LTL, and we provide a code book to assist in the analysis of responses to similar-style questions.

Grounding: Our findings are grounded in the responses to our survey rounds. Round 1 used Quizius to identify misconceptions among learners in a way that reduces the threat of expert blind spots. Rounds 2 and 3 confirm that both additional learners and researchers (who work in formal methods, robotics, and related fields) make similar errors. Round 4 adds deep support for our misconceptions via talk-aloud surveys.

Importance: This work provides useful answers to two critical but unexplored questions: in what ways is LTL tricky, and what can be done about it? Our survey instruments can serve as a starting point for other studies.
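To make the abstract's distinction between reading a formula and checking it against a trace concrete, here is a small illustrative sketch of our own; it is not taken from the paper or its survey instruments. It contrasts the formula G(p -> F q) with a plausible English misreading ("if p, then eventually q", interpreted only at the start of the trace), using a finite-trace approximation of LTL's semantics; the trace, proposition names, and function names are assumptions made purely for illustration.

    # Illustrative sketch only (assumptions: finite-trace reading of LTL, toy propositions p and q).
    # Each state in the trace is the set of atomic propositions true at that instant.
    trace = [{"p"}, {"q"}, {"p"}, set(), set()]

    def eventually_q(trace, i):
        # F q from position i: q holds at some position j >= i.
        return any("q" in trace[j] for j in range(i, len(trace)))

    def globally_p_implies_eventually_q(trace):
        # G(p -> F q): every position where p holds is followed (at or after it) by a q.
        return all(("p" not in trace[i]) or eventually_q(trace, i)
                   for i in range(len(trace)))

    def initially_p_implies_eventually_q(trace):
        # A plausible misreading of the English "if p then eventually q": checked only at the start.
        return ("p" not in trace[0]) or eventually_q(trace, 0)

    print(globally_p_implies_eventually_q(trace))   # False: the p at index 2 is never followed by a q
    print(initially_p_implies_eventually_q(trace))  # True: the initial p is followed by a q at index 1

The point is only that two readings of the same English sentence can disagree on the same trace, which is the kind of authoring error the paper studies; see the paper itself for the actual misconception catalog and survey materials.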
