
Systematic Classification of Attackers via Bounded Model Checking

by Eric Rothstein-Morris et al.
Singapore Management University
Singapore University of Technology and Design

In this work, we study the problem of verifying systems in the presence of attackers using bounded model checking. Given a system and a set of security requirements, we present a methodology to generate and classify attackers, mapping each attacker to the set of requirements it can break. A naive approach suffers from the same shortcomings as any large model-checking problem, namely memory shortage and exponential running time. To cope with these shortcomings, we describe two sound heuristics, one based on cone-of-influence reduction and one based on learning, which we demonstrate empirically by applying our methodology to a set of hardware benchmark systems.
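To make the idea concrete, here is a minimal Python sketch of the classification loop on a toy system. Everything in it is illustrative, not the paper's encoding: the mod-4 counter, the action names, and helpers such as `reachable`, `broken`, and `cone_of_influence` are all assumptions. Attackers are modeled as subsets of tamperable actions, each is bounded-checked against every requirement, and the result maps attackers to the requirements they break; a small fixpoint computation shows the spirit of the cone-of-influence heuristic (restricting each check to the variables that can influence the property).

```python
from itertools import chain, combinations

# Toy system: a mod-4 counter that normally idles at 0; an attacker may
# inject the actions it controls at every step. (Illustrative model only.)
ACTIONS = {"inc", "reset"}   # attacker-controllable actions
K = 4                        # BMC bound

def step(state, action):
    if action == "inc":
        return (state + 1) % 4
    if action == "reset":
        return 0
    return state             # "skip": normal, unattacked behaviour

def reachable(attacker, bound=K):
    """All states reachable within `bound` steps when the attacker may
    inject any of its actions (or do nothing) at each step."""
    states = {0}
    for _ in range(bound):
        states |= {step(s, a) for s in states for a in attacker | {"skip"}}
    return states

# Security requirements as state invariants that must hold up to bound K.
REQS = {
    "R1_below_3": lambda s: s < 3,
    "R2_nonneg":  lambda s: s >= 0,
}

def broken(attacker):
    """Bounded check: the set of requirements this attacker can break."""
    return {name for name, holds in REQS.items()
            if any(not holds(s) for s in reachable(attacker))}

def powerset(xs):
    xs = sorted(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Classification: map every candidate attacker to what it breaks.
classification = {frozenset(a): broken(set(a)) for a in powerset(ACTIONS)}

def cone_of_influence(deps, prop_vars):
    """Fixpoint of the variables that can influence `prop_vars`, given
    `deps`: var -> vars read by its next-state function. A requirement
    check only needs the sub-model over this cone."""
    cone, frontier = set(prop_vars), set(prop_vars)
    while frontier:
        for d in deps.get(frontier.pop(), ()):
            if d not in cone:
                cone.add(d)
                frontier.add(d)
    return cone
```

On this toy model, the empty attacker breaks nothing, while any attacker controlling `inc` can drive the counter to 3 within the bound and so breaks `R1_below_3`; the real methodology performs the analogous bounded checks symbolically on hardware models rather than by explicit state enumeration.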

