
Systematic Classification of Attackers via Bounded Model Checking

11/13/2019
by Eric Rothstein-Morris, et al.
Singapore Management University
Singapore University of Technology and Design

In this work, we study the problem of verifying systems in the presence of attackers using bounded model checking. Given a system and a set of security requirements, we present a methodology to generate and classify attackers, mapping each attacker to the set of requirements that it can break. A naive approach suffers from the same shortcomings as any large model checking problem, namely memory shortage and exponential running time. To cope with these shortcomings, we describe two sound heuristics, one based on cone-of-influence reduction and one based on learning, and we evaluate them empirically by applying our methodology to a set of hardware benchmark systems.
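As a minimal illustration of the classification idea (a sketch, not the paper's actual procedure), the snippet below replaces the bounded-model-checking call with a toy stand-in: an attacker is modelled as the set of system variables it can tamper with, and, in cone-of-influence style, it can only break a requirement whose cone of influence contains one of those variables. The `CONE` table, the `breaks` predicate, and all variable names are hypothetical.

```python
from itertools import combinations

# Hypothetical cone of influence for each security requirement: the set of
# system variables that can possibly affect whether the requirement holds.
CONE = {
    "integrity": {"bus", "sensor"},
    "availability": {"clock"},
    "confidentiality": {"bus", "key"},
}
VARIABLES = ("bus", "sensor", "clock", "key")

def breaks(attacker, requirement):
    # Stand-in for a bounded-model-checking query. In this toy model an
    # attacker breaks a requirement iff it controls some variable in that
    # requirement's cone of influence.
    return bool(attacker & CONE[requirement])

def classify(attackers, requirements):
    """Map each attacker to the set of requirements it can break,
    grouping attackers that break the same set into one class."""
    classes = {}
    for attacker in attackers:
        broken = frozenset(r for r in requirements if breaks(attacker, r))
        classes.setdefault(broken, []).append(attacker)
    return classes

# Enumerate all attackers as subsets of the tamperable variables.
attackers = [frozenset(c)
             for k in range(len(VARIABLES) + 1)
             for c in combinations(VARIABLES, k)]
classes = classify(attackers, CONE)
```

Attackers that break the same set of requirements land in the same equivalence class, which is what makes the classification, rather than a per-attacker report, the useful output; the cone-based `breaks` check is where a real implementation would prune or dispatch expensive model-checking queries.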


Related research

11/16/2018 · Quantifying Attacker Capability Via Model Checking Multiple Properties (Extended Version)
This work aims to solve a practical problem, i.e., how to quantify the r...

07/25/2018 · Model Checking Quantum Systems --- A Survey
This article discusses the essential difficulties in developing model-ch...

11/20/2020 · Experiences from Large-Scale Model Checking: Verification of a Vehicle Control System
In the age of autonomously driving vehicles, functionality and complexit...

10/20/2019 · MSO-Definable Regular Model Checking
Regular Model Checking (RMC) is a symbolic model checking technique wher...

05/08/2020 · Synthesizing Safe Policies under Probabilistic Constraints with Reinforcement Learning and Bayesian Model Checking
In this paper we propose Policy Synthesis under probabilistic Constraint...

09/18/2020 · Bounded Model Checking for Hyperproperties
This paper introduces the first bounded model checking (BMC) algorithm f...