VerifyThis 2019: A Program Verification Competition (Extended Report)

by   Claire Dross, et al.

VerifyThis is a series of program verification competitions that emphasize the human aspect: participants tackle the verification of detailed behavioral properties, a task that lies beyond the capabilities of fully automatic verification and instead requires human expertise to suitably encode programs, specifications, and invariants. This paper describes the 8th edition of VerifyThis, which took place at ETAPS 2019 in Prague. Thirteen teams entered the competition, which consisted of three verification challenges and spanned two days of work. The report analyzes how the participating teams fared on these challenges, reflects on what makes a verification challenge more or less suitable for the typical VerifyThis participants, and outlines the difficulties of comparing the work of teams using widely different verification approaches in a competition focused on the human aspect.

