Data-Driven Invariant Learning for Probabilistic Programs

06/09/2021
by Jialu Bao, et al.

Morgan and McIver's weakest pre-expectation framework is one of the most well-established methods for deductive verification of probabilistic programs. Roughly, the idea is to generalize binary state assertions to real-valued expectations. While loop-free programs can be analyzed by mechanically transforming expectations, verifying loops usually requires finding an invariant expectation, a difficult task. We propose a new view of invariant expectation synthesis as a regression problem: given an input state, predict the average value of the post-expectation. Guided by this perspective, we develop the first data-driven invariant synthesis method for probabilistic programs. Unlike prior work on probabilistic invariant inference, our approach can learn piecewise continuous invariants without relying on template expectations, and also works when only given black-box access to the program. We implement our approach and demonstrate its effectiveness on a variety of benchmarks from the probabilistic programming literature.
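To make the regression view concrete, here is a minimal sketch (not the authors' implementation, and helper names such as run_program and estimate_post_expectation are hypothetical): treat the program as a black box, run it many times from a range of initial states, average the value of the post-expectation over those runs, and fit a model mapping initial state to that average. The fitted model is only a candidate invariant expectation and would still need to be checked for inductiveness.

```python
import random
import numpy as np

def run_program(n):
    """Black-box probabilistic loop: while flip(1/2): n += 1.
    Returns the final value of n, on which the post-expectation is evaluated."""
    while random.random() < 0.5:
        n += 1
    return n  # post-expectation here is simply the final value of n

def estimate_post_expectation(n0, trials=5000):
    """Monte-Carlo estimate of the expected post-expectation from initial state n0."""
    return sum(run_program(n0) for _ in range(trials)) / trials

# Sample initial states and estimate the expected post-value from each.
init_states = np.arange(0, 20)
targets = np.array([estimate_post_expectation(n) for n in init_states])

# Fit a linear candidate invariant a*n + b by least squares.
features = np.column_stack([init_states, np.ones_like(init_states)])
coeffs, *_ = np.linalg.lstsq(features, targets, rcond=None)
print(f"candidate invariant: {coeffs[0]:.2f}*n + {coeffs[1]:.2f}")
```

For this fair-coin loop the expected number of extra increments is 1, so the regression should recover approximately 1.00*n + 1.00, matching the true invariant expectation n + 1; richer model classes (e.g., piecewise regressors) would play the analogous role for the piecewise continuous invariants mentioned above.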
