Formal Verification of Input-Output Mappings of Tree Ensembles

05/10/2019
by John Törnblom, et al.

Recent advances in machine learning and artificial intelligence are now being considered in safety-critical autonomous systems where software defects may cause severe harm to humans and the environment. Design organizations in these domains are currently unable to provide convincing arguments that their systems are safe to operate when machine learning algorithms are used to implement their software. In this paper, we present an efficient method to extract equivalence classes from decision trees and tree ensembles, and to formally verify that their input-output mappings comply with requirements. The idea is that, since safety requirements can be traced to desirable properties of system input-output patterns, positive verification outcomes can be used in safety arguments. This paper presents the implementation of the method in the tool VoTE (Verifier of Tree Ensembles), and evaluates its scalability on two case studies presented in current literature. We demonstrate that our method is practical for tree ensembles trained on low-dimensional data with up to 25 decision trees and tree depths of up to 20. Our work also studies the limitations of the method on high-dimensional data and offers a preliminary investigation of the trade-off between the number of trees and verification time.
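The abstract's notion of an equivalence class can be illustrated on a single axis-aligned decision tree: every root-to-leaf path corresponds to a hyperrectangle of inputs that all produce the same output, so checking a requirement reduces to checking it once per reachable leaf region. The sketch below is not the authors' VoTE implementation; it is a minimal Python illustration of that idea, with all names (Node, equivalence_classes, verify) chosen here for exposition.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Node:
    # Internal node: split on `feature` at `threshold`; leaf: only `value` is set.
    feature: Optional[int] = None
    threshold: Optional[float] = None
    left: Optional["Node"] = None    # taken when x[feature] <= threshold
    right: Optional["Node"] = None   # taken when x[feature] >  threshold
    value: Optional[float] = None

def equivalence_classes(node: Node, box: List[Tuple[float, float]]):
    """Yield (input hyperrectangle, output value) pairs, one per reachable leaf.

    Every input inside a yielded box follows the same root-to-leaf path,
    so the box is an equivalence class of the tree's input-output mapping.
    (Open/closed endpoint details are glossed over in this sketch.)
    """
    if node.value is not None:                    # leaf reached
        yield list(box), node.value
        return
    lo, hi = box[node.feature]
    if lo <= node.threshold:                      # left branch is reachable
        left_box = list(box)
        left_box[node.feature] = (lo, min(hi, node.threshold))
        yield from equivalence_classes(node.left, left_box)
    if hi > node.threshold:                       # right branch is reachable
        right_box = list(box)
        right_box[node.feature] = (max(lo, node.threshold), hi)
        yield from equivalence_classes(node.right, right_box)

def verify(tree: Node, input_box, prop) -> bool:
    """Check that `prop(box, output)` holds for every equivalence class."""
    return all(prop(box, out) for box, out in equivalence_classes(tree, input_box))

# Usage: a 1-D tree and a requirement that outputs stay within [0, 1].
tree = Node(feature=0, threshold=0.5,
            left=Node(value=0.2), right=Node(value=0.9))
print(verify(tree, [(0.0, 1.0)], lambda box, out: 0.0 <= out <= 1.0))  # True
```

For an ensemble, the analogous construction has to combine regions across trees, which is presumably where the trade-off the abstract mentions between the number of trees and verification time arises.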


