
Counterfactual Explanations for Models of Code

by Jürgen Cito et al.

Machine learning (ML) models play an increasingly prevalent role in many software engineering tasks. However, because most models are now powered by opaque deep neural networks, it can be difficult for developers to understand why a model came to a certain conclusion and how to act on its prediction. Motivated by this problem, this paper explores counterfactual explanations for models of source code. Such counterfactual explanations constitute minimal changes to the source code under which the model "changes its mind". We integrate counterfactual explanation generation into models of source code in a real-world setting. We describe considerations that affect both the ability to find realistic and plausible counterfactual explanations and the usefulness of such explanations to the user of the model. In a series of experiments, we investigate the efficacy of our approach on three different models, each based on a BERT-like architecture operating over source code.
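The core idea described above, a minimal change to the input under which the model "changes its mind", can be sketched as a greedy token-masking search. The `find_counterfactual` function and the toy classifier below are illustrative assumptions for exposition, not the paper's actual method, which operates on BERT-like models over source code.

```python
from typing import Callable, List, Optional


def find_counterfactual(
    tokens: List[str],
    predict: Callable[[List[str]], int],
    mask: str = "<mask>",
) -> Optional[List[str]]:
    """Greedily mask tokens until the model's prediction flips.

    Returns a minimally perturbed copy of `tokens` on which
    `predict` disagrees with its prediction on the original
    input, or None if no counterfactual is found.
    """
    original = predict(tokens)
    current = list(tokens)
    masked = set()
    for _ in range(len(tokens)):
        # Try each single additional mask; return on the first flip.
        for i in range(len(current)):
            if i in masked:
                continue
            candidate = current[:i] + [mask] + current[i + 1:]
            if predict(candidate) != original:
                return candidate
        # No single extra mask flips the prediction: commit one more
        # mask and retry (a real system would rank positions by how
        # much they reduce the model's confidence).
        for i in range(len(current)):
            if i not in masked:
                current[i] = mask
                masked.add(i)
                break
        else:
            break
    return None


# Toy stand-in for a learned model: flags code that calls eval().
def toy_classifier(tokens: List[str]) -> int:
    return int("eval" in tokens)


source = ["result", "=", "eval", "(", "user_input", ")"]
cf = find_counterfactual(source, toy_classifier)
# cf masks only the "eval" token, flipping the prediction.
```

In this sketch the counterfactual differs from the input in exactly one token, which is what makes it actionable: it points the developer at the part of the code driving the prediction.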
