Model Vulnerability to Distributional Shifts over Image Transformation Sets

03/28/2019
by Riccardo Volpi, et al.

We are concerned with the vulnerability of computer vision models to distributional shifts. We cast this problem as combinatorial optimization, searching for the regions of the input space where a (black-box) model is most vulnerable. This is carried out by composing image transformations from a given set and applying standard search algorithms. We embed this idea in a training procedure, where we define new data augmentation rules over iterations, according to the image transformations that the current model is most vulnerable to. An empirical evaluation on classification and semantic segmentation problems suggests that the devised algorithm allows training models that are more robust to content-preserving image transformations and, in general, to distributional shifts.
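To make the search-then-augment idea concrete, here is a minimal sketch assuming a PyTorch model and torchvision transforms. The transformation set, the random-search strategy, and all parameters (`length`, `budget`, `refresh_every`) are illustrative assumptions, not the paper's exact configuration; the abstract only states that standard search algorithms over a transformation set are used.

```python
# Sketch: find the composition of transformations the current model is most
# vulnerable to, then use it as a data augmentation rule during training.
import random

import torch
import torch.nn.functional as F
from torchvision.transforms import functional as TF

# Hypothetical set of content-preserving transformations.
TRANSFORMS = {
    "brightness": lambda x: TF.adjust_brightness(x, 1.5),
    "contrast":   lambda x: TF.adjust_contrast(x, 0.5),
    "hflip":      TF.hflip,
    "rotate":     lambda x: TF.rotate(x, 15),
    "blur":       lambda x: TF.gaussian_blur(x, kernel_size=5),
}

def compose(names):
    """Apply the named transformations in sequence."""
    def fn(x):
        for name in names:
            x = TRANSFORMS[name](x)
        return x
    return fn

@torch.no_grad()
def vulnerability(model, images, labels, names):
    """Loss of the (black-box) model on transformed inputs;
    higher loss means the model is more vulnerable to this composition."""
    logits = model(compose(names)(images))
    return F.cross_entropy(logits, labels).item()

def worst_composition(model, images, labels, length=2, budget=50):
    """Random search (one possible 'standard search algorithm') over
    compositions of `length` transformations."""
    best_names, best_loss = None, -float("inf")
    for _ in range(budget):
        names = random.sample(list(TRANSFORMS), length)
        loss = vulnerability(model, images, labels, names)
        if loss > best_loss:
            best_names, best_loss = names, loss
    return best_names

def train(model, loader, optimizer, epochs=10, refresh_every=100):
    """Periodically refresh the augmentation rule with the composition
    the current model is most vulnerable to."""
    augment = lambda x: x  # start with the identity transformation
    step = 0
    for _ in range(epochs):
        for images, labels in loader:
            if step % refresh_every == 0:
                augment = compose(worst_composition(model, images, labels))
            loss = F.cross_entropy(model(augment(images)), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            step += 1
```

The key design point is the feedback loop: the augmentation rule is not fixed in advance but re-derived from the current model's weaknesses, so training pressure follows the model's most vulnerable regions of the transformation space.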

