The Impact of Hole Geometry on Relative Robustness of In-Painting Networks: An Empirical Study

03/04/2020
by Masood S. Mortazavi, et al.

In-painting networks use existing pixels to generate appropriate pixels to fill "holes" placed on parts of an image. A 2-D in-painting network's input usually consists of (1) a three-channel 2-D image and (2) an additional channel marking the "holes" to be in-painted in that image. In this paper, we study the robustness of a given in-painting neural network against variations in hole-geometry distributions. We observe that the robustness of an in-painting network depends on the probability distribution function (PDF) of the hole geometry presented to it during training, even when the underlying image dataset used (in training and testing) does not change. We develop an experimental methodology for testing and evaluating the relative robustness of in-painting networks against four different kinds of hole-geometry PDFs. We examine a number of hypotheses regarding (1) the natural bias of in-painting networks toward the hole distribution used in their training, (2) the underlying dataset's ability to differentiate relative robustness as hole distributions vary across a train-test (cross-comparison) grid, and (3) the impact of the directional distribution of edges in the holes and in the image dataset. We present results for the L1, PSNR, and SSIM quality metrics and develop a specific measure of relative in-painting robustness, based on these quality metrics, for use in cross-comparison grids. (Other quality metrics can be incorporated into this relative measure.) The empirical work reported here is an initial step in a broader and deeper investigation of the sensitivity, robustness, and regularization of "fill-in-the-blank" neural networks with respect to hole-geometry PDFs, and it suggests further research in this domain.
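To make the setup concrete, the following is a minimal Python/NumPy sketch, not the authors' code, of two ingredients the abstract describes: the four-channel input (three image channels plus a hole-mask channel) and one plausible way to normalize a train-by-test cross-comparison grid of quality scores into a relative robustness measure. All names here (random_rect_hole_mask, make_inpainting_input, relative_robustness), the rectangular hole PDF, and the row-normalization scheme are illustrative assumptions; the paper's own measure and hole-geometry PDFs are defined in its full text.

    import numpy as np

    def random_rect_hole_mask(h, w, rng):
        # Sample a binary hole mask from one simple hole-geometry PDF
        # (axis-aligned rectangles); 1 marks pixels to be in-painted.
        mask = np.zeros((h, w), dtype=np.float32)
        rh = int(rng.integers(h // 8, h // 2))
        rw = int(rng.integers(w // 8, w // 2))
        top = int(rng.integers(0, h - rh))
        left = int(rng.integers(0, w - rw))
        mask[top:top + rh, left:left + rw] = 1.0
        return mask

    def make_inpainting_input(image, mask):
        # Build the four-channel input the abstract describes: the three
        # image channels with hole pixels zeroed out, plus the mask channel.
        holed = image * (1.0 - mask)[..., None]
        return np.concatenate([holed, mask[..., None]], axis=-1)  # (H, W, 4)

    def psnr(reference, restored, peak=1.0):
        # Peak signal-to-noise ratio, one of the paper's three quality metrics.
        mse = float(np.mean((reference - restored) ** 2))
        return 10.0 * np.log10(peak ** 2 / max(mse, 1e-12))

    def relative_robustness(grid):
        # grid[i, j]: mean quality score of a network trained with hole
        # PDF i and tested with hole PDF j. Normalizing each row by its
        # matched (diagonal) score is one plausible relative measure:
        # values near 1 suggest robustness to unseen hole geometries.
        diag = np.diag(grid).astype(np.float64)[:, None]
        return np.asarray(grid, dtype=np.float64) / diag

    # Usage: assemble one network input from a random image and hole mask.
    rng = np.random.default_rng(0)
    image = rng.random((128, 128, 3)).astype(np.float32)
    net_input = make_inpainting_input(image, random_rect_hole_mask(128, 128, rng))
    print(net_input.shape)  # (128, 128, 4)

The row normalization assumes the matched train-test pair on the diagonal scores highest for metrics where larger is better (PSNR, SSIM); for L1 error, where smaller is better, the ratio would be inverted.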
