Generalizing Neural Networks by Reflecting Deviating Data in Production

10/06/2021
by   Yan Xiao, et al.

Trained on a sufficiently large dataset, Deep Neural Networks (DNNs) are expected to generalize. In real deployments, however, inputs may deviate from the training distribution; this is a fundamental limitation of any finite dataset. Worse, real inputs may drift over time away from the expected distribution. Together, these issues can lead deployed DNNs to mis-predict in production. In this work, we present a runtime approach that mitigates DNN mis-predictions caused by unexpected runtime inputs. In contrast to previous work that considers the structure and parameters of the DNN itself, our approach treats the DNN as a black box and focuses on its inputs. The approach has two steps. First, it recognizes and distinguishes "unseen" semantics-preserving inputs, using a distribution analyzer based on a distance metric learned by a Siamese network. Second, it transforms those unexpected inputs into inputs from the training set that are identified as having similar semantics. We call this process input reflection and formulate it as a search problem over the embedding space of the training set, which is learned by a Quadruplet network serving as an auxiliary model that improves the generalization of the subject model. We implemented this two-step approach in a tool called InputReflector and evaluated it on three DNN models trained on the CIFAR-10, MNIST, and FMNIST image datasets. The results show that InputReflector effectively distinguishes both deviating inputs that retain the semantics of the training distribution (e.g., blurred, brightened, contrasted, and zoomed images) and out-of-distribution inputs from normal inputs.
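The abstract does not include code, but the two-step pipeline is easy to illustrate. Below is a minimal PyTorch-style sketch of the idea. Everything in it (the small Encoder architecture, the centroid-distance test, the threshold tau, and all function names) is an illustrative assumption for exposition, not InputReflector's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Small CNN mapping a 3x32x32 image to a unit-norm embedding.
    Stands in for the paper's Siamese and Quadruplet networks, which
    are trained with contrastive/quadruplet losses (not shown here).
    Channel count and input size are assumptions for this sketch."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

@torch.no_grad()
def is_deviating(siamese, x, train_centroid, tau):
    """Step 1 (distribution analyzer): flag inputs whose distance to
    the training distribution, under the Siamese metric, exceeds tau.
    A single-centroid test is a simplification assumed here."""
    dist = torch.norm(siamese(x) - train_centroid, dim=-1)
    return dist > tau

@torch.no_grad()
def reflect(quad, x, train_embs, train_imgs):
    """Step 2 (input reflection): nearest-neighbor search over the
    training set in the Quadruplet network's embedding space."""
    e = quad(x)                         # (B, dim)
    dists = torch.cdist(e, train_embs)  # (B, N) pairwise distances
    idx = dists.argmin(dim=1)           # closest training example
    return train_imgs[idx]

@torch.no_grad()
def predict_with_reflection(model, siamese, quad, x,
                            train_embs, train_imgs, centroid, tau):
    """Wrap the black-box subject model: deviating inputs are replaced
    by their reflections before prediction; normal inputs pass through."""
    mask = is_deviating(siamese, x, centroid, tau)
    x = x.clone()
    x[mask] = reflect(quad, x[mask], train_embs, train_imgs)
    return model(x)
```

In the paper's terms, is_deviating plays the role of the distribution analyzer and reflect performs the search over the training-set embedding space; the actual tool learns its distance metrics with Siamese and Quadruplet training objectives rather than the fixed centroid test sketched above.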

