Neural Network Virtual Sensors for Fuel Injection Quantities with Provable Performance Specifications

06/30/2020
by Eric Wong et al.

Recent work has shown that it is possible to learn neural networks with provable guarantees on the output of the model when subject to input perturbations; however, these works have focused primarily on defending against adversarial examples for image classifiers. In this paper, we study how these provable guarantees can be naturally applied to other real-world settings, namely obtaining performance specifications for robust virtual sensors measuring fuel injection quantities within an engine. We first demonstrate that, in this setting, even simple neural network models are highly susceptible to reasonable levels of adversarial sensor noise, which can increase the mean relative error of a standard neural network from 6.6% to 44.8%. We then leverage methods for learning provably robust networks and verifying robustness properties, resulting in a robust model which we can provably guarantee has at most 16.5% mean relative error under any sensor noise. Additionally, we show how specific intervals of fuel injection quantities can be targeted to maximize robustness for certain ranges, allowing us to train a virtual sensor for fuel injection which is provably guaranteed to have at most 10.69% mean relative error under noise while maintaining 3% mean relative error without noise within the normalized fuel injection range of 0.6 to 1.0.

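The abstract does not show how such a guarantee is computed. As a rough, self-contained illustration, the sketch below uses interval bound propagation (IBP), one standard technique for bounding the output of a ReLU network under bounded sensor noise; it is not necessarily the exact verification method used in the paper, and the network sizes, weights, noise radius eps, sensor reading, and ground-truth value are all made-up assumptions.

```python
import numpy as np

def ibp_bounds(weights, biases, x, eps):
    """Propagate the l_inf ball [x - eps, x + eps] through a ReLU MLP
    with interval bound propagation, returning elementwise lower/upper
    bounds on the network output that hold for every noise realization."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        mid, rad = (lo + hi) / 2, (hi - lo) / 2
        mid_out = W @ mid + b
        rad_out = np.abs(W) @ rad            # worst-case growth of the interval
        lo, hi = mid_out - rad_out, mid_out + rad_out
        if i < len(weights) - 1:             # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Hypothetical, untrained toy model: 8 noisy sensor channels -> 1 fuel
# injection quantity (all values below are illustrative, not from the paper).
rng = np.random.default_rng(0)
dims = [8, 32, 32, 1]
weights = [rng.normal(0.0, 0.3, (dims[i + 1], dims[i])) for i in range(3)]
biases = [np.zeros(dims[i + 1]) for i in range(3)]

x = rng.uniform(0.0, 1.0, size=8)            # one made-up sensor reading
y_true = 0.8                                 # made-up normalized injection quantity
lo, hi = ibp_bounds(weights, biases, x, eps=0.05)

# A certified bound on relative error: no noise within the eps-ball can push
# the prediction outside [lo, hi], so the worse endpoint bounds the error.
worst_rel_err = max(abs(lo[0] - y_true), abs(hi[0] - y_true)) / abs(y_true)
print(f"output bounds: [{lo[0]:.3f}, {hi[0]:.3f}], "
      f"certified relative error <= {worst_rel_err:.1%}")
```

For an untrained random network these intervals are loose; provably robust training, as described in the abstract, optimizes the network so that bounds of this kind stay tight, which is what makes guarantees like the 16.5% figure attainable.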