A Provable Defense for Deep Residual Networks

03/29/2019
by Matthew Mirman et al.

We present a training system that can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100. Our approach is based on differentiable abstract interpretation and introduces two novel concepts: (i) abstract layers for fine-tuning the precision and scalability of the abstraction, and (ii) a flexible domain-specific language (DSL) for describing training objectives that combine abstract and concrete losses with arbitrary specifications. Our training method is implemented in the DiffAI system.
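To make the idea concrete, below is a minimal sketch of differentiable abstract interpretation with the interval (box) domain in PyTorch. The names `Box`, `linear_box`, `relu_box`, `interval_loss`, and `mixed_loss` are illustrative assumptions for this sketch, not the DiffAI API; DiffAI's actual abstract layers and training DSL are considerably richer.

```python
import torch
import torch.nn.functional as F

class Box:
    """Interval abstract element: all points in [center - width, center + width].
    (Illustrative stand-in for an abstract domain; not the DiffAI API.)"""
    def __init__(self, center, width):
        self.center = center  # (batch, features) midpoints
        self.width = width    # (batch, features) non-negative half-widths

def linear_box(box, weight, bias):
    # An affine map moves the center exactly; widths propagate through
    # the element-wise absolute values of the weights.
    return Box(box.center @ weight.t() + bias,
               box.width @ weight.abs().t())

def relu_box(box):
    # ReLU is monotone, so applying it to both interval ends is exact
    # for the box domain; recover center/width from the new bounds.
    lo = F.relu(box.center - box.width)
    hi = F.relu(box.center + box.width)
    return Box((lo + hi) / 2, (hi - lo) / 2)

def interval_loss(box, labels):
    # Worst-case logits inside the box: upper bound for every wrong
    # class, lower bound for the true class. Differentiable end to end.
    lo, hi = box.center - box.width, box.center + box.width
    true_mask = F.one_hot(labels, hi.shape[-1]).bool()
    worst = torch.where(true_mask, lo, hi)
    return F.cross_entropy(worst, labels)

# Tiny two-layer network, trained with a mixed concrete/abstract objective.
w1, b1 = torch.randn(64, 784, requires_grad=True), torch.zeros(64, requires_grad=True)
w2, b2 = torch.randn(10, 64, requires_grad=True), torch.zeros(10, requires_grad=True)

def mixed_loss(x, y, eps=0.1, lam=0.5):
    # Concrete pass: a box of width zero, i.e., ordinary forward execution.
    point = linear_box(relu_box(linear_box(Box(x, torch.zeros_like(x)), w1, b1)), w2, b2)
    # Abstract pass: an L-infinity ball of radius eps around x.
    region = linear_box(relu_box(linear_box(Box(x, torch.full_like(x, eps)), w1, b1)), w2, b2)
    return lam * F.cross_entropy(point.center, y) + (1 - lam) * interval_loss(region, y)
```

The final line mirrors the kind of objective the DSL can express: a convex combination of the standard cross-entropy on the concrete input and the worst-case loss over the abstract region, so gradient descent trades off accuracy against provable robustness.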
