LOT: Layer-wise Orthogonal Training on Improving l2 Certified Robustness

10/20/2022
by Xiaojun Xu, et al.

Recent studies show that training deep neural networks (DNNs) with Lipschitz constraints can enhance adversarial robustness and other model properties such as stability. In this paper, we propose a layer-wise orthogonal training method (LOT) to effectively train 1-Lipschitz convolution layers via parametrizing an orthogonal matrix with an unconstrained matrix. We then efficiently compute the inverse square root of a convolution kernel by transforming the input domain to the Fourier frequency domain. On the other hand, as existing works show that semi-supervised training helps improve empirical robustness, we aim to bridge the gap and prove that semi-supervised learning also improves the certified robustness of Lipschitz-bounded models. We conduct comprehensive evaluations of LOT under different settings and show that LOT significantly outperforms baselines on deterministic l2 certified robustness and scales to deeper neural networks. Under the supervised scenario, we improve the state-of-the-art certified robustness for all architectures (e.g., from 59.04% to 63.50% on CIFAR-10 and from 32.57% to 34.59% on CIFAR-100 at radius rho = 36/255). With semi-supervised learning over unlabelled data, we improve the state-of-the-art certified robustness on CIFAR-10 at rho = 108/255 from 36.04% to 42.39%. In addition, LOT consistently outperforms baselines on different model architectures with only 1/3 of the evaluation time.
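The key step described in the abstract is the orthogonal parametrization V -> V (V^H V)^{-1/2}, applied independently to each frequency component of the kernel after a 2D FFT. Below is a minimal PyTorch sketch of that idea, not the paper's implementation: the function name, the equal input/output channel assumption, and the feature-map size argument n are illustrative, and the inverse square root here uses an eigendecomposition, whereas the paper computes it with Newton's method iterations.

```python
import torch

def lot_orthogonal_kernel(weight: torch.Tensor, n: int) -> torch.Tensor:
    """Sketch: project an unconstrained (c, c, k, k) kernel onto an
    orthogonal circular convolution over n x n feature maps."""
    # 2D FFT of the kernel (zero-padded to n x n) gives, for each
    # frequency (u, v), an unconstrained c x c complex matrix V[u, v].
    V = torch.fft.fft2(weight, s=(n, n))      # (c, c, n, n), complex
    V = V.permute(2, 3, 0, 1)                 # (n, n, c, c)
    # V (V^H V)^{-1/2} makes each per-frequency matrix unitary.
    VhV = V.mH @ V                            # Hermitian PSD, (n, n, c, c)
    eigval, eigvec = torch.linalg.eigh(VhV)
    inv_sqrt_diag = eigval.clamp_min(1e-8).rsqrt().to(eigvec.dtype)
    inv_sqrt = (eigvec * inv_sqrt_diag.unsqueeze(-2)) @ eigvec.mH
    Q = V @ inv_sqrt                          # unitary at every frequency
    # Back to the spatial domain; conjugate symmetry of the spectrum is
    # preserved, so the imaginary part vanishes up to numerical error.
    return torch.fft.ifft2(Q.permute(2, 3, 0, 1)).real
```

Because every frequency-domain matrix Q is unitary, the resulting kernel defines a circular convolution with an orthogonal Jacobian, hence exactly 1-Lipschitz in the l2 norm; applying it requires circularly padded convolution (e.g., F.conv2d on a circularly padded input).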

Related research

11/15/2022 · Improved techniques for deterministic l2 robustness
Training convolutional neural networks (CNNs) with a strict 1-Lipschitz ...

03/20/2023 · Lipschitz-bounded 1D convolutional neural networks using the Cayley transform and the controllability Gramian
We establish a layer-wise parameterization for 1D convolutional neural n...

05/24/2021 · Skew Orthogonal Convolutions
Training convolutional neural networks with a Lipschitz constraint under...

07/11/2019 · Time2Vec: Learning a Vector Representation of Time
Time is an important feature in many applications involving events that ...

11/14/2015 · Efficient Training of Very Deep Neural Networks for Supervised Hashing
In this paper, we propose training very deep neural networks (DNNs) for ...

07/22/2022 · Training Certifiably Robust Neural Networks Against Semantic Perturbations
Semantic image perturbations, such as scaling and rotation, have been sh...

07/17/2023 · Systematic Testing of the Data-Poisoning Robustness of KNN
Data poisoning aims to compromise a machine learning based software comp...
