Can convolutional ResNets approximately preserve input distances? A frequency analysis perspective

06/04/2021
by   Lewis Smith, et al.

ResNets constrained to be bi-Lipschitz, that is, approximately distance preserving, have been a crucial component of recently proposed techniques for deterministic uncertainty quantification in neural models. We show that the theoretical justifications for recent regularisation schemes aiming to enforce such a constraint suffer from a crucial flaw: the theoretical link between the regularisation scheme and bi-Lipschitzness is only valid under conditions which do not hold in practice, rendering the existing theory of limited use despite the strong empirical performance of these models. We provide a theoretical explanation for the effectiveness of these regularisation schemes from a frequency analysis perspective, showing that under mild conditions they enforce a lower Lipschitz bound on the low-frequency projection of images. We then provide empirical evidence supporting our theoretical claims, and perform further experiments demonstrating that our broader conclusions appear to hold when some of the mathematical assumptions of our proof are relaxed, corresponding to the setup used in prior work. In addition, we present a simple constructive algorithm to search for counterexamples to the distance preservation condition, and discuss possible implications of our theory for future model design.
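The two quantities at the heart of the abstract can be made concrete with a short sketch (NumPy; the function names and parameters are our own illustration, not taken from the paper): an empirical Lipschitz ratio ||f(x) − f(y)|| / ||x − y|| for a pair of inputs, and a low-frequency projection of an image obtained by zeroing all but a small central block of its DFT coefficients. A lower Lipschitz bound on the low-frequency projection means the first quantity stays bounded away from zero when ||x − y|| is replaced by the distance between the projected inputs.

```python
import numpy as np

def low_pass(x, keep=4):
    """Project an image onto its low-frequency components by zeroing
    every DFT coefficient outside a central (2*keep) x (2*keep) block."""
    F = np.fft.fftshift(np.fft.fft2(x))
    mask = np.zeros_like(F)
    cy, cx = x.shape[0] // 2, x.shape[1] // 2
    mask[cy - keep:cy + keep, cx - keep:cx + keep] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def lipschitz_ratio(f, x, y):
    """Empirical distance ratio ||f(x) - f(y)|| / ||x - y|| for one input pair.
    A bi-Lipschitz map keeps this ratio within fixed bounds [L_lower, L_upper]."""
    return np.linalg.norm(f(x) - f(y)) / np.linalg.norm(x - y)
```

For example, for the toy map f(z) = 0.5·z the ratio is exactly 0.5 for every pair, so the map is bi-Lipschitz with both bounds equal to 0.5; the paper's claim concerns the weaker condition obtained by first applying `low_pass` to the inputs before measuring the denominator distance.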

