The Geometry of Over-parameterized Regression and Adversarial Perturbations

03/25/2021
by Jason W. Rocks, et al.

Classical regression has a simple geometric description in terms of a projection of the training labels onto the column space of the design matrix. However, for over-parameterized models – where the number of fit parameters is large enough to perfectly fit the training data – this picture becomes uninformative. Here, we present an alternative geometric interpretation of regression that applies to both under- and over-parameterized models. Unlike the classical picture, which takes place in the space of training labels, our new picture resides in the space of input features. This new feature-based perspective provides a natural geometric interpretation of the double-descent phenomenon in the context of bias and variance, explaining why it can occur even in the absence of label noise. Furthermore, we show that adversarial perturbations – small perturbations to the input features that result in large changes in label values – are a generic feature of biased models, arising from the underlying geometry. We demonstrate these ideas by analyzing three minimal models for over-parameterized linear least squares regression: without basis functions (input features equal model features) and with linear or nonlinear basis functions (two-layer neural networks with linear or nonlinear activation functions, respectively).
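
To make the setting concrete, here is a minimal numerical sketch (not the authors' code; all sizes and variable names are illustrative). It fits the first of the three models above, linear regression without basis functions, using the minimum-norm least-squares solution; verifies that the over-parameterized fit interpolates the training labels exactly; and then perturbs an input along the learned weight vector, the direction in which a linear model's prediction is most sensitive:

    import numpy as np

    rng = np.random.default_rng(0)

    n_train, n_features = 20, 100               # over-parameterized: p > n
    w_true = rng.normal(size=n_features)        # hypothetical ground-truth weights

    X = rng.normal(size=(n_train, n_features))  # design matrix
    y = X @ w_true                              # noiseless training labels

    # Minimum-norm interpolating solution via the pseudoinverse.
    w_hat = np.linalg.pinv(X) @ y
    print("training error:", np.linalg.norm(X @ w_hat - y))  # ~0: perfect fit

    # Adversarial-style perturbation: the prediction x . w_hat changes
    # fastest along w_hat itself, so a small step in that direction
    # produces an outsized change in the predicted label.
    x = rng.normal(size=n_features)
    delta = 0.1 * w_hat / np.linalg.norm(w_hat)
    print("prediction shift:", (x + delta) @ w_hat - x @ w_hat)

The double-descent behavior mentioned in the abstract can be illustrated in the same spirit with the third model, random nonlinear features (a two-layer network with a random, untrained first layer). In this sketch, again with illustrative sizes, the teacher is linear and the labels are noiseless, yet the test error of the minimum-norm fit typically spikes near the interpolation threshold p ≈ n_train, consistent with the claim that double descent does not require label noise:

    import numpy as np

    rng = np.random.default_rng(1)
    d, n_train, n_test = 10, 50, 500
    w_teacher = rng.normal(size=d)              # noiseless linear teacher

    X_tr = rng.normal(size=(n_train, d))
    X_te = rng.normal(size=(n_test, d))
    y_tr, y_te = X_tr @ w_teacher, X_te @ w_teacher

    for p in (10, 25, 45, 50, 55, 100, 400):    # sweep the hidden-layer width
        W = rng.normal(size=(p, d)) / np.sqrt(d)  # random first layer
        Phi_tr = np.maximum(X_tr @ W.T, 0.0)      # ReLU basis functions
        Phi_te = np.maximum(X_te @ W.T, 0.0)
        a = np.linalg.pinv(Phi_tr) @ y_tr         # min-norm least squares
        print(f"p = {p:4d}   test MSE = {np.mean((Phi_te @ a - y_te)**2):.3f}")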

Related research

Memorizing without overfitting: Bias, variance, and interpolation in over-parameterized models (10/26/2020)
The bias-variance trade-off is a central concept in supervised learning....

Bias-variance decomposition of overparameterized regression with random linear features (03/10/2022)
In classical statistics, the bias-variance trade-off describes how varyi...

An Efficient Method for Sample Adversarial Perturbations against Nonlinear Support Vector Machines (06/12/2022)
Adversarial perturbations have drawn great attention in various machine...

A Geometric Perspective on the Transferability of Adversarial Directions (11/08/2018)
State-of-the-art machine learning models frequently misclassify inputs t...

Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective (09/27/2021)
State-of-the-art deep learning classifiers are heavily overparameterized...

DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks (10/02/2017)
Deep neural networks have become widely used, obtaining remarkable resul...

Toward an Over-parameterized Direct-Fit Model of Visual Perception (10/07/2022)
In this paper, we revisit the problem of computational modeling of simpl...
