Reconstruction of training samples from loss functions

05/18/2018
by Akiyoshi Sannai, et al.

This paper presents a new mathematical framework for analyzing the loss functions of deep neural networks with ReLU activations. As an application of this theory, we prove that the loss function can be used to reconstruct the inputs of the training samples up to scalar multiplication (as vectors) and to recover the number of layers and nodes of the deep neural network. Namely, given all inputs and outputs of a loss function (or, equivalently, every possible learning process), for the input x_i ∈ R^n of each training sample we can obtain a vector x'_i ∈ R^n satisfying x_i = c_i x'_i for some c_i ≠ 0. To prove the theorem, we introduce the notion of virtual polynomials, which are polynomials expressed as the output of a node in a deep neural network. Using virtual polynomials, we identify an algebraic structure on the loss surfaces: they are semi-algebraic sets, which we analyze from the algebro-geometric point of view. Since factorization of polynomials is one of the most standard ideas in algebra, we express the factorization of virtual polynomials in terms of their active paths. This framework applies to the leakage problem in the training of deep neural networks, and the main theorem indicates that training a deep neural network carries concrete privacy risks. For example, given N + 1 nonsmooth points on the loss surface (where N is the dimension of the weight space) that are sufficiently close to each other, we can recover the input of a training sample up to scalar multiplication. We also point out that the structure of the loss surface depends on the shape of the deep neural network, not on the training samples.
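
To make the reconstruction claim concrete, the sketch below works out the simplest toy instance rather than the paper's full construction: a single ReLU neuron with squared loss on one sample (x, y), where the loss L(w) = (max(0, w·x) − y)^2 as a function of the weights w is nonsmooth exactly on the hyperplane {w : w·x = 0} (for y ≠ 0). Assuming we can locate N + 1 nonsmooth points of the loss surface (here they are simply constructed by projection onto that hyperplane), the normal direction of the affine hyperplane they span recovers x up to a nonzero scalar. The function names (`loss`, `project_to_kink`) are illustrative, not from the paper.

```python
import numpy as np

# Toy illustration (a simplification, not the paper's construction):
# one ReLU neuron with squared loss on a single training sample (x, y),
#     L(w) = (max(0, w . x) - y)^2,   w in R^n.
# For y != 0, L is nonsmooth exactly on the hyperplane {w : w . x = 0},
# so the normal direction of that hyperplane reveals x up to a nonzero scalar.

rng = np.random.default_rng(0)
n = 5                                   # dimension N of the weight space
x = rng.normal(size=n)                  # hidden training input to be recovered
y = 1.0                                 # its (nonzero) target value

def loss(w):
    return (max(0.0, w @ x) - y) ** 2

# 1. Obtain N + 1 points on the nonsmooth set. In this toy sketch we simply
#    project random weight vectors onto the kink hyperplane w . x = 0;
#    in the paper's setting such points are located on the loss surface itself.
def project_to_kink(w):
    return w - (w @ x) / (x @ x) * x

kink_points = np.array([project_to_kink(rng.normal(size=n)) for _ in range(n + 1)])

# 2. Fit the hyperplane through these nonsmooth points: its normal is the
#    right-singular vector of the centered point cloud with smallest singular value.
centered = kink_points - kink_points.mean(axis=0)
_, _, vt = np.linalg.svd(centered)
x_recovered = vt[-1]                    # candidate for x, up to scale and sign

# 3. Check recovery up to scalar multiplication: x and x_recovered are parallel.
cos = abs(x @ x_recovered) / (np.linalg.norm(x) * np.linalg.norm(x_recovered))
print("cosine similarity |<x, x'>| / (|x||x'|) =", cos)   # approximately 1.0

# Sanity check that the loss really kinks on this hyperplane: one-sided slopes
# along the direction x differ on the two sides of a kink point.
w0 = kink_points[0]
eps = 1e-6
left  = (loss(w0) - loss(w0 - eps * x)) / eps
right = (loss(w0 + eps * x) - loss(w0)) / eps
print("one-sided slopes across the kink:", left, right)
```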

Related research

02/22/2020 · Convex Duality of Deep Neural Networks
We study regularized deep neural networks and introduce an analytic fram...

06/07/2021 · Error Loss Networks
A novel model called error loss network (ELN) is proposed to build an er...

01/22/2019 · On Connected Sublevel Sets in Deep Learning
We study sublevel sets of the loss function in training deep neural netw...

07/04/2023 · Deconstructing Data Reconstruction: Multiclass, Weight Decay and General Losses
Memorization of training data is an active research area, yet our unders...

06/24/2020 · Retrospective Loss: Looking Back to Improve Training of Deep Neural Networks
Deep neural networks (DNNs) are powerful learning machines that have ena...

01/31/2018 · Optimizing Non-decomposable Measures with Deep Networks
We present a class of algorithms capable of directly training deep neura...

05/26/2022 · Training ReLU networks to high uniform accuracy is intractable
Statistical learning theory provides bounds on the necessary number of t...
