Why Do Networks Need Negative Weights?

08/05/2022
by   Qingyang Wang, et al.

Why do networks have negative weights at all? The answer is: to learn more functions. We mathematically prove that deep neural networks with all non-negative weights are not universal approximators. This fundamental result is assumed by much of the deep learning literature, yet to our knowledge it has not previously been proven, nor has its necessity been demonstrated.
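As a rough intuition for one way the restriction bites (a minimal sketch under assumptions of my own, not the paper's proof or setting): with a monotone activation such as ReLU, all non-negative weight matrices make every layer order-preserving, so the whole network computes a coordinate-wise non-decreasing function and can never fit a decreasing target such as f(x) = -x.

```python
import numpy as np

# Hypothetical illustration (not from the paper): a ReLU MLP whose weights are
# all non-negative is monotone non-decreasing in each input coordinate, so it
# cannot approximate any strictly decreasing function.

rng = np.random.default_rng(0)

def nonneg_relu_net(x, weights, biases):
    """Forward pass of a ReLU MLP with elementwise non-negative weight matrices."""
    h = x
    for W, b in zip(weights, biases):
        h = np.maximum(W @ h + b, 0.0)  # W >= 0 and ReLU both preserve ordering
    return h

# Random non-negative weights for a 1 -> 8 -> 8 -> 1 network.
sizes = [1, 8, 8, 1]
weights = [np.abs(rng.normal(size=(n_out, n_in)))
           for n_in, n_out in zip(sizes, sizes[1:])]
biases = [rng.normal(size=n_out) for n_out in sizes[1:]]

xs = np.linspace(-3.0, 3.0, 101)
ys = np.array([nonneg_relu_net(np.array([x]), weights, biases)[0] for x in xs])

# The output never decreases as x increases, regardless of how the non-negative
# weights are chosen, so no such network can fit f(x) = -x.
print("output is monotone non-decreasing:", bool(np.all(np.diff(ys) >= -1e-12)))
```

This only shows that the non-negative-weight class misses monotone-violating targets under a ReLU assumption; the paper's general non-universality result covers more than this simple observation.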
