A Tropical Approach to Neural Networks with Piecewise Linear Activations

05/22/2018
by Vasileios Charisopoulos, et al.

We present a new, unifying approach following some recent developments on the complexity of neural networks with piecewise linear activations. We treat neural network layers with piecewise linear activations, such as Maxout or ReLU units, as polynomials in the (max, +) (or so-called tropical) algebra. Following up on the work of Montufar et al. (arXiv:1402.1869), this approach enables us to improve their upper bound on linear regions of layers with ReLU or leaky ReLU activations to min{2^m, 2·∑_{j=0}^{n} (m-1 choose j)}, where n and m are the number of inputs and outputs, respectively. Additionally, we recover their upper bounds on maxout layers. Our work follows parallel lines to the improvements reported in (arXiv:1711.02114, arXiv:1611.01491), though exclusively under the lens of tropical geometry.
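As a quick illustration of the bound quoted above, the following is a minimal Python sketch (not from the paper; the function names and example values are our own) that evaluates min{2^m, 2·∑_{j=0}^{n} (m-1 choose j)} for a ReLU layer and shows how a single ReLU unit reads as a tropical polynomial in the (max, +) algebra.

    from math import comb

    def relu_region_upper_bound(n: int, m: int) -> int:
        # Upper bound from the abstract: min{ 2^m, 2 * sum_{j=0}^{n} C(m-1, j) }
        # for a layer with n inputs and m ReLU (or leaky ReLU) outputs.
        return min(2 ** m, 2 * sum(comb(m - 1, j) for j in range(n + 1)))

    def tropical_relu(x: float) -> float:
        # In the (max, +) semiring, ReLU(x) = max(x, 0) is the tropical "sum"
        # of the monomials x and 0, i.e. a simple tropical polynomial.
        return max(x, 0.0)

    if __name__ == "__main__":
        # Hypothetical example: a layer with n = 2 inputs and m = 4 outputs.
        print(relu_region_upper_bound(2, 4))            # min(16, 2*(1+3+3)) = 14
        print(tropical_relu(-1.5), tropical_relu(2.0))  # 0.0 2.0

For n = 2 and m = 4 the bound evaluates to 14, which is smaller than the trivial 2^m = 16 regions one would get by counting activation patterns directly.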


Related research

07/14/2020 · Bounding The Number of Linear Regions in Local Area for Neural Networks with ReLU Activations
03/05/2021 · Precise Multi-Neuron Abstractions for Neural Network Certification
06/12/2023 · Polyhedral Complex Extraction from ReLU Networks using Edge Subdivision
12/20/2013 · On the number of response regions of deep feed forward networks with piece-wise linear activations
10/31/2018 · Nearly-tight bounds on linear regions of piecewise linear neural networks
03/10/2022 · On Embeddings for Numerical Features in Tabular Deep Learning
03/31/2022 · Adversarial Examples in Random Neural Networks with General Activations
