Geometric Decomposition of Feed Forward Neural Networks

12/08/2016 ∙ by Sven Cattell, et al.

There have been several attempts to mathematically understand neural networks and many more from biological and computational perspectives. The field has exploded in the last decade, yet neural networks are still treated much like a black box. In this work we describe a structure that is inherent to a feed forward neural network. This will provide a framework for future work on neural networks to improve training algorithms, compute the homology of the network, and other applications. Our approach takes a more geometric point of view and is unlike other attempts to mathematically understand neural networks that rely on a functional perspective.


1 Introduction

In the last decade deep learning has exploded. Many pattern recognition tasks, such as image recognition and natural language processing, that traditionally had poor performance under other machine learning techniques finally seem to be cracked. However, deep learning is rather poorly understood mathematically. The intuition provided in the original papers is compelling and has proved useful in developing the technique [6][1][8]. However, it needs some mathematical backing in order to properly use and improve the techniques.

This paper aims to give a description of the parts of a neural network and their functions. In order to simplify the structure of a fully connected feed forward neural network to one that is easily understood, we will mainly work with the step-edge activation function. This may seem like a step back from the modern use of smooth or piecewise linear activation functions, but if the activation function we want to train against is an approximation of the step-edge function (like the sigmoid function), then we can still make strong inferences about the general structure. This paper focuses on the geometry of the network, so, unless otherwise noted, we use the step-edge activation function everywhere. The main theorem of this paper is:

Theorem 1.1.

A binary classification neural network is an indicator function on a union of some regions of an arrangement of hyperplanes $\mathcal{A}$, where $\mathcal{A}$ is defined by the first layer.

This differs from the previous work of Kolmogorov [7], Funahashi [5], and Sprecher [12] on mathematically understanding shallow neural networks. They prove that there is some network that can approximate a given function; however, the proofs are non-constructive and do not apply to deep networks. The more recent results of [11] and [2] construct well-understood networks of arbitrary precision. This paper is intended to give a more general structure theory that can be applied to build networks or to manipulate them.

2 Background

2.1 Neural Networks

This part of the background is intended for mathematicians unfamiliar with neural networks. Feed forward neural networks are easily expressed as a composition of linear functions with a non-linear activation function. We will denote the activation function by $\sigma$. It is always defined as a single-variable function; however, it is regularly treated as being defined on $\mathbb{R}^k$, not just $\mathbb{R}$, which simply means applying $\sigma$ to each coordinate of $\mathbb{R}^k$.

Definition 1.

A classification neural network of height $h$ with $n$ inputs and $m$ outputs is a function $N : \mathbb{R}^n \to \mathbb{R}^m$ that can be written as a composition of functions of the form $\sigma(Ax + b)$, where $A$ is a matrix and $b$ is an offset vector; these functions are called layers. In other words, there exist matrices $A_1, \ldots, A_h$ with offset vectors $b_1, \ldots, b_h$ such that

$$N(x) = \sigma(A_h \, \sigma(A_{h-1} \cdots \sigma(A_1 x + b_1) \cdots + b_{h-1}) + b_h).$$

Each $A_i$ is a matrix of size $n_i \times n_{i-1}$ (with $n_0 = n$ and $n_h = m$) and each $b_i$ is an $n_i$-tuple; we say the $i$th layer has $n_i$ nodes.

Of course, this is a rather obtuse definition; a neural network is usually expressed as a directed graph. The nodes are organized into layers with directed edges pointing up the layers. These edges are given weights, and each node other than the input nodes is a linear sum of all nodes that have an edge pointing towards it. Sometimes the offset is represented by an additional input node that is connected to all the nodes in higher layers and has a fixed value of $1$, but here we shall think of the offset as a part of each node that isn't an input node.

For simplicity, we will mainly discuss a neural network with one output. All the results generalize quite easily to multiple outputs, so they are mostly omitted.
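To make Definition 1 concrete, the following is a minimal sketch (not part of the original paper) of a feed forward network with the step-edge activation, written directly as a composition of affine maps; it assumes NumPy, and the particular weights shown are arbitrary.

```python
import numpy as np

def step(z):
    """Step-edge activation, applied coordinate-wise: 1 where z > 0, else 0."""
    return (z > 0).astype(float)

def feed_forward(x, layers):
    """Evaluate a network given as a list of (A, b) pairs, one per layer."""
    a = np.asarray(x, dtype=float)
    for A, b in layers:
        a = step(A @ a + b)          # each layer is sigma(A a + b)
    return a

# A tiny network with 2 inputs, 3 nodes in the first (hyperplane) layer,
# and a single output node.
layers = [
    (np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]]), np.array([0.0, 0.0, -1.0])),
    (np.array([[1.0, 1.0, -2.0]]), np.array([-1.5])),
]

print(feed_forward([0.7, 0.9], layers))   # -> [0.]
```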

2.2 Hyperplane Arrangements

This section of the background is intended for those familiar with neural networks but unfamiliar with the language of hyperplane arrangements. It is intended to be a short summary of the necessary language required for the main result. The following is mostly lifted from [10], which has much more detail than we can provide here.

An arrangement of hyperplanes $\mathcal{A}$ in $\mathbb{R}^n$ is a finite set of codimension 1 affine subspaces. The regions of the arrangement are the connected components of $\mathbb{R}^n \setminus \bigcup \mathcal{A}$. An arrangement is in general position if a small perturbation of the hyperplanes in the arrangement does not change the number of regions. For example, two parallel lines in $\mathbb{R}^2$ are not in general position: an arbitrarily small rotation of one of the lines will increase the number of regions from $3$ to $4$. The other essential structure inherent in a hyperplane arrangement is its intersection poset.

Definition 2.

The intersection poset $L(\mathcal{A})$ of an arrangement of hyperplanes $\mathcal{A}$ is the set of all nonempty intersections of hyperplanes in $\mathcal{A}$,

$$L(\mathcal{A}) = \{\, H_{i_1} \cap \cdots \cap H_{i_k} \mid H_{i_j} \in \mathcal{A},\ H_{i_1} \cap \cdots \cap H_{i_k} \neq \emptyset \,\},$$

equipped with the partial order $x \le y$ if $y \subseteq x$.

A poset is a set equipped with a partial order, i.e. a relation $\le$ under which two elements $x$ and $y$ may satisfy $x \le y$, $y \le x$, or be incomparable. In a poset, an element $y$ covers $x$, written $x \lessdot y$, if $x < y$ and there is no $z$ different from $x$ and $y$ such that $x \le z \le y$. We can draw the Hasse diagram of a poset by drawing a node for each element of the poset and a directed edge from the node for $x$ to the node for $y$ if $x \lessdot y$.
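As a small illustration of the cover relation (an aside of ours, not from the paper), the Hasse diagram edges of the Boolean lattice of subsets of $\{1, 2, 3\}$, ordered by inclusion, can be computed by brute force:

```python
from itertools import combinations

def subsets(S):
    """All subsets of S, as frozensets."""
    S = list(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def covers(poset, leq):
    """Pairs (x, y) with x < y and no z strictly between them."""
    out = []
    for x in poset:
        for y in poset:
            if x != y and leq(x, y):
                between = any(z != x and z != y and leq(x, z) and leq(z, y)
                              for z in poset)
                if not between:
                    out.append((x, y))
    return out

P = subsets({1, 2, 3})
edges = covers(P, lambda a, b: a <= b)   # subset order
print(len(edges))   # 12 cover relations in the Boolean lattice on 3 elements
```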

3 Characterization of a Perceptron

If we examine the first layer of a neural network, each node corresponds to a weighted sum of the input nodes with an offset. We can represent the weighted sum by a vector $v$ and the offset by $b$. Then the node is "active" on the point $x$, outputting a $1$, if $v \cdot x + b > 0$, and outputting a $0$ otherwise. This is the same as the indicator function on the positive component, $P^+$, of $\mathbb{R}^n \setminus P$, where $P$ is the associated hyperplane:

$$P = \{\, x \in \mathbb{R}^n \mid v \cdot x + b = 0 \,\}, \qquad P^+ = \{\, x \in \mathbb{R}^n \mid v \cdot x + b > 0 \,\}.$$

So another way of expressing a node on the first layer is by the indicator function on the set $P^+$, $\mathbb{1}_{P^+}$.

Therefore the input, under the step-edge activation function, to the second layer of the network is the output of a collection of indicator functions. The second layer receives one binary value per first-layer node at each point of the underlying space, so all it can do is make decisions based on which side of each hyperplane the given input point lies. We will use $\mathcal{A}$ to denote the collection of hyperplanes in $\mathbb{R}^n$ determined by the first layer. This divides $\mathbb{R}^n$ into regions, $\mathcal{R}(\mathcal{A})$, the connected components of $\mathbb{R}^n \setminus \bigcup \mathcal{A}$. The second layer is only aware of which region a point is in, as each region produces a unique signature output of the first layer. The subsequent layers are a means of choosing which regions to include: each layer past the first amounts to a process we call a weighted union of the sets associated to the nodes of the previous layer.
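The claim that each region produces a unique signature can be checked empirically. The sketch below (illustrative only, assuming NumPy) samples points, records the bit vector output by the first layer, and counts the distinct signatures that appear; the hyperplanes chosen are arbitrary.

```python
import numpy as np

def step(z):
    return (z > 0).astype(int)

def first_layer_signature(x, V, b):
    """Bit vector recording which side of each hyperplane v_i . x + b_i = 0 the point x lies on."""
    return tuple(step(V @ x + b))

# Three lines in the plane: x = 0, y = 0, x + y = 1.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([0.0, 0.0, -1.0])

points = np.random.uniform(-2, 2, size=(5000, 2))
signatures = {first_layer_signature(p, V, b) for p in points}
print(len(signatures))   # at most C(3,0) + C(3,1) + C(3,2) = 7 regions for 3 lines in the plane
```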

3.1 Regions in a Polarized Arrangement

A hyperplane in $\mathbb{R}^n$ is defined by a normal vector $v$ and an offset value $b$. If we require that the normal vector be of length $1$, then there are two normal vectors to choose from. Both work, but they define different orientations of the hyperplane. Therefore, rather than just having two components of $\mathbb{R}^n \setminus P$, we can now distinguish between them.

Definition 3.

A polarization of a hyperplane $P$ in $\mathbb{R}^n$ with normal vector $v$ and offset $b$ is the pair of sets

$$P^+ = \{\, x \in \mathbb{R}^n \mid v \cdot x + b > 0 \,\}, \qquad P^- = \{\, x \in \mathbb{R}^n \mid v \cdot x + b \le 0 \,\}.$$

We call $P^+$ the positive side and $P^-$ the negative side of the plane.

We usually index our hyperplanes by the numbers $1, \ldots, k$, so in this case we write $P_i^+$ for the positive side of the $i$th hyperplane. The nodes in the first layer of the neural network form the hyperplane layer, as they determine a polarized arrangement of hyperplanes.

For a polarized arrangement of hyperplanes indexed by a set $S$ we can define and label the regions.

Definition 4.

The regions $\mathcal{R}(\mathcal{A})$ of a polarized arrangement of hyperplanes $\mathcal{A} = \{P_i\}_{i \in S}$ are the nonempty convex polytopes formed by taking all possible intersections of the positive and negative sides of the plane partitions. Each region is labeled by the set $I \subseteq S$ of hyperplanes on whose positive side it lies:

$$R_I = \bigcap_{i \in I} P_i^+ \cap \bigcap_{i \in S \setminus I} P_i^-.$$

We are using a similar definition to [9], but with a subset of $S$ instead of an ordered tuple of signs. The two methods are isomorphic, but we choose to use sets for now. There are several different labelings one could give a hyperplane arrangement, depending on the polarization. See Figure 1 for an example.

Figure 1: Two different labellings of the same regions of an arrangement of hyperplanes. The difference is that the polarity of one of the planes is reversed. Note that no region needs to be labeled with the empty set, unlike the labeling from [3].

Note that not all possible labellings correspond to non-empty regions, as the largest possible number of regions in $\mathbb{R}^n$ for an arrangement of $k$ hyperplanes is

$$\sum_{i=0}^{n} \binom{k}{i},$$

which is $2^k$ if $k \le n$, but is strictly less than $2^k$ when $k > n$. This maximum is attained when the hyperplanes are in general position, that is, when we can change the values of all the normal vectors and offsets by a small amount and the number of regions will not change. The initial layer of a perceptron will most likely be in general position. We call an index $I \subseteq S$ trivial if $R_I = \emptyset$.
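The count of regions is easy to tabulate; the following short sketch (ours, for illustration) compares the maximum number of regions with the number of possible labels $2^k$:

```python
from math import comb

def max_regions(k, n):
    """Largest possible number of regions cut out of R^n by k hyperplanes."""
    return sum(comb(k, i) for i in range(n + 1))

for k in range(1, 7):
    print(k, max_regions(k, 2), 2 ** k)   # for n = 2: equals 2^k only while k <= 2
```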

3.2 Weighted Unions and Selection Layers

Layers of the neural network after the first amount to weighted unions of the sets associated to the previous layer. The second layer is a weighted union of the positive sides of the polarized hyperplanes generated by the first layer. This results in each node being equivalent to a union of regions of the arrangement. Thus we call a layer a selection layer if it is not the first layer. We start with a set-level definition of what each node in a selection layer computes:

Definition 5.

A weighted union of subsets $X_1, \ldots, X_k$ of $\mathbb{R}^n$, with weights $w_1, \ldots, w_k \in \mathbb{R}$ and offset $b \in \mathbb{R}$, is the set

$$\bigcup\nolimits_{(w,b)} X_i = \left\{\, x \in \mathbb{R}^n \,\middle|\, \sum_{i=1}^{k} w_i \mathbb{1}_{X_i}(x) + b > 0 \,\right\}.$$

Its characteristic function is defined on $\mathbb{R}^n$ by $x \mapsto \mathbb{1}\!\left[\sum_{i} w_i \mathbb{1}_{X_i}(x) + b > 0\right]$.

It is clear that this is just taking the output of a node and converting it back to the associated set. In order to manipulate a selection node we need a clear understanding of the weighted union. For a single set $X$, we can take the complement of $X$ by a weighted union: with weight $-1$ and offset $\tfrac{1}{2}$ we get $\{x \mid -\mathbb{1}_X(x) + \tfrac{1}{2} > 0\} = X^c$. We can also take the union and intersection of two or more sets: with all weights equal to $1$, the offset $-\tfrac{1}{2}$ gives $X_1 \cup \cdots \cup X_k$ and the offset $\tfrac{1}{2} - k$ gives $X_1 \cap \cdots \cap X_k$.
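These set operations are all weighted unions for suitable weights and offsets. The sketch below (our illustration; the particular constants are one valid choice) checks this on indicator functions:

```python
def weighted_union(indicators, weights, offset):
    """Characteristic function of the weighted union: 1 if sum_i w_i * 1_{X_i}(x) + b > 0."""
    def chi(x):
        total = sum(w * f(x) for f, w in zip(indicators, weights)) + offset
        return 1 if total > 0 else 0
    return chi

# Two subsets of the integers, given by their indicator functions.
X1 = lambda x: 1 if x % 2 == 0 else 0      # even numbers
X2 = lambda x: 1 if x > 0 else 0           # positive numbers

complement_X1 = weighted_union([X1], [-1.0], 0.5)            # X1^c
union         = weighted_union([X1, X2], [1.0, 1.0], -0.5)   # X1 union X2
intersection  = weighted_union([X1, X2], [1.0, 1.0], -1.5)   # X1 intersect X2

print([complement_X1(x) for x in range(-3, 4)])  # 1 exactly where x is odd
print([union(x)         for x in range(-3, 4)])  # 1 where x is even or positive
print([intersection(x)  for x in range(-3, 4)])  # 1 where x is even and positive
```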

The following lemma demonstrates some of the limitations of the weighted union:

Lemma 3.1.

A weighted union of sets can be written as a union of a subset of all intersections of those sets and their complements.

Proof.

Let $\bigcup_{(w,b)} X_i$ be a weighted union. For each $i$,

$$\mathbb{1}_{X_i^c}(x) = 1 - \mathbb{1}_{X_i}(x),$$

therefore without loss of generality we may assume $w_i \ge 0$ for all $i$: otherwise replace $X_i$ with $X_i^c$, add $w_i$ to $b$, and replace $w_i$ with $-w_i$. Let $b'$ be the adjusted offset. For $I \subseteq \{1, \ldots, k\}$ let

$$R_I = \bigcap_{i \in I} X_i \cap \bigcap_{i \notin I} X_i^c.$$

The map

$$I \mapsto \sum_{i \in I} w_i + b'$$

is order preserving, as all $w_i$ are non-negative. Letting $w(I)$ be the obvious shorthand for $\sum_{i \in I} w_i$, it is easy to show:

$$\bigcup\nolimits_{(w,b)} X_i = \bigcup_{\{I \mid w(I) + b' > 0\}} R_I. \qquad ∎$$

Definition 6.

For a collection of sets $\{X_i\}_{i \in S}$ and each $I \subseteq S$ we get a region of the collection,

$$R_I = \bigcap_{i \in I} X_i \cap \bigcap_{i \in S \setminus I} X_i^c.$$

The indexing of the region by $I$ is the standard indexing.

We can see from the proof that if we want to find out what set operations are possible with the weighted union, we can restrict ourselves to a union of some intersections of our sets. This can be seen in the light of a $k$-ary logical operation that is strictly composed of a series of 'or's on a collection of 'and's. This cannot produce all $k$-ary boolean statements [4].
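A concrete instance of this limitation (a standard perceptron-style example, not taken from the paper) is that a single weighted union of two sets can never realize their symmetric difference, just as a single threshold node cannot compute XOR. A brute-force search over weights and offsets illustrates this:

```python
from itertools import product

def threshold_function(w1, w2, b):
    """Boolean function on two indicator inputs realized by a single weighted union."""
    return tuple(1 if w1 * a + w2 * c + b > 0 else 0
                 for a, c in [(0, 0), (0, 1), (1, 0), (1, 1)])

grid = [x / 2 for x in range(-8, 9)]   # weights and offsets in {-4, -3.5, ..., 4}
achievable = {threshold_function(w1, w2, b)
              for w1, w2, b in product(grid, repeat=3)}

xor = (0, 1, 1, 0)         # the symmetric difference of the two sets
print(len(achievable))     # 14 of the 16 boolean functions on two inputs
print(xor in achievable)   # False: the symmetric difference is not a single weighted union
```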

The proof provides a way to translate the weights and offset to a map on the poset of subsets of $S$. However, the mapping is very dependent on the signs of the original weights. Given two different weighted unions onto the same space, we not only have two different sets of weights and offsets but also two different sign corrections. This means that it is more difficult to compare two selection nodes using this technique. We can, however, remove the sign correction for the polarization after we have determined which intersections were selected.

To standardize the polarization we define the following. For any sign map $s : S \to \{+, -\}$ we may define a self map $\phi_s : 2^S \to 2^S$ where $i \in \phi_s(I)$ if $i \in I$ and $s(i) = +$, or if $i \notin I$ and $s(i) = -$. $\phi_s$ is clearly a bijection for all $s$. For a collection of sets $\{X_i\}_{i \in S}$ and each $s$ we get a different polarization of all possible intersections of the sets and their complements: the $s$-polarization of the regions of $\{X_i\}_{i \in S}$ is equivalent to the standard polarization of $\{Y_i\}_{i \in S}$, where $Y_i = X_i$ if $s(i) = +$ and $Y_i = X_i^c$ if $s(i) = -$.

For the weights $w$ and offset $b$, the polarization of the regions depends on the sign map $s_w$ with $s_w(i) = +$ if $w_i \ge 0$ and $s_w(i) = -$ if $w_i < 0$. Therefore the polarization of the regions in a weighted union, $\bigcup_{(w,b)} X_i$, is the $s_w$-polarization of all regions. To convert our polarization to the standard polarization we may take the image under $\phi_{s_w}$.
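The sign correction from the proof of Lemma 3.1 can be written out explicitly. The sketch below (ours, using the set-up of Definition 5; the helper name is ad hoc) replaces each negatively weighted set by its complement and absorbs the weight into the offset, leaving the weighted union unchanged:

```python
def normalize_weights(indicators, weights, offset):
    """Replace negatively weighted sets by their complements so all weights are >= 0.

    Uses 1_{X^c}(x) = 1 - 1_{X}(x): a term w * 1_X with w < 0 equals
    (-w) * 1_{X^c} + w, so the weight flips sign and w is absorbed into the offset.
    """
    new_inds, new_weights, new_offset = [], [], offset
    for f, w in zip(indicators, weights):
        if w < 0:
            new_inds.append(lambda x, f=f: 1 - f(x))   # complement indicator
            new_weights.append(-w)
            new_offset += w
        else:
            new_inds.append(f)
            new_weights.append(w)
    return new_inds, new_weights, new_offset

# Sanity check: the normalized data defines the same weighted union.
X1 = lambda x: 1 if x % 2 == 0 else 0
X2 = lambda x: 1 if x > 0 else 0
before = lambda x: 1 if 2.0 * X1(x) - 1.0 * X2(x) + 0.5 > 0 else 0
inds, ws, b = normalize_weights([X1, X2], [2.0, -1.0], 0.5)
after = lambda x: 1 if sum(w * f(x) for f, w in zip(inds, ws)) + b > 0 else 0
print(all(before(x) == after(x) for x in range(-10, 11)))   # True
```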

Definition 7.

A selection of regions of a collection of sets $\{X_i\}_{i \in S}$, indexed by a family $\mathcal{I} \subseteq 2^S$, is the union of the regions:

$$\bigcup_{I \in \mathcal{I}} R_I.$$

3.3 Characterization of a Perceptron

We now characterize a perceptron as a series of weighted unions on top of a polarized arrangement of hyperplanes. Figure 2 shows the various subnetworks in a neural network with 2 inputs, 4 plane nodes in the first layer, 2 selection nodes in the second layer, and a selection node in the output layer.

Figure 2: A neural network with 4 plane nodes, 2 selection nodes and an output, which is also a selection node. For each node we have highlighted in grey the area where the associated neural subnetwork is positive.
Lemma 3.2.

A union or intersection of two selections of a plane partition set is another selection of that plane partition set.

Theorem 3.3.

A binary classification neural network is an indicator function on a union of some regions of an arrangement of hyperplanes $\mathcal{A}$, where $\mathcal{A}$ is defined by the first layer.

Proof.

We will induct on the hidden height of a neural network. In the base case the only node past the first layer is the output node, which is a weighted union of the positive sides of the polarized hyperplanes and hence, by Lemma 3.1, a selection of regions of $\mathcal{A}$. For the inductive step let $N$ be a neural network of height $h$. Each node in the final hidden layer of $N$ is a neural network of height $h - 1$; let $S_i$ be the selection of regions for the $i$th such node. Let the final node have weight $w_i$ from the $i$th node and offset $b$; then the set on which $N$ outputs $1$ is the weighted union

$$\bigcup\nolimits_{(w,b)} S_i.$$

As the weighted union is a finite number of intersections and unions of its inputs, and the inputs are all selections of the same regions, by Lemma 3.2 the weighted union is itself a selection of those regions. ∎

We can see that if every selection in the penultimate layer pairs two regions, i.e. they are both selected for or selected against together, then the final selection node cannot separate the two.

We can see that a neural network with $m$ outputs is going to be $m$ selections on the regions of the arrangement of hyperplanes. However, they are not independent from each other: the selections made by the last hidden layer affect the final possible selections.
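The content of Theorem 3.3 can also be checked numerically: under the step-edge activation, the network's output is a function of the first-layer signature alone, so it is constant on each region of the arrangement. The following sketch (ours, with arbitrary random weights, assuming NumPy) verifies this on samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(z):
    return (z > 0).astype(float)

# Random step-edge network: 2 inputs, 5 first-layer nodes, 4 selection nodes, 1 output.
A1, b1 = rng.normal(size=(5, 2)), rng.normal(size=5)
A2, b2 = rng.normal(size=(4, 5)), rng.normal(size=4)
A3, b3 = rng.normal(size=(1, 4)), rng.normal(size=1)

def network(x):
    return step(A3 @ step(A2 @ step(A1 @ x + b1) + b2) + b3)[0]

def signature(x):
    return tuple(step(A1 @ x + b1).astype(int))

# The output must agree on any two points with the same first-layer signature.
outputs_by_region = {}
for x in rng.uniform(-3, 3, size=(20000, 2)):
    sig, out = signature(x), network(x)
    outputs_by_region.setdefault(sig, set()).add(out)

print(all(len(v) == 1 for v in outputs_by_region.values()))   # True
```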

References

  • [1] Yoshua Bengio, Nicolas L Roux, Pascal Vincent, Olivier Delalleau, and Patrice Marcotte. Convex neural networks. In Advances in neural information processing systems, pages 123–130, 2005.
  • [2] C. K. Chui and H. N. Mhaskar. Deep nets for local manifold learning. ArXiv e-prints, July 2016.
  • [3] Paul H. Edelman. A partial order on the regions of $\mathbb{R}^n$ dissected by hyperplanes. Transactions of the American Mathematical Society, 283(2):617–631, 1984.
  • [4] H.B. Enderton. A Mathematical Introduction to Logic. Harcourt/Academic Press, 2001.
  • [5] K. Funahashi. On the approximate realization of continuous mappings by neural networks. Neural Netw., 2(3):183–192, May 1989.
  • [6] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
  • [7] A. N. Kolmogorov. On the representation of continuous functions of several variables as superpositions of continuous functions of one variable and addition. Proceedings of the USSR Academy of Sciences, 114:369–373, 1957.
  • [8] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
  • [9] S. Meiser. Point location in arrangements of hyperplanes. Information and Computation, 106(2):286–303, 1993.
  • [10] Ezra Miller. Geometric combinatorics. American Mathematical Society Institute for Advanced Study, Providence, R.I. Princeton, N.J, 2007.
  • [11] U. Shaham, A. Cloninger, and R. R. Coifman. Provable approximation properties for deep neural networks. ArXiv e-prints, September 2015.
  • [12] David A Sprecher. On the structure of continuous functions of several variables. Transactions of the American Mathematical Society, 115:340–355, 1965.