
LAYERS: Yet another Neural Network toolkit

by Roberto Paredes, et al.

Layers is an open source neural network toolkit aimed at providing an easy way to implement modern neural networks. The main target users are students, and to this end Layers provides an easy scripting language that can be quickly adopted. The user has to focus only on design details such as network topology and parameter tuning.





1 Introduction

Layers is a neural network toolkit mainly devoted to academic use. Layers aims at providing an easy and fast way for students to apply and test the theoretical concepts they have learned. With Layers the student can implement neural networks with fully connected and convolutional layers. Moreover, Layers provides different data manipulation functions. One of the main requirements in the design of the Layers toolkit was that it should be flexible, easy to use, and with a short learning curve. In order to achieve this we propose developing a front-end based on the definition of a specification language of experiments for the Layers tool, and the construction of its associated compiler.

This front-end allows users to define an experiment by means of a program in this specification language. The front-end verifies that the program meets the lexical, syntactic, and semantic constraints of the language, and then generates intermediate code that is eventually interpreted by the Layers tool. In some ways, the functions and methods that constitute the Layers toolkit may be considered the back-end that runs the experiment designed by the user.

Other initiatives that ease the implementation of neural networks are, for instance, Keras and Lasagne, among others. But these lightweight libraries still require some skills from the students. On the other hand, programs written in the Layers language are easy to read and focus mainly on the network architecture and parametrization, avoiding any other extra information.

2 Layers toolkit

With Layers the students can try models ranging from very simple neural networks, e.g. the Multi Layer Perceptron, to more complex models with several output layers, multiple connections, semi-supervised learning, combinations of convolutional and plain topologies, etc. See for instance figure 1.


Figure 1: An example of a Neural Net defined in Layers 

And this is the definition of this network:

network N1 {
  data tr D1 // Data for training

  // Convolutional input
  CI in [nz=1, nr=28, nc=28]

  C c03 [nk=16, kr=3, kc=3,rpad=1,cpad=1]
  C c05 [nk=16, kr=5, kc=5,rpad=1,cpad=1]
  C c07 [nk=16, kr=7, kc=7,rpad=1,cpad=1]
  CA cat

  MP p0[sizer=2,sizec=2]
  C c1 [nk=32, kr=3, kc=3]
  MP p1 [sizer=2,sizec=2]
  C c2 [nk=32, kr=3, kc=3]
  MP p2 [sizer=2,sizec=2]

  // FC reshape
  F   f0 []
  // FC
  F  f1 [numnodes=128]
  // Output
  FO out [classification]

  FI fin // Input fully connected
  F mlp1 [numnodes=1024]
  F mlp2 [numnodes=1024]
  F mlp3 [numnodes=1024]

  // links
Figure 2: Definition of the network of figure 1 using Layers 

Layers offers two different cost functions, cross-entropy and squared error, for classification and regression problems respectively. Other functions could be added, since Layers is open source, but with these two cost functions Layers covers a wide range of academic problems.
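The two cost functions can be stated concretely. The sketch below uses their standard textbook definitions for a single example; the toolkit's internal implementation may differ in details such as averaging over the batch.

```python
import math

def cross_entropy(probs, target_class):
    # Negative log-probability assigned to the true class
    # (single-example classification cost).
    return -math.log(probs[target_class])

def squared_error(outputs, targets):
    # Sum of squared differences (single-example regression cost).
    return sum((o - t) ** 2 for o, t in zip(outputs, targets))

# A confident, correct prediction costs little:
loss = cross_entropy([0.9, 0.05, 0.05], target_class=0)  # ≈ 0.105
```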

Layers structure

There are 4 main parts in a Layers program:

  • Constants

  • Data

  • Networks

  • Scripts

Here we describe these blocks very briefly; a much better description can be found in the Layers tutorial.

2.1 Constants

In the Constants block the user can specify the value of some constants that are used throughout the Layers process: batch size, log file, and number of threads.

2.2 Data

In the Data block the user specifies the data objects that can later be linked to networks. These data objects are defined using an associated data file and format (ascii or binary). The data objects have some attributes and operations that can be accessed in the script block.

2.3 Networks

This is the most important block, where the user specifies the type of each element (layer) of the neural network and the links among these basic elements, i.e. the topology. The elements of the neural network are mainly two kinds of layers: fully connected and convolutional. In this sense we can define input layers that are fully connected or convolutional, max-pooling layers, or cat layers, among others.

Layers has some restrictions on the definition of the topology; for instance, a convolutional layer can have more than one child layer but only one parent layer. Apart from these natural restrictions, Layers provides enough flexibility to define the network topology, as can be seen in figure 2.
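The single-parent restriction can be checked mechanically over a set of connections. The sketch below is illustrative only, not the toolkit's actual validation code; the function and argument names are hypothetical.

```python
def violates_single_parent(links, restricted):
    # links: (source, target) pairs, one per `source -> target` connection.
    # restricted: layer names the single-parent rule applies to
    # (e.g. the convolutional layers).
    parents = {}
    for src, dst in links:
        parents.setdefault(dst, set()).add(src)
    # Restricted layers with more than one distinct parent violate the rule.
    return {dst for dst, srcs in parents.items()
            if dst in restricted and len(srcs) > 1}
```

A cat layer, by contrast, is precisely meant to receive several parents, so it would be left out of `restricted`.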

2.4 Scripts

In the Script block the user can modify the default values of the different objects: data and layers. Moreover, in the script block the user can run functions associated to the objects, e.g. normalize data, run a training for a network, save a network, etc.

3 A front-end for Layers 

In the previous Section 2 we presented the most significant features of the Layers tool as well as its main alternative uses. As mentioned above, one of the main motivations for the creation of the Layers toolkit is that it should be easy to use, with a learning effort as low as possible. To this end, we propose here a complete front-end for the Layers toolkit.

This front-end is composed of a specification language of experiments in the Layers toolkit, and its associated compiler.

3.1 Layers specification language

The Layers language is a simple specification language for proper management of this toolkit. A Layers program defines an experiment or set of experiments and consists of four main sections: definition of general constants, data and networks, and description of scripts.

In order to define the Layers specification language we introduce below the lexical conventions, and the syntactic-semantic constraints of Layers .

3.1.1 Lexical conventions of Layers 

Lexical conventions of Layers language could be summarized as:

1. Keywords are used to denote actions, operations, and general constants. They are also used to define the different parameters characterizing the data, the networks, or the layers. All keywords are reserved and must be written in lowercase. Below we show the complete list of keywords.

  2. Special symbols are the following: { } [ ] . , = ->

  3. Identifiers, unsigned numerical constants, and complete paths to files are symbols (tokens) whose lexical constraints can be defined by the following regular expressions:

    {letter}({letter} | {digit})*
    "([^\0 ])+"

    Where a file path can be any sequence of characters enclosed in quotes, except the null and blank characters. Lower and uppercase letters are distinct.

  4. A comment starts with a double slash (//) and ends with a newline; comments can be placed anywhere white space can appear. Comments may not be nested.

  5. White space consists of blanks, newlines, and tabs. White space is ignored except that it must separate identifiers, constants, file paths, and keywords.
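The lexical conventions above can be sketched as a small tokenizer. This is only an illustration of the rules as stated; the toolkit's actual scanner is generated with Flex, and the token names here are assumptions, not those of the real compiler.

```python
import re

# Illustrative token classes for the conventions above.
TOKEN_SPEC = [
    ("COMMENT", r"//[^\n]*"),              # // to end of line, not nested
    ("PATH",    r'"[^\0 ]+"'),             # quoted path: no null, no blank
    ("NUMBER",  r"\d+"),                   # unsigned numerical constant
    ("IDENT",   r"[A-Za-z][A-Za-z0-9]*"),  # letter (letter | digit)*
    ("ARROW",   r"->"),
    ("SYMBOL",  r"[{}\[\].,=]"),           # the special symbols
    ("WS",      r"[ \t\n]+"),              # blanks, newlines, tabs: ignored
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(text):
    # White space and comments are dropped; everything else is emitted
    # as (kind, lexeme) pairs.
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(text)
            if m.lastgroup not in ("WS", "COMMENT")]
```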

3.1.2 Syntax and semantics of Layers 

A program in Layers specification language consists of an optional definition of general constants, followed by a sequence of definitions of data, networks and scripts, in any order.
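This top-level rule can be expressed as a simple check. The sketch below is illustrative only (the section names are assumptions); the real parser is generated with Bison, as described in Section 3.2.

```python
def check_program_structure(sections):
    # sections: the kinds of top-level sections in order of appearance.
    # An optional constants section may come first; after that, data,
    # network, and script sections may appear in any order.
    if sections and sections[0] == "constants":
        sections = sections[1:]
    return all(s in ("data", "network", "script") for s in sections)
```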



General constants have the following default values: the size of the batch for the network, the number of threads for parallelization, and the log file where some messages are saved.

The Data section defines the corpora to be used in experiments. For its management in the program, data must be associated with internal variable names. Data can be read in ascii or binary format, and full file paths can be used.


The definition of a network is composed in turn of 3 main parts: the definition of the data used, the selected layers, and the connections defined between these layers.


For each network it is necessary to explicitly define the data sets it uses: the training data set, which is mandatory, and the validation and test data sets, which are both optional. When test or validation data are provided, the error function of the network will also be evaluated on those data sets. Data identifiers must be previously defined in a data section.

As described in the previous Section 2, the following types of layers are defined in the Layers toolkit:

  • Input Fully Connected layer (FI). The FI layer has no parameters; it just serves as an interface with the input data. The number of units of the layer coincides with the dimensionality of the representation of the input data.

  • Input Convolutional layer (CI). The CI layer has three mandatory parameters that indicate how the raw data have to be mapped into an input map: number of channels (nz), number of image rows (nr), and number of image cols (nc).

    The CI layer also has optional parameters, including crop rows and crop cols. When the crop parameters are not defined, they take the values of the nr and nc parameters respectively.

  • Fully Connected layer (F). The F layer has only one mandatory parameter: the number of nodes (numnodes).

  • Output layer (FO). The FO layer has only one mandatory parameter indicating the criterion for treating the cost error: cross-entropy or mean squared error. Additionally, for the mean squared error criterion an optional autoencoder parameter can be defined.

  • Convolutional layer (C). The C layer has three mandatory parameters that indicate the number of kernels (nk), the height of the kernel (kr), and the width of the kernel (kc).

    The C layer also has optional parameters: rpad to indicate padding in rows, cpad to indicate padding in cols, and the stride value, each with a default value.

  • MaxPooling layer (MP). The MP layer has two mandatory parameters: the height of the pooling region (sizer) and the width of the pooling region (sizec).

  • Cat layer (CA). The CA layer does not require any parameters.

The connection between layers is defined by means of the operator (->). Both the source layer and the target layer can be referred to by the simple name of the layer, when there is no ambiguity, or by the network name followed by a period (.) and the name of the layer.
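As a sketch of this naming convention, the following hypothetical helper splits one connection line into its source and target references:

```python
def parse_link(line):
    # Parse one `source -> target` connection. Each side is either a bare
    # layer name or `network.layer`; returns ((network_or_None, layer), ...).
    src, dst = (side.strip() for side in line.split("->"))

    def split_ref(ref):
        net, _dot, layer = ref.rpartition(".")
        return (net or None, layer)

    return split_ref(src), split_ref(dst)
```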

The definition of scripts is composed of a sequence of actions of two different types: actions that modify some parameter of networks, layers, or data, and actions that perform some operation on networks, layers, or data.


Whether for a particular layer or for all layers of a network, we can modify some of their parameters. For layers and networks, the integer parameters that can currently be modified are:

  • batch normalization

  • activation (0 Linear, 1 ReLU, 2 Sigmoid, 3 ELU)

  • flipping of input images

  • random shifting of input images

and the real type parameters are:

  • learning rate

  • momentum rate

  • l2 regularization (weight decay)

  • l1 regularization

  • maxnorm regularization

  • dropout

  • noise ratio after the activation function

  • standard deviation of the noise

  • ratio of binary noise (only for the input layer)

  • ratio to randomly modify the total brightness of an image

  • ratio to randomly modify the contrast of an image

  • scaling of the cost factor of an output layer

For data, we can only modify the following (integer) parameter:

  • balancing of the data classes
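The integer activation codes listed above correspond to standard functions, sketched here with their usual textbook definitions (the toolkit's internals may differ):

```python
import math

# Activation codes as listed above: 0 Linear, 1 ReLU, 2 Sigmoid, 3 ELU.
ACTIVATIONS = {
    0: lambda x: x,                                   # Linear
    1: lambda x: max(0.0, x),                         # ReLU
    2: lambda x: 1.0 / (1.0 + math.exp(-x)),          # Sigmoid
    3: lambda x: x if x >= 0 else math.exp(x) - 1.0,  # ELU (alpha = 1)
}
```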

In Layers we define a set of functions that can be applied to networks, layers, or data.


For networks the following functions are defined:

one to train a network for a specified number of epochs; one to test a particular network; one to save the parameters of a network to a particular file; one to load the parameters of a network from a particular file; and one to dump the output of the network on all the test data to a particular file.

For layers only one function is defined: to save the parameters of a layer to a particular file.

For data the following functions are defined: to normalize the data; to convert RGB maps to YUV maps; to center the data (subtract the mean); and to divide all the data by a specific value.

Finally, we also define a function that allows us to train a list of previously defined networks together, specifying the number of epochs and the number of batches.

3.2 A compiler for Layers 

From the lexical specification and the syntactic-semantic specification we have developed a complete compiler [1, 3] for the Layers specification language. The front-end of this compiler consists of two main modules. The first one is a scanner that checks whether the input program complies with the lexical restrictions defined in Section 3.1.1. This scanner has been implemented using a standard GNU tool for the automatic generation of lexical analyzers: Flex [4].

The second module is a parser that checks whether the input program complies with the syntactic-semantic constraints defined in Section 3.1.2. This parser has been implemented using a standard GNU tool for the automatic generation of parsers: Bison [2].

This front-end is completed with a set of functions representing the semantic actions necessary to produce intermediate code interpretable by the Layers toolkit (the back-end of the compiler).


  • [1] Alfred V. Aho, Monica S. Lam, Ravi Sethi, and Jeffrey D. Ullman. Compilers: Principles, Techniques, and Tools (2nd Edition). Addison Wesley, 2008.
  • [2] GNU Bison: a parser generator, 2014.
  • [3] Keith Cooper and Linda Torczon. Engineering a Compiler. Morgan Kaufman, 2012.
  • [4] Flex: the fast lexical analyzer, 2008.