Training and Generating Neural Networks in Compressed Weight Space

12/31/2021
by Kazuki Irie, et al.

The inputs and/or outputs of some neural nets are weight matrices of other neural nets. Indirect encodings or end-to-end compression of weight matrices could help to scale such approaches. Our goal is to open a discussion on this topic, starting with recurrent neural networks for character-level language modelling whose weight matrices are encoded by the discrete cosine transform. Our fast weight version thereof uses a recurrent neural network to parameterise the compressed weights. We present experimental results on the enwik8 dataset.
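As a rough illustration of the frequency-domain idea mentioned above (a minimal sketch, not the authors' implementation), the example below reconstructs a dense weight matrix from a small block of low-frequency 2D DCT coefficients. The matrix size, the number of coefficients, and the function name `decode_weights` are assumptions made for the example.

```python
import numpy as np
from scipy.fft import idctn


def decode_weights(coeffs, shape):
    """Reconstruct a full weight matrix from a small block of DCT coefficients.

    The trainable coefficients are placed in the low-frequency corner of a
    zero matrix of the target shape, then mapped back to weight space with
    an inverse 2D DCT.
    """
    k_rows, k_cols = coeffs.shape
    spectrum = np.zeros(shape)
    spectrum[:k_rows, :k_cols] = coeffs       # keep only low frequencies
    return idctn(spectrum, norm="ortho")      # inverse 2D DCT -> dense weights


# Example: a 256 x 256 recurrent weight matrix parameterised by
# 16 x 16 = 256 coefficients instead of 65,536 direct weights.
rng = np.random.default_rng(0)
coeffs = rng.normal(scale=0.1, size=(16, 16))  # the trainable parameters
W = decode_weights(coeffs, (256, 256))
print(W.shape)  # (256, 256)
```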

