Compact Neural Networks based on the Multiscale Entanglement Renormalization Ansatz

11/09/2017
by Andrew Hallam, et al.

The goal of this paper is to demonstrate a method for tensorizing neural networks based upon an efficient way of approximating scale-invariant quantum states, the Multiscale Entanglement Renormalization Ansatz (MERA). We employ MERA as a replacement for the linear layers in a neural network and test this implementation on the CIFAR-10 dataset. The proposed method outperforms factorization using tensor trains, providing greater compression for the same level of accuracy and greater accuracy for the same level of compression. We demonstrate MERA layers with 3,900 times fewer parameters and a reduction in accuracy of less than 1%.
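To illustrate the kind of compression the abstract describes, here is a minimal NumPy sketch of a tree tensor network used in place of a dense linear layer. This is a simplification (a binary tree of isometries, i.e. MERA without the disentanglers), and all shapes, names, and dimensions are illustrative assumptions, not the paper's actual implementation: a length-256 input is viewed as an 8-index tensor and coarse-grained pairwise down to the output logits.

```python
import numpy as np

rng = np.random.default_rng(0)
chi, n_out = 4, 10  # bond dimension and output size (illustrative choices)

# A length-256 input vector, reshaped as an 8-index tensor of 2-dim "sites"
x = rng.normal(size=(2,) * 8)

# Layer 1: four isometries, each fusing a pair of 2-dim indices into one chi-dim index
w1 = [rng.normal(size=(2, 2, chi)) for _ in range(4)]
t = np.einsum('abcdefgh,abp,cdq,efr,ghs->pqrs', x, *w1)

# Layer 2: two isometries fusing pairs of chi-dim indices
w2 = [rng.normal(size=(chi, chi, chi)) for _ in range(2)]
t = np.einsum('pqrs,pqu,rsv->uv', t, *w2)

# Top tensor mapping the two remaining indices to n_out outputs
w3 = rng.normal(size=(chi, chi, n_out))
y = np.einsum('uv,uvo->o', t, w3)

# Parameter count of the tree layer vs. an equivalent dense 256 -> 10 layer
dense_params = 256 * n_out
tree_params = sum(w.size for w in w1) + sum(w.size for w in w2) + w3.size
print(y.shape, dense_params, tree_params)  # (10,) 2560 352
```

Even at this toy scale the factorized layer uses roughly 7x fewer parameters than the dense matrix it replaces; the paper's much larger compression ratios come from applying this idea (with disentanglers, and trained end to end) to the large fully connected layers of a real network.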


