Gluing Neural Networks Symbolically Through Hyperdimensional Computing

05/31/2022
by Peter Sutor, et al.

Hyperdimensional Computing affords simple yet powerful operations to create long hyperdimensional vectors (hypervectors) that can efficiently encode information, be used for learning, and are dynamic enough to be modified on the fly. In this paper, we explore using binary hypervectors to directly encode the final, classifying output signals of neural networks, in order to fuse differing networks together at the symbolic level. This allows multiple neural networks to work together on a problem with little additional overhead. Output signals just before classification are encoded as hypervectors and bundled together through consensus summation to train a classification hypervector. This process can be performed iteratively, and even on a single neural network, by instead forming a consensus of multiple classification hypervectors. We find that this approach outperforms or matches the state of the art while incurring very little overhead, as hypervector operations are extremely fast and efficient compared to the neural networks themselves. The consensus process can learn online, and can even grow or lose models in real time. Hypervectors act as memories that can be stored, and even further bundled together over time, affording lifelong learning capabilities. Additionally, this consensus structure inherits the benefits of Hyperdimensional Computing without sacrificing the performance of modern machine learning. The technique can be applied to virtually any neural model and requires little modification to employ: one simply records the output signals of networks when presented with a testing example.
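
The abstract describes a concrete pipeline: encode each network's pre-classification signal as a binary hypervector, bundle the encodings by consensus (majority) summation into per-class hypervectors, and classify by similarity. Below is a minimal NumPy sketch of how such a scheme could look. Everything here is an illustrative assumption rather than the paper's implementation: the random-projection encoder, the dimensionality D, the tie-breaking rule, and all function names are placeholders.

```python
import numpy as np

D = 10_000  # hypervector dimensionality (a typical HDC choice; assumed)
rng = np.random.default_rng(0)

def encode(signal, projection):
    """Encode a network's pre-classification output signal as a binary
    hypervector via a fixed random projection followed by thresholding
    (one assumed way to map real-valued signals into hypervector space)."""
    return (projection @ signal > 0).astype(np.int8)

def bundle(hypervectors):
    """Consensus summation: bitwise majority vote across the bundled
    hypervectors, with ties broken randomly."""
    votes = np.sum(hypervectors, axis=0)
    majority = (votes * 2 > len(hypervectors)).astype(np.int8)
    ties = votes * 2 == len(hypervectors)
    majority[ties] = rng.integers(0, 2, size=ties.sum())
    return majority

def hamming_similarity(a, b):
    """Fraction of matching bits; closer to 1 means more similar."""
    return np.mean(a == b)

# One fixed random projection per network maps its output signal into
# hypervector space; sizes are illustrative.
n_features = 128           # size of a network's pre-softmax output (assumed)
n_networks = 3             # number of networks being "glued" together
projections = [rng.standard_normal((D, n_features)) for _ in range(n_networks)]

def train_class_hypervector(signals_per_example):
    """Bundle the encoded signals of all networks over all training
    examples of one class into a single classification hypervector.
    signals_per_example: list (per example) of per-network signals."""
    encoded = [encode(s, P)
               for signals in signals_per_example
               for s, P in zip(signals, projections)]
    return bundle(encoded)

def classify(signals, class_hypervectors):
    """Encode and bundle the networks' signals for one test example,
    then pick the class whose hypervector is most similar."""
    query = bundle([encode(s, P) for s, P in zip(signals, projections)])
    return max(class_hypervectors,
               key=lambda c: hamming_similarity(query, class_hypervectors[c]))

def refresh(old_class_hv, new_class_hv):
    """The abstract notes classification hypervectors can themselves be
    further bundled over time; an online refresh is just another consensus."""
    return bundle([old_class_hv, new_class_hv])
```

Because bundling and similarity are element-wise operations on binary vectors, adding or dropping a contributing network only changes which encodings enter the consensus, which is consistent with the abstract's claim that models can be grown or lost in real time.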
