Fast calculation of correlations in recognition systems

03/06/2016
by Pavel Dourbal, et al.

A computationally efficient classification system architecture is proposed. It utilizes a fast tensor-vector multiplication algorithm to apply linear operators to input signals. The approach is applicable to a wide variety of recognition system architectures, ranging from single-stage matched filter bank classifiers to complex neural networks with an arbitrary number of hidden layers.
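To make the setting concrete, the sketch below (an illustrative assumption, not the paper's fast algorithm) shows the baseline computation being accelerated: in a single-stage matched filter bank, scoring an input against all class templates is one matrix-vector product, i.e. a batch of correlations. The template shapes, class count, and noise level are hypothetical.

```python
import numpy as np

# Minimal sketch of a single-stage matched filter bank classifier.
# Each row of `templates` is one class template; classification reduces to
# a single matrix-vector product (a batch of correlations) - the linear
# operation that a fast tensor-vector multiplication scheme would speed up.

rng = np.random.default_rng(0)
n_classes, signal_len = 4, 256  # hypothetical sizes

# Unit-norm templates (matched filters), one per class.
templates = rng.standard_normal((n_classes, signal_len))
templates /= np.linalg.norm(templates, axis=1, keepdims=True)

# Input signal: a noisy copy of template 2.
signal = templates[2] + 0.1 * rng.standard_normal(signal_len)

# Correlation scores for all classes at once: one matrix-vector product.
scores = templates @ signal
predicted_class = int(np.argmax(scores))
print(predicted_class)  # -> 2
```

The same matrix-vector (or, with multiple channels and layers, tensor-vector) product appears in each linear stage of a deeper network, which is why accelerating it benefits both the simple filter bank case and multi-layer architectures.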
