BCNN: A Binary CNN with All Matrix Ops Quantized to 1 Bit Precision

10/01/2020
by Arthur J. Redfern, et al.

This paper describes a CNN in which all 2D convolution operations that lower to matrix-matrix multiplication are fully binary. The network is derived from a common building block structure that is consistent with a constructive proof outline showing that binary neural networks are universal function approximators. A 68.96% accuracy is achieved with a 2-step training procedure, and implementation strategies optimized for binary operands are provided.
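The abstract does not include code, but the core operation it refers to, a matrix multiply where both operands are binary, is conventionally computed with XNOR and popcount on bit-packed data rather than with real arithmetic. The sketch below is an illustrative NumPy version of that standard technique (not the paper's own implementation), assuming weights and activations in {-1, +1}, with +1 encoded as bit 1:

```python
import numpy as np

def binarize(x):
    # Map real values to {-1, +1} by sign (0 maps to +1 here).
    return np.where(x >= 0, 1, -1).astype(np.int8)

def pack_bits(b):
    # Encode +1 as bit 1 and -1 as bit 0, packing along the last axis.
    return np.packbits(b == 1, axis=-1)

def binary_matmul(A, B):
    # A: (m, k), B: (k, n), entries in {-1, +1}.
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    Ap = pack_bits(A)      # (m, ceil(k/8)) rows of A, bit-packed
    Bp = pack_bits(B.T)    # (n, ceil(k/8)) columns of B, bit-packed
    C = np.empty((m, n), dtype=np.int32)
    for i in range(m):
        for j in range(n):
            # XNOR marks positions where the two ±1 values agree;
            # count=k discards the zero-padding bits from packing.
            xnor = ~(Ap[i] ^ Bp[j])
            matches = int(np.unpackbits(xnor, count=k).sum())
            # dot = matches - mismatches = 2*matches - k
            C[i, j] = 2 * matches - k
    return C
```

A real implementation would use hardware popcount instructions over machine words instead of `unpackbits`, which is what makes binary layers fast; the identity `dot = 2*popcount(xnor) - k` is the same in either case.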



05/18/2022

Fast matrix multiplication for binary and ternary CNNs on ARM CPU

Low-bit quantized neural networks are of great interest in practical app...
01/02/2019

Optimizing Bit-Serial Matrix Multiplication for Reconfigurable Computing

Matrix-matrix multiplication is a key computational kernel for numerous ...
06/22/2018

BISMO: A Scalable Bit-Serial Matrix Multiplication Overlay for Reconfigurable Computing

Matrix-matrix multiplication is a key computational kernel for numerous ...
01/29/2022

A Novel Matrix-Encoding Method for Privacy-Preserving Neural Networks (Inference)

In this work, we present a novel matrix-encoding method that is partic...
06/21/2018

Generic and Universal Parallel Matrix Summation with a Flexible Compression Goal for Xilinx FPGAs

Bit matrix compression is a highly relevant operation in computer arithm...
10/10/2020

Training Binary Neural Networks through Learning with Noisy Supervision

This paper formalizes the binarization operations over neural networks f...
01/27/2022

HYPERLOCK: In-Memory Hyperdimensional Encryption in Memristor Crossbar Array

We present a novel cryptography architecture based on memristor crossbar...