BCNN: A Binary CNN with All Matrix Ops Quantized to 1 Bit Precision

10/01/2020
by Arthur J. Redfern, et al.

This paper describes a CNN in which all CNN-style 2D convolution operations that lower to matrix-matrix multiplication are fully binary. The network is derived from a common building-block structure that is consistent with a constructive proof outline showing that binary neural networks are universal function approximators. An accuracy of 68.96% is achieved with a two-step training procedure, and implementation strategies optimized for binary operands are provided.
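The key primitive here, matrix-matrix multiplication over binary operands, is commonly implemented with XNOR and popcount rather than multiply-accumulate: encoding +1 as bit 1 and -1 as bit 0, the dot product of two length-n {-1, +1} vectors equals n - 2 * popcount(a XOR b). The sketch below is an illustrative NumPy version of this general technique, not the paper's actual implementation (the function name and structure are assumptions for demonstration):

```python
import numpy as np

def binary_matmul(A, B):
    """Multiply two {-1, +1} matrices via bit packing, XOR, and popcount.

    Illustrative sketch only: encode +1 as bit 1 and -1 as bit 0; the dot
    product of two {-1, +1} vectors of length n is n - 2 * popcount(a ^ b).
    """
    n = A.shape[1]
    # Pack each row of A and each column of B into 1-bit-per-element arrays.
    # np.packbits zero-pads to a byte boundary; padding XORs to 0, so it
    # contributes nothing to the mismatch count.
    a_bits = np.packbits((A > 0).astype(np.uint8), axis=1)
    b_bits = np.packbits((B.T > 0).astype(np.uint8), axis=1)
    out = np.empty((A.shape[0], B.shape[1]), dtype=np.int32)
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            xor = np.bitwise_xor(a_bits[i], b_bits[j])
            mismatches = int(np.unpackbits(xor).sum())  # popcount
            out[i, j] = n - 2 * mismatches
    return out
```

On real hardware the inner loop maps to wide XOR and popcount instructions, which is what makes fully binary convolutions fast; the NumPy loop above only demonstrates the arithmetic identity.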


Related research

05/18/2022 - Fast matrix multiplication for binary and ternary CNNs on ARM CPU
Low-bit quantized neural networks are of great interest in practical app...

01/02/2019 - Optimizing Bit-Serial Matrix Multiplication for Reconfigurable Computing
Matrix-matrix multiplication is a key computational kernel for numerous ...

06/30/2022 - MatPIM: Accelerating Matrix Operations with Memristive Stateful Logic
The emerging memristive Memory Processing Unit (mMPU) overcomes the memo...

01/29/2022 - A Novel Matrix-Encoding Method for Privacy-Preserving Neural Networks (Inference)
In this work, we present , a novel matrix-encoding method that is partic...

10/10/2020 - Training Binary Neural Networks through Learning with Noisy Supervision
This paper formalizes the binarization operations over neural networks f...

04/03/2023 - Optimizing data-flow in Binary Neural Networks
Binary Neural Networks (BNNs) can significantly accelerate the inference...

06/21/2018 - Generic and Universal Parallel Matrix Summation with a Flexible Compression Goal for Xilinx FPGAs
Bit matrix compression is a highly relevant operation in computer arithm...
