Post-Training BatchNorm Recalibration

10/12/2020
by Gil Shomron, et al.

We revisit non-blocking simultaneous multithreading (NB-SMT), introduced by Shomron and Weiser (2020). NB-SMT trades accuracy for performance by occasionally "squeezing" more than one thread into a shared multiply-and-accumulate (MAC) unit. Accommodating several threads in a shared MAC unit, however, introduces noise into the computations, shifting the model's internal statistics. We show that a substantial portion of the lost model accuracy can be recouped by post-training recalibration of the batch normalization layers' running mean and running variance statistics in the presence of NB-SMT.
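
The recalibration itself is straightforward in any framework that exposes the batch normalization running statistics. Below is a minimal sketch in PyTorch (an assumption; the abstract does not prescribe a framework): `recalibrate_batchnorm` and `calib_loader` are illustrative names, and simulating NB-SMT's noisy MAC sharing during the forward passes is left to the surrounding harness.

```python
import torch
import torch.nn as nn

def recalibrate_batchnorm(model: nn.Module, calib_loader, device: str = "cpu") -> nn.Module:
    """Re-estimate BatchNorm running mean/variance on calibration data.

    Generic post-training recalibration sketch: reset each BN layer's
    running statistics, then let them re-accumulate over forward passes
    with the weights frozen. To recoup accuracy under NB-SMT, the forward
    passes would be run with NB-SMT's MAC sharing active, so that the
    recalibrated statistics match what the layers see at inference time.
    """
    for module in model.modules():
        if isinstance(module, nn.modules.batchnorm._BatchNorm):
            module.reset_running_stats()  # mean=0, var=1, batch counter=0
            module.momentum = None        # use a cumulative moving average

    model.train()          # BN layers only update running stats in train mode
    with torch.no_grad():  # no gradients: weights and biases stay untouched
        for inputs, _ in calib_loader:
            model(inputs.to(device))
    model.eval()
    return model
```

Here `calib_loader` stands for a hypothetical DataLoader yielding (input, label) batches from a representative calibration set; a modest number of batches is usually enough for the running averages to settle. Setting `momentum` to `None` makes each BN layer track an exact cumulative average rather than an exponential one, a common choice for one-shot recalibration.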


