
Answers to Imamura Note on the Definition of Neutrosophic Logic
In order to more accurately situate and fit the neutrosophic logic into ...
11/24/2018 ∙ by Florentin Smarandache, et al.

Neural Logic Rule Layers
Despite their great success in recent years, deep neural networks (DNN) ...
07/01/2019 ∙ by Jan Niclas Reimann, et al.

End-to-End DNN Training with Block Floating Point Arithmetic
DNNs are ubiquitous datacenter workloads, requiring orders of magnitude ...
04/04/2018 ∙ by Mario Drumond, et al.

Applying Boolean discrete methods in the production of a real-valued probabilistic programming model
In this paper we explore the application of some notable Boolean methods...
02/18/2016 ∙ by Jonathan Darren Nix, et al.

Unit Dependency Graph and its Application to Arithmetic Word Problem Solving
Math word problems provide a natural abstraction to a range of natural l...
12/03/2016 ∙ by Subhro Roy, et al.

Using Propagation for Solving Complex Arithmetic Constraints
Solving a system of nonlinear inequalities is an important problem for w...
09/11/2003 ∙ by M. H. van Emden, et al.

Deep Neural Networks for Data-Driven Turbulence Models
In this work, we present a novel data-based approach to turbulence model...
06/10/2018 ∙ by Andrea D. Beck, et al.
Neural Arithmetic Logic Units
Neural networks can learn to represent and manipulate numerical information, but they seldom generalize well outside of the range of numerical values encountered during training. To encourage more systematic numerical extrapolation, we propose an architecture that represents numerical quantities as linear activations which are manipulated using primitive arithmetic operators, controlled by learned gates. We call this module a neural arithmetic logic unit (NALU), by analogy to the arithmetic logic unit in traditional processors. Experiments show that NALU-enhanced neural networks can learn to track time, perform arithmetic over images of numbers, translate numerical language into real-valued scalars, execute computer code, and count objects in images. In contrast to conventional architectures, we obtain substantially better generalization both inside and outside of the range of numerical values encountered during training, often extrapolating orders of magnitude beyond trained numerical ranges.
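The abstract above describes the NALU as a learned gate blending an additive path (a neural accumulator with weights biased toward {-1, 0, 1}) and a multiplicative path computed in log space. A minimal NumPy sketch of one forward pass, with hand-set parameters (`W_hat`, `M_hat`, `G` are illustrative names, not from the paper's code release):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nalu_forward(x, W_hat, M_hat, G, eps=1e-7):
    """One forward pass of a NALU cell: gated mix of add and multiply paths."""
    W = np.tanh(W_hat) * sigmoid(M_hat)       # weights pushed toward -1, 0, or 1
    a = x @ W                                  # additive path: learned sums/differences
    m = np.exp(np.log(np.abs(x) + eps) @ W)    # multiplicative path, done in log space
    g = sigmoid(x @ G)                         # learned gate between the two paths
    return g * a + (1.0 - g) * m

# Toy check: saturate the parameters so W ≈ [[1], [1]] and the gate ≈ 1,
# making the cell compute (approximately) x0 + x1.
W_hat = np.full((2, 1), 10.0)   # tanh(10) ≈ 1
M_hat = np.full((2, 1), 10.0)   # sigmoid(10) ≈ 1
G = np.full((2, 1), 10.0)       # gate saturates to the additive path
x = np.array([[3.0, 4.0]])
y = nalu_forward(x, W_hat, M_hat, G)
print(y)  # a value very close to 7.0
```

Because the multiplicative path uses `log(|x|)`, this sketch (like the paper's formulation) cannot represent negative products; setting the gate near 0 instead would make the same cell approximate x0 * x1.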