Learning representations by forward-propagating errors

08/17/2023
by Ryoungwoo Jang, et al.

Back-propagation (BP) is a widely used learning algorithm for neural network optimization. However, BP incurs an enormous computational cost and is too slow for training on a central processing unit (CPU). Consequently, neural network optimization is currently performed on graphics processing units (GPUs) using compute unified device architecture (CUDA) programming. In this paper, we propose a light, fast learning algorithm that runs on a CPU yet is as fast as CUDA-accelerated training on a GPU. The algorithm is based on a forward-propagation method that uses the concept of dual numbers from algebraic geometry.
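
The abstract does not spell out the method, but propagating derivatives forward with dual numbers is the core idea of forward-mode automatic differentiation, which can be sketched briefly. This is a minimal illustration of the dual-number technique, not the authors' implementation; the `Dual` class and `derivative` helper are names I introduce here for the example.

```python
# A minimal sketch of forward-mode differentiation with dual numbers
# a + b*eps, where eps^2 = 0 (illustrative code, not from the paper).
class Dual:
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f(x) and df/dx in a single forward pass."""
    out = f(Dual(x, 1.0))  # seed the dual part with dx/dx = 1
    return out.real, out.eps

# Example: f(x) = 3x^2 + 2x, so f'(4) = 6*4 + 2 = 26
value, grad = derivative(lambda x: 3 * x * x + 2 * x, 4.0)
print(value, grad)  # 56.0 26.0
```

Because the derivative rides along with the function evaluation, no separate backward pass over stored activations is needed, which is what makes a forward-propagating scheme attractive on a CPU.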
