Distributed Training and Optimization of Neural Networks

12/03/2020
by Jean-Roch Vlimant, et al.

Deep learning models are yielding increasingly better performance thanks to multiple factors. To be successful, a model may need a large number of parameters or a complex architecture and be trained on a large dataset. This leads to large requirements on computing resources and turnaround time, even more so when hyper-parameter optimization is performed (e.g., a search over model architectures). While this challenge goes beyond particle physics, we review the various ways to perform the necessary computations in parallel and put them in the context of high energy physics.
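The parallelization strategies surveyed include synchronous data parallelism, in which each worker processes a different shard of the data and gradients are averaged across workers at every step. As a purely illustrative sketch (not the authors' implementation), a minimal data-parallel training loop in PyTorch might look as follows; the model, data, and launcher configuration are placeholders, and the script assumes a launcher such as torchrun sets the RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT environment variables.

# Illustrative synchronous data-parallel training sketch (toy model and
# synthetic data; not the authors' code). Launch with e.g.:
#   torchrun --nproc_per_node=4 train.py
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per worker; use "nccl" instead of "gloo" on GPU clusters.
    dist.init_process_group(backend="gloo")
    rank = dist.get_rank()

    # Placeholder model; a real application would build its own architecture.
    model = DDP(torch.nn.Linear(16, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for step in range(10):
        # Synthetic batch standing in for each worker's data shard.
        x = torch.randn(32, 16)
        y = torch.randn(32, 1)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()   # DDP all-reduces gradients across workers here
        opt.step()
        if rank == 0:
            print(f"step {step} loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Each worker holds a full replica of the model, so this sketch scales the effective batch size with the number of workers rather than the model size; model-parallel and asynchronous schemes trade off differently.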
