neos: End-to-End-Optimised Summary Statistics for High Energy Physics

03/10/2022
by Nathan Simpson, et al.

The advent of deep learning has yielded powerful tools to automatically compute the gradients of arbitrary computations. This is because training a neural network amounts to iteratively updating its parameters via gradient descent to minimise a loss function. Deep learning is thus a special case of a broader paradigm: a workflow with free parameters that is end-to-end optimisable, provided one can keep track of the gradients all the way through. This work introduces neos, an example implementation of this paradigm: a fully differentiable high-energy physics workflow, capable of optimising a learnable summary statistic with respect to the expected sensitivity of an analysis. The resulting optimisation process is aware of the modelling and treatment of systematic uncertainties.
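As a rough sketch of what "end-to-end optimisable" means here, the toy JAX example below chains a small network, a differentiable soft histogram, and a stand-in analysis figure of merit into a single function, so that jax.grad can push gradients from the analysis-level loss back to the network weights. All function names, the soft-binning scheme, and the loss are illustrative assumptions for this sketch only; they are not the neos API or its actual statistical model.

```python
import jax
import jax.numpy as jnp

def network(params, x):
    """Tiny one-layer network producing a per-event summary statistic in (0, 1)."""
    w, b = params
    return jax.nn.sigmoid(x @ w + b)

def soft_histogram(scores, centers, bandwidth=0.05):
    """Differentiable soft-binned histogram: each event is spread smoothly over bins."""
    weights = jax.nn.softmax(-((scores[:, None] - centers[None, :]) ** 2) / bandwidth, axis=1)
    return weights.sum(axis=0)

def toy_loss(params, signal, background, centers):
    """Stand-in for an analysis figure of merit (e.g. an expected significance)."""
    s = soft_histogram(network(params, signal), centers)
    b = soft_histogram(network(params, background), centers)
    z = s / jnp.sqrt(b + 1e-3)          # crude per-bin s / sqrt(b)
    return -jnp.sqrt(jnp.sum(z ** 2))   # minimise the negative "significance"

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
signal = jax.random.normal(k1, (500, 2)) + 1.0   # toy signal events
background = jax.random.normal(k2, (500, 2))     # toy background events
params = (0.1 * jax.random.normal(k3, (2,)), jnp.zeros(()))
centers = jnp.linspace(0.1, 0.9, 5)

# One gradient-descent step on the analysis-level loss, end to end:
# gradients flow through the histogram and the loss back to the weights.
grads = jax.grad(toy_loss)(params, signal, background, centers)
params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)
```

In the actual neos workflow the figure of merit is derived from a proper statistical model with systematic uncertainties, which is what makes the learned summary statistic aware of their treatment; the sketch above only illustrates the gradient flow.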

