Improving robustness of jet tagging algorithms with adversarial training

03/25/2022
by Annika Stein, et al.

Deep learning is a standard tool in the field of high-energy physics, facilitating considerable sensitivity enhancements for numerous analysis strategies. In particular, in the identification of physics objects, such as jet flavor tagging, complex neural network architectures play a major role. However, these methods rely on accurate simulations, and mismodeling can lead to non-negligible performance differences in data, which need to be measured and calibrated against. We investigate the classifier response to input data with injected mismodelings and probe the vulnerability of flavor tagging algorithms via the application of adversarial attacks. Subsequently, we present an adversarial training strategy that mitigates the impact of such simulated attacks and improves classifier robustness. We examine the relationship between performance and vulnerability and show that this method constitutes a promising approach to reducing the vulnerability to poor modeling.
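The attack-and-retrain procedure summarized above can be illustrated with a short, hedged sketch. The snippet below shows a common first-order gradient attack (FGSM-style) on the tagging inputs and a training step on a mixture of nominal and perturbed jets. It is a minimal illustration under assumed names (`model`, `optimizer`, `epsilon`, `adv_weight`), not the authors' implementation.

```python
# Minimal sketch of FGSM-style adversarial training for a jet flavour classifier.
# Assumption: `model` is any PyTorch classifier taking a (batch, n_features) tensor
# of tagging inputs and `y` holds integer class labels; names are illustrative only.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """Return inputs shifted along the loss-gradient sign (first-order attack)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each input feature by +/- epsilon in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def train_step(model, optimizer, x, y, epsilon=0.01, adv_weight=0.5):
    """One update on a mixture of nominal and adversarially perturbed jets."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients accumulated during the attack
    loss = (1 - adv_weight) * F.cross_entropy(model(x), y) \
         + adv_weight * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Evaluating the classifier on `fgsm_perturb`-ed inputs before and after such training gives a simple proxy for the vulnerability studied in the paper: a robust tagger should lose less performance under the same perturbation budget `epsilon`.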
