Robustness of deep learning algorithms in astronomy – galaxy morphology studies

11/01/2021
by A. Ćiprijanović et al.

Deep learning models are increasingly adopted across a wide array of scientific domains, especially to handle the high dimensionality and volume of scientific data. However, their complexity and overparametrization make these models brittle, particularly to inadvertent adversarial perturbations that can arise from common image processing steps, such as compression or blurring, that are often applied to real scientific data. It is crucial to understand this brittleness and to develop models that are robust to such perturbations. To this end, we study how observational noise from reduced exposure time, as well as the worst-case scenario of a one-pixel attack (a proxy for compression or telescope errors), affects the performance of a ResNet18 trained to distinguish between galaxies of different morphologies in LSST mock data. We also explore how domain adaptation techniques can improve model robustness against these naturally occurring perturbations and help scientists build more trustworthy and stable models.
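
To make the two perturbations concrete, here is a minimal PyTorch sketch; all function names and the toy ResNet18 setup are illustrative, not taken from the paper. It rescales pixel fluxes with Poisson shot noise to mimic a shorter exposure, and overwrites a single pixel. Note that an actual one-pixel attack searches for the most damaging pixel location and value (for example with differential evolution) rather than fixing them as done here.

```python
import torch
import torchvision.models as models

def add_exposure_noise(image, exposure_scale=0.5):
    """Mimic a shorter exposure with Poisson (shot) noise.

    `image` is a (C, H, W) tensor of non-negative fluxes; an
    `exposure_scale` below 1 corresponds to collecting fewer photons.
    """
    counts = torch.poisson(image.clamp(min=0) * exposure_scale)
    return counts / exposure_scale

def perturb_one_pixel(image, x, y, value):
    """Single-pixel change, a proxy for compression or telescope errors."""
    perturbed = image.clone()
    perturbed[:, y, x] = value  # overwrite one pixel in every channel
    return perturbed

model = models.resnet18(num_classes=3)  # e.g. spiral / elliptical / merger
model.eval()

clean = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed LSST mock image
noisy = add_exposure_noise(clean[0]).unsqueeze(0)
attacked = perturb_one_pixel(
    clean[0], x=112, y=112, value=clean.max().item()
).unsqueeze(0)

with torch.no_grad():
    for name, batch in [("clean", clean), ("noisy", noisy), ("one-pixel", attacked)]:
        pred = model(batch).argmax(dim=1).item()
        print(f"{name:>9}: predicted class {pred}")
```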
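Domain adaptation can be set up in several ways; one common choice is to penalize the Maximum Mean Discrepancy (MMD) between the latent features of clean ("source") and perturbed ("target") images during training, so the network learns representations that are insensitive to the perturbation. The sketch below assumes this MMD-based variant; `backbone`, `head`, and the batch variables are hypothetical placeholders, not the paper's exact training setup.

```python
import torch
import torch.nn.functional as F

def gaussian_mmd(source_feats, target_feats, sigma=1.0):
    """Maximum Mean Discrepancy with a Gaussian kernel.

    Minimizing this term pulls the feature distributions of clean
    (source) and perturbed (target) images toward each other.
    """
    def kernel(a, b):
        sq_dists = torch.cdist(a, b) ** 2
        return torch.exp(-sq_dists / (2 * sigma ** 2))
    return (kernel(source_feats, source_feats).mean()
            + kernel(target_feats, target_feats).mean()
            - 2 * kernel(source_feats, target_feats).mean())

def training_step(backbone, head, clean_batch, labels, noisy_batch, da_weight=1.0):
    """One hypothetical step: supervised loss on labeled clean images
    plus an MMD penalty aligning clean and perturbed feature spaces."""
    feats_clean = backbone(clean_batch)  # latent features, shape (N, D)
    feats_noisy = backbone(noisy_batch)
    cls_loss = F.cross_entropy(head(feats_clean), labels)
    da_loss = gaussian_mmd(feats_clean, feats_noisy)
    return cls_loss + da_weight * da_loss
```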

Related research

12/28/2021 · DeepAdversaries: Examining the Robustness of Deep Learning Models for Galaxy Morphology Classification
Data processing and analysis pipelines in cosmological survey experiment...

02/28/2021 · Adversarial Information Bottleneck
The information bottleneck (IB) principle has been adopted to explain de...

12/01/2021 · Adv-4-Adv: Thwarting Changing Adversarial Perturbations via Adversarial Domain Adaptation
Whereas adversarial training can be useful against specific adversarial ...

01/01/2023 · ExploreADV: Towards exploratory attack for Neural Networks
Although deep learning has made remarkable progress in processing variou...

04/06/2021 · Exploring Targeted Universal Adversarial Perturbations to End-to-end ASR Models
Although end-to-end automatic speech recognition (e2e ASR) models are wi...

11/16/2020 · Adversarially Robust Classification based on GLRT
Machine learning models are vulnerable to adversarial attacks that can o...

11/29/2022 · Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning
Deep learning methods have gained increased attention in various applica...