DeepAdversaries: Examining the Robustness of Deep Learning Models for Galaxy Morphology Classification

Data processing and analysis pipelines in cosmological survey experiments introduce data perturbations that can significantly degrade the performance of deep learning-based models. Given the increased adoption of supervised deep learning methods for processing and analysis of cosmological survey data, the assessment of data perturbation effects and the development of methods that increase model robustness are increasingly important. In the context of morphological classification of galaxies, we study the effects of perturbations in imaging data. In particular, we examine the consequences of using neural networks when training on baseline data and testing on perturbed data. We consider perturbations associated with two primary sources: 1) increased observational noise, represented by higher levels of Poisson noise, and 2) data processing noise incurred by steps such as image compression or telescope errors, represented by one-pixel adversarial attacks. We also test the efficacy of domain adaptation techniques in mitigating the perturbation-driven errors. We use classification accuracy, latent space visualizations, and latent space distance to assess model robustness. Without domain adaptation, we find that pixel-level processing errors easily flip the classification into an incorrect class and that higher observational noise renders a model trained on low-noise data unable to classify galaxy morphologies. On the other hand, we show that training with domain adaptation improves model robustness and mitigates the effects of these perturbations, improving the classification accuracy by 23% on data with higher observational noise. Domain adaptation also increases, by a factor of ~2.3, the latent space distance between the baseline image and its incorrectly classified one-pixel-perturbed counterpart, making the model more robust to inadvertent perturbations.
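
To make the two perturbation types and the latent-space distance metric concrete, here is a minimal NumPy sketch. It is not the authors' pipeline: the function names, the `exposure_scale` parameter, and the choice of Euclidean distance in latent space are illustrative assumptions.

```python
import numpy as np

def add_poisson_noise(image, exposure_scale=0.5):
    """Simulate higher observational noise by resampling pixel counts
    from a Poisson distribution at a reduced effective exposure.
    (exposure_scale is an assumed knob, not a parameter from the paper.)"""
    counts = np.clip(image, 0, None) * exposure_scale
    return np.random.poisson(counts).astype(float) / exposure_scale

def one_pixel_attack(image, row, col, value):
    """Apply a one-pixel perturbation: overwrite a single pixel,
    mimicking, e.g., a compression or detector artifact."""
    perturbed = image.copy()
    perturbed[row, col] = value
    return perturbed

def latent_distance(z_baseline, z_perturbed):
    """Euclidean distance between the latent representations of the
    baseline and perturbed images (assumed metric)."""
    return np.linalg.norm(z_baseline - z_perturbed)

# Example usage on a synthetic "galaxy" image:
img = np.random.rand(100, 100) * 50.0
noisy = add_poisson_noise(img)
attacked = one_pixel_attack(img, 42, 42, img.max() * 10)
```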
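The abstract does not spell out the domain adaptation technique here; one common unsupervised choice for aligning latent distributions across domains is maximum mean discrepancy (MMD). The sketch below, with an assumed Gaussian kernel and bandwidth `sigma`, shows how such a penalty could be computed on latent batches from the baseline (source) and perturbed (target) domains and added to the classification loss during training.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel matrix between two batches of latent vectors,
    shapes (n, d) and (m, d)."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd_loss(z_source, z_target, sigma=1.0):
    """Squared maximum mean discrepancy between source (baseline) and
    target (perturbed) latent batches; minimizing this term alongside
    the classification loss pulls the two domains together."""
    k_ss = gaussian_kernel(z_source, z_source, sigma).mean()
    k_tt = gaussian_kernel(z_target, z_target, sigma).mean()
    k_st = gaussian_kernel(z_source, z_target, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st
```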


Related research:

- Robustness of deep learning algorithms in astronomy – galaxy morphology studies (11/01/2021). Deep learning models are being increasingly adopted in a wide array of sci...
- Generating Adversarial Attacks in the Latent Space (04/10/2023). Adversarial attacks in the input (pixel) space typically incorporate noi...
- Dual Mixup Regularized Learning for Adversarial Domain Adaptation (07/07/2020). Recent advances on unsupervised domain adaptation (UDA) rely on adversar...
- Optimal Transport as a Defense Against Adversarial Attacks (02/05/2021). Deep learning classifiers are now known to have flaws in the representat...
- Beyond cross-entropy: learning highly separable feature distributions for robust and accurate classification (10/29/2020). Deep learning has shown outstanding performance in several applications ...
- When Causal Intervention Meets Image Masking and Adversarial Perturbation for Deep Neural Networks (02/09/2019). Discovering and exploiting the causality in deep neural networks (DNNs) ...
- MitoVis: A Visually-guided Interactive Intelligent System for Neuronal Mitochondria Analysis (09/03/2021). Neurons have a polarized structure, including dendrites and axons, and c...
