Brain Programming is Immune to Adversarial Attacks: Towards Accurate and Robust Image Classification using Symbolic Learning

03/01/2021
by Gerardo Ibarra-Vazquez, et al.

In recent years, security concerns about the vulnerability of Deep Convolutional Neural Networks (DCNN) to Adversarial Attacks (AA), in the form of small modifications to the input image that are almost invisible to human vision, have made their predictions untrustworthy. Therefore, a new classifier should provide robustness to adversarial examples in addition to an accurate score. In this work, we perform a comparative study of the effects of AA on the complex problem of art media categorization, which involves a sophisticated analysis of features to classify a fine collection of artworks. We tested a prevailing bag-of-visual-words approach from computer vision, four state-of-the-art DCNN models (AlexNet, VGG, ResNet, ResNet101), and the Brain Programming (BP) algorithm. In this study, we analyze each algorithm's performance using accuracy, and we measure robustness with the accuracy ratio between adversarial examples and clean images. Moreover, we propose a statistical analysis of each classifier's prediction confidence to corroborate the results. We confirm that the change in BP's predictions was below 2% on adversarial examples computed with the fast gradient sign method. Under the multiple-pixel attack, BP obtained four out of seven classes without changes and the rest with a maximum error of 4% in the predictions. Finally, BP also obtained four categories without changes under adversarial patches, with a variation of 1% in the remaining three classes. Additionally, the statistical analysis showed that BP's prediction confidence was not significantly different for each pair of clean and perturbed images in every experiment. These results demonstrate BP's robustness to adversarial examples compared with DCNN and handcrafted-feature methods, whose performance on art media classification was compromised by the proposed perturbations.
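The two measurements at the center of the study, the fast gradient sign method (FGSM) perturbation and the adversarial-to-clean accuracy ratio used as a robustness score, can be illustrated with a short sketch. This is a minimal illustration and not the authors' code: the PyTorch classifier model, the data loader, and the epsilon value are assumptions made only for the example.

    # Minimal sketch: FGSM adversarial examples and a robustness score defined as
    # accuracy on perturbed images divided by accuracy on clean images.
    # 'model' (a classifier in eval mode), 'loader', and 'epsilon' are assumed here.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon=0.03):
        """FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        adv = images + epsilon * images.grad.sign()
        return adv.clamp(0, 1).detach()

    def robustness_ratio(model, loader, epsilon=0.03):
        """Adversarial accuracy divided by clean accuracy (1.0 = fully robust)."""
        clean_correct, adv_correct, total = 0, 0, 0
        for images, labels in loader:
            adv_images = fgsm_attack(model, images, labels, epsilon)
            with torch.no_grad():
                clean_correct += (model(images).argmax(1) == labels).sum().item()
                adv_correct += (model(adv_images).argmax(1) == labels).sum().item()
            total += labels.size(0)
        return (adv_correct / total) / max(clean_correct / total, 1e-12)

A ratio close to 1 under this scheme indicates that the perturbations barely affect the classifier, which is the behavior the abstract reports for Brain Programming; a ratio well below 1 indicates the kind of degradation reported for the DCNN and handcrafted-feature baselines.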


