A Deep Genetic Programming based Methodology for Art Media Classification Robust to Adversarial Perturbations

10/03/2020
by Gustavo Olague, et al.

Art media classification is an active research area that has attracted attention due to the complex feature extraction and analysis required for high-value art pieces. The perception of these attributes cannot be left to subjective judgment, since humans sometimes bring biased interpretations to artworks; ensuring the trustworthiness of automated observation is therefore essential. Machine learning has outperformed other approaches in many areas through its ability to learn feature extractors from images instead of relying on handcrafted feature detectors. However, a major concern about its reliability has emerged: small perturbations made intentionally to the input image (an adversarial attack) can completely change its prediction. We foresee two ways of approaching this situation: (1) solve the problem of adversarial attacks in current neural network methodologies, or (2) propose a different approach that can challenge deep learning without being affected by adversarial attacks. The first has not yet been solved, and adversarial attacks have only become harder to defend against. Therefore, this work presents a deep genetic programming method, called Brain Programming, that competes with deep learning, and studies the transferability of adversarial attacks using two artwork databases curated by art experts. The results show that, unlike AlexNet, Brain Programming preserves its performance under these perturbations, making it robust to adversarial attacks while remaining competitive with deep learning.
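To make the notion of an adversarial attack concrete, here is a minimal sketch of a fast-gradient-sign (FGSM-style) perturbation, one standard attack family; the paper does not specify this exact method, and the tiny linear "classifier" below is a stand-in for a deep network, not the authors' model.

```python
import numpy as np

# Toy 2-class linear model: logits = W @ x (a stand-in for a deep network).
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))   # illustrative weights
x = rng.normal(size=16)        # illustrative "image" as a flat vector

def predict(W, x):
    return int(np.argmax(W @ x))

# For a linear model, the gradient of (wrong-class logit - true-class logit)
# with respect to x is just the difference of the two weight rows.
true_class = predict(W, x)
other = 1 - true_class
grad = W[other] - W[true_class]

# FGSM step: nudge every input component by eps in the gradient's sign
# direction, so the change per "pixel" is bounded by eps yet the margin
# of the true class shrinks as much as possible under that bound.
eps = 0.5                      # perturbation budget (illustrative)
x_adv = x + eps * np.sign(grad)

print("clean prediction:      ", predict(W, x))
print("adversarial prediction:", predict(W, x_adv))
print("max per-component change:", np.max(np.abs(x_adv - x)))
```

Even when the prediction does not flip for a given budget, the true-class margin strictly decreases, which is why increasing `eps` eventually changes the output while the perturbation stays visually small.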

Related research

- 09/12/2023: Adversarial Attacks Assessment of Salient Object Detection via Symbolic Learning
- 08/11/2022: Diverse Generative Adversarial Perturbations on Attention Space for Transferable Adversarial Attacks
- 11/19/2020: Adversarial Threats to DeepFake Detection: A Practical Perspective
- 04/17/2020: Adversarial Attack on Deep Learning-Based Splice Localization
- 04/27/2020: Transferable Perturbations of Deep Feature Distributions
- 03/01/2021: Brain Programming is Immune to Adversarial Attacks: Towards Accurate and Robust Image Classification using Symbolic Learning
