Adaptive Spatial Steganography Based on Probability-Controlled Adversarial Examples

04/08/2018
by Sai Ma, et al.

Deep learning models are vulnerable to adversarial attacks, which craft special input samples that cause the model to misclassify them. Beyond deep learning models, adversarial attacks are also effective against feature-based machine learning models. In this paper, we discuss the application of adversarial attacks to improve the ability of steganographic schemes to resist steganalysis. We use a steganalytic neural network as the adversarial generator, with the goal of improving the performance of typical spatial adaptive steganography against both rich-model-based and neural-network-based steganalysis. The adversarial method is combined with adaptive steganography by controlling the flipping directions of pixels according to the gradient map, so that the resulting stego image "looks like" an innocent cover to the steganalyzer. However, the generated steganographic adversarial examples only deceive a steganalyzer trained on non-adversarial examples; a steganalyzer retrained on adversarial examples detects them with a low error rate, because the adversarial perturbation makes the stego image more distinguishable from the cover. To address this, we adjust the way the gradient map is computed: instead of targeting the category (one-hot) vector, we drive the softmax output toward a particular probability vector, so the generated adversarial example is controlled by the probability output of the softmax layer. With this adjustment, the adversarial scheme outperforms typical adaptive steganography. We develop a practical adversarial steganographic method based on double-layered STC, and experiments demonstrate its effectiveness against both rich-model and neural-network steganalysis.
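The following is a minimal sketch of the gradient-map and probability-control ideas described in the abstract, not the authors' implementation. The steganalyzer network `steganalyzer`, the target probability vector `target_probs`, the per-pixel embedding costs `rho_plus`/`rho_minus`, and the scaling factor `alpha` are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def probability_controlled_gradient(steganalyzer, cover, target_probs):
    # Gradient of the divergence between the softmax output and a chosen
    # probability vector, taken with respect to the cover pixels.
    x = cover.clone().detach().requires_grad_(True)
    logits = steganalyzer(x)                     # (1, 2) cover/stego scores
    log_probs = F.log_softmax(logits, dim=1)
    # KL divergence to the target probability vector instead of a one-hot label.
    loss = F.kl_div(log_probs, target_probs, reduction="batchmean")
    loss.backward()
    return x.grad.detach()                       # the gradient map

def adversarial_costs(rho_plus, rho_minus, grad_map, alpha=0.5):
    # Bias the +1 / -1 embedding costs so that the flip direction that lowers
    # the loss (moves the softmax output toward the target vector) is cheaper.
    rho_p, rho_m = rho_plus.clone(), rho_minus.clone()
    rho_p[grad_map < 0] *= alpha   # a +1 change decreases the loss here
    rho_m[grad_map > 0] *= alpha   # a -1 change decreases the loss here
    return rho_p, rho_m
```

The adjusted cost pair would then be handed to a double-layered STC encoder (not shown) to perform the actual embedding; replacing the one-hot cover label with a target probability vector is the "probability-controlled" adjustment the abstract refers to.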

