Adaptive Spatial Steganography Based on Probability-Controlled Adversarial Examples

04/08/2018
by Sai Ma, et al.

Deep learning models are vulnerable to adversarial attacks, which craft special input samples that cause a model to misclassify them. Beyond deep learning models, adversarial attacks are also effective against feature-based machine learning models. In this paper, we discuss applying adversarial attacks to improve the ability of steganographic schemes to resist steganalysis. We use a steganalytic neural network as the adversarial generator. Our goal is to improve the performance of typical spatial adaptive steganography against both rich-model-based and neural-network-based steganalysis. The adversarial method can be combined with adaptive steganography by controlling the flipping direction of each pixel according to the gradient map. The resulting steganographic adversarial example then "looks like" an innocent cover to the steganalyzer. However, such adversarial examples only deceive a steganalyzer trained on non-adversarial examples; a steganalyzer retrained on adversarial examples detects them easily, with a low detection error rate. In this sense, the adversarial method makes the generated stego image more distinguishable from the cover image. To address this, we adjust how the gradient map is computed: instead of modifying the category vector, we set the softmax probability vector to a particular target vector, so the generated adversarial example is controlled by the probability output of the softmax layer. With this adjustment, the adversarial scheme outperforms typical adaptive steganography. We develop a practical adversarial steganographic method based on double-layered STC. Experiments demonstrate its effectiveness against both rich-model and neural-network steganalyzers.
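To illustrate the core idea, the following is a minimal sketch (in PyTorch-style Python) of gradient-guided control of embedding directions, assuming a hypothetical pretrained steganalyzer `steganalyzer` that outputs logits over (cover, stego) and an additive-distortion cost map `rho` from an adaptive scheme such as S-UNIWARD. All names, shapes, and the penalty factor are illustrative assumptions, not the authors' implementation; it only shows how a softmax target vector and the resulting gradient map could bias the +1/-1 change costs fed into double-layered STC.

```python
import torch

def asymmetric_costs(cover, rho, steganalyzer, target=(1.0, 0.0)):
    """Bias +1/-1 embedding costs so pixel changes follow the gradient toward 'cover'.

    cover: tensor of shape (1, 1, H, W); rho: symmetric cost map of the same shape.
    target: desired softmax probability vector, e.g. (1, 0) = "looks like cover".
    """
    x = cover.clone().requires_grad_(True)
    probs = torch.softmax(steganalyzer(x), dim=1)          # steganalyzer softmax output
    # Loss pulls the softmax output toward the chosen probability vector
    # (a target probability vector rather than a hard category label).
    loss = torch.nn.functional.mse_loss(probs, torch.tensor([target]))
    loss.backward()
    grad_sign = x.grad.sign()                               # gradient map over pixels

    # Prefer the flipping direction that decreases the loss:
    # where the gradient is positive, a -1 change lowers the "stego" response,
    # so the +1 change is made more expensive, and vice versa.
    large = 10.0 * rho                                      # penalty factor (assumed)
    rho_p1 = torch.where(grad_sign > 0, large, rho)         # cost of +1 changes
    rho_m1 = torch.where(grad_sign < 0, large, rho)         # cost of -1 changes
    return rho_p1, rho_m1                                   # passed to double-layered STC
```

The asymmetric cost pair (rho_p1, rho_m1) would then replace the symmetric costs in a standard double-layered STC embedder, so that embedding changes preferentially follow the gradient direction dictated by the target softmax vector.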
