
PixelSteganalysis: Pixel-wise Hidden Information Removal with Low Visual Degradation

It is difficult to detect and remove secret images that are hidden in natural images by deep-learning algorithms. Our technique is the first work to effectively disable covert communications and transactions that use deep-learning steganography. We address the problem by exploiting sophisticated pixel distributions and edge areas of images using a deep neural network. Based on the given information, we adaptively remove secret information at the pixel level. We also introduce a new quantitative metric called the destruction rate, since the decoding method of deep-learning steganography is approximate (lossy), unlike conventional steganography. We evaluate our technique on three public benchmarks in comparison with conventional steganalysis methods and show that the decoded rate improves by 10-20%.



1 Introduction

Steganography is the science of unnoticeably concealing a secret message within a plain cover image, to covertly send messages to an intended recipient [Johnson and Jajodia1998a]. When a secret message is hidden in a cover image, the output is called a stego image. Steganalysis is the detection or removal of a secret message in a stego image [Johnson and Jajodia1998b].

With an upsurge of big data on the internet, the threat of unauthorized and unlimited information transaction and display has arisen. Steganography has been used by international terrorist organizations, companies, and the military for covert communication [Sharma and Gupta2012]. It is also being used to steal company confidential information [King2018].

Figure 1: How steganography works and how steganalysis can disturb it. The destroyed secret image shows that our method, PixelSteganalysis, disrupts a covert transmission between the sender and the receiver with an imperceptible difference in the stego image.

In the process of covertly embedding a secret message into a cover image, the original cover image should be only marginally altered to produce the stego image [Johnson and Jajodia1998a]. To avoid statistical and visual detection, the payload of the secret message is small in conventional steganography. Many conventional steganographic methods embed secret messages within the least significant bits (LSBs) of the cover image [Pevnỳ et al.2010, Holub and Fridrich2012]. Hence, the secret messages hidden by conventional steganography can be removed by relatively simple steganalysis methods such as JPEG compression [Fridrich et al.2002] and noise reduction [Gou et al.2007]. For conventional steganography, both texts and images are applicable as the secret message, because the decoding method is lossless and thus fully retains the meaning of the text.

Deep-learning has shown great performance in various fields. Recently, the field of deep-learning steganography has experienced rapid development [Yang et al.2018, Wu et al.2018, Husien and Badi2015]. Currently proposed deep-learning steganography disperses representations of secret images across all available bits [Baluja2017], not restricted to LSBs. The payload of a secret message encoded by a deep-learning method is comparatively large. Since the decoding methods are approximate (lossy) in deep-learning steganography, secret messages are limited to the image form.

Depending on privilege levels, steganalysis can be categorized as passive or active [Amritha et al.2016]. Passive steganalysis algorithms aim to determine whether an image contains a secret message or not. Most passive algorithms look for features associated with a particular steganography technique (non-blind). On the other hand, active steganalysis algorithms, also called stego firewalls [Voloshynovskiy et al.2002], have the privilege to modify images so that the secret message can no longer be recovered [Amritha et al.2016]. However, the images coming out of active steganalysis need to appear as nearly unchanged as possible because not all images are stego images. That is, a good active steganalysis technique should aim to remove a secret message as much as possible with a low degree of visual degradation on a stego image. The full scheme of steganography and active steganalysis is depicted in Fig. 1.

The deep-learning steganographic methods are not easily detectable by conventional passive steganalysis, since the deep-learning methods attempt to maintain the pixel distribution of an image to the greatest extent [Hayes and Danezis2017, Dong et al.2018]. Furthermore, the robustness of stego images generated by the deep-learning methods against conventional active steganalysis is demonstrated by Yu [Yu2018].

We propose a new way of removing the secret image as much as possible with minimal change to the appearance of the image. The contributions of our work are as follows:


  • To the best of our knowledge, this is the first method that is effective in removing a secret image encoded by steganography methods based on deep-learning. Experimentally, we show that our approach outperforms conventional active steganalysis on stego images produced by deep-learning as well as conventional steganography.

  • We propose a novel structure consisting of an analyzer and an eraser. The analyzer is trained in an end-to-end manner. From it, we extract the distribution of each pixel and the edge areas of the image via a single neural network. The eraser adaptively removes suspicious traces pixel by pixel using the analyzed results from the analyzer.

  • The decoding method of conventional steganography is mostly exact (lossless). However, the decoding method of the recent deep-learning steganography is approximate (lossy). We propose a new evaluation metric called the destruction rate, which is required to calculate the performance of steganalysis against deep-learning steganography accurately.

  • We conducted various combinations of experiments between steganography and steganalysis. Specifically, we also evaluated the performance of our method on stego images produced by conventional steganography with a high payload, not just deep-learning steganography. Our method showed better results in this case compared to widely used active steganalysis.

We supply additional samples and graphs for the various datasets at an anonymous link: https://anonymous-steganalysis.github.io/. Please check the link for additional figures, examples, and information about settings.

2 Background

Figure 2: Three examples from Deep Steganography, ISGAN, and LSB insertion, respectively. To clearly show the residual patterns of each steganographic method, we amplify the original residual images by a factor of 20 (×20).

2.1 Conventional Steganography

The least significant bit (LSB) [Johnson and Jajodia1998a, Mielikainen2006] is the most conventional steganographic algorithm. Recent studies have also proposed approaches that maintain image statistics or design special distortion functions [Holub et al.2014, Lerch-Hostalot and Megías2016]. Pevnỳ et al. [Pevnỳ et al.2010] proposed a distortion function domain that takes into account the cost of changing every pixel. Holub and Fridrich [Holub and Fridrich2012] encoded a secret message in accordance with the statistically analyzed textural relevance of image areas. We hide a grayscale secret image in a color cover image using the most basic LSB insertion method [Johnson and Jajodia1998a] in order to test whether our method is also effective against conventional steganography. We experimentally show that our method outperforms the conventional steganalysis methods, considering both the visual degradation and the removal of the secret image.
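As a concrete illustration of LSB insertion, the following sketch hides the top bits of a grayscale secret image in the low bits of a cover image. The bit depth k = 2 and the function names are our own illustrative choices, not the exact scheme of the cited works.

```python
import numpy as np

def lsb_embed(cover, secret, k=2):
    # Zero out the k least significant bits of the cover,
    # then insert the k most significant bits of the secret.
    cover_hi = cover & ~np.uint8(2 ** k - 1)
    secret_hi = secret >> (8 - k)
    return cover_hi | secret_hi

def lsb_extract(stego, k=2):
    # Recover the k embedded bits and shift them back to the top.
    return (stego & np.uint8(2 ** k - 1)) << (8 - k)
```

Because only the low k bits of the cover change, the per-pixel distortion is at most 2^k − 1, which is why plain LSB insertion is visually imperceptible yet easily scrubbed by low-order-bit manipulation.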

2.2 Deep-Learning Steganography

Unlike conventional steganography methods, deep-learning based methods can hide more messages. Recently, many deep-learning based steganography methods have been suggested, based on either convolutional neural networks (CNNs) [Yang et al.2018, Wu et al.2018, Husien and Badi2015, Pibre et al.2016] or generative adversarial networks (GANs) [Volkhonskiy et al.2017, Hayes and Danezis2017, Shi et al.2017]. Among these, we consider two state-of-the-art methods, one for each architecture: Deep Steganography [Baluja2017] and invisible steganography via GAN (ISGAN) [Dong et al.2018].

Deep Steganography is one of the state-of-the-art deep-learning based steganography algorithms. It hides a color secret image within a color cover image. The architecture consists of a prep-network, a hiding-network (encoder), and a reveal-network (decoder). As shown in the first row of Fig. 2, the stego image and the decoded secret image look very similar to the cover image and the secret image, respectively. However, the background color of the stego image is slightly reddish, and we can observe that a diagonal grid pattern is distributed all over the background of the residual images.

ISGAN consists of an encoder, a decoder, and a discriminator. ISGAN considers only gray images as possible secret images. ISGAN differs from other methods in that it uses only the Y channel among the YCrCb channels of the cover image to hide the gray secret image. ISGAN uses structural similarity index (SSIM) values instead of simple pixel differences for the reconstruction errors in order to produce more natural-looking stego images, as shown in the second row of Fig. 2. However, we can notice that the overall residual values are large, particularly in the edge areas. The illumination of the decoded secret image deviates noticeably from the original secret image.

2.3 Conventional Active Steganalysis

There are various approaches to active steganalysis against conventional steganography methods. One of the most basic is to take the LSB planes of the stego image and flip the bits [Ettinger1998]. Another commonly used strategy for removing the secret message is to overwrite with random bits, suggested by Gaussian noise or otherwise, the positions where the message may reside [Fridrich et al.2002]. There is also a methodology that uses denoising, assuming that the secret image is noise added to the cover image. For example, a median filter [Gou et al.2007] is a handy tool for removing noise in an image, especially sporadic noise of large variance. The application of deconvolution for restoration after applying the denoising filter is also a main approach; the added secret message is expected to be erased during the processes of denoising and restoring. Wiener restoration is a representative method for conventional active steganalysis [Amritha et al.2016].

According to Lafferty [Lafferty2008], randomization maintains good image quality and comparably high removal ability for spatial-domain steganography algorithms, while the median filter and Wiener restoration operate quite effectively for frequency-domain steganography algorithms. Thus, we chose these three active steganalysis methods for comparison with our work.
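Two of the baselines above can be sketched roughly as follows: randomization by Gaussian-noise overwriting and a hand-rolled 3×3 median filter. The parameter values and helper names are illustrative assumptions; Wiener restoration is omitted for brevity.

```python
import numpy as np

def gaussian_overwrite(stego, sigma=2.0, seed=0):
    # Randomization: perturb pixel values with small Gaussian noise so
    # that any low-order-bit message is overwritten.
    rng = np.random.default_rng(seed)
    noisy = stego.astype(float) + rng.normal(0.0, sigma, stego.shape)
    return np.clip(np.rint(noisy), 0, 255).astype(np.uint8)

def median_filter3(img):
    # 3x3 median filter via edge-padded sliding windows (numpy only).
    padded = np.pad(img, 1, mode="edge")
    windows = [padded[r:r + img.shape[0], c:c + img.shape[1]]
               for r in range(3) for c in range(3)]
    return np.median(np.stack(windows, axis=0), axis=0).astype(np.uint8)
```

Both operations are blind: they require no knowledge of the embedding scheme, but they also degrade innocuous images, which motivates the pixel-adaptive approach proposed in Sec. 3.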

3 Proposed Methods

Figure 3: Model overview. The analyzer receives the stego image x as input and produces an edge distribution E and pixel distributions P. The analyzer consists of convolutional neural networks (CNN) with residual connections and fully connected (FC) layers. h is the activation of the last FC layer. The vector e of h is reshaped into the edge distribution E, and the remaining components of h are transformed into the pixel distributions P. The analyzer is trained to minimize the sum of an image loss and an edge loss. The eraser receives the generated E and P, and removes suspicious traces pixel by pixel using the given information. As a result, our proposed method, PixelSteganalysis, produces a purified stego image x̂. The secret images decoded from the original stego image x and from the purified stego image x̂ show that our method is effective in removing the secret image while maintaining the original quality of the image x.

Considering the intrinsic characteristics of the deep-learning steganography methods, we propose an active steganalysis method, PixelSteganalysis. The proposed method requires neither knowledge of the utilized steganography method nor the distribution of the original cover image (blind), and removes a hidden image with minimal perceptual degradation, or even perceptual improvement, of the stego image.

The candidate input images are not limited to grayscale; however, for easier visualization and explanation, we assume grayscale input images in the following sections.

As described in Fig. 3, our method consists of an analyzer and an eraser. The analyzer takes the stego image as input and outputs an edge distribution and a pixel distribution of the given image. The generated distributions are then used by the eraser to remove a secret image hidden in a cover image.

A potential stego image is denoted x, and the image size H×W represents the height and width of the image. We obtain a set of distributions over all pixels, P, and the edge areas, E, using the deep-learning architecture (Sec. 3.1). On the basis of the pixel distributions P and the data-driven condition E, we visit the pixels one by one, check for the presence of stego information, and adjust the pixel values under some constraints to remove it (Sec. 3.2).

3.1 Analyzer

The best scenario from the perspective of steganalysis is that both the cover and stego images are accessible. However, this is commonly impractical. Thus, we suggest an approach that removes the secret image by adjusting the pixel values of suspicious regions, where the secret image may be hidden, through pixel-level information.

The goal of the analyzer is to obtain a set of the distribution of all pixels and the edge areas of the image from the neural network trained with a dataset having similar distribution as the original cover images. The basic structure of the analyzer comprises convolutional neural networks (CNN) with residual connections and fully connected (FC) layers. The CNN architecture of the analyzer is inspired by PixelCNN++, where pixel-level conditional distributions can be obtained [Salimans et al.2017].

To train the analyzer, we minimize the sum of an image loss and an edge loss:

L = λ_img · L_img + λ_edge · L_edge, (1)

where λ_img and λ_edge are hyperparameters that balance the strength of the two loss terms.

The image loss, L_img, is the negative log-likelihood of the image, obtained from the product of the conditional distributions of its pixels:

L_img = −∑_i log p(x_i | x_1, …, x_{i−1}), (2)

where x_i represents each pixel of a training image x.

The edge loss, L_edge, is the mean squared error between the edges obtained using a conventional edge detector and those predicted by our neural network:

L_edge = (1/HW) ∑_i (E_i − Ê_i)², (3)

where i indexes each pixel of a training image x, E_i represents the edge value from the conventional edge detector, and Ê_i represents the predicted edge value. There are many approaches to detect edge areas. In our method, we use the Prewitt operator [Prewitt1970], which is less sensitive to noise than other operators such as the Canny detector [Canny1986].
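The Prewitt edge map used as the training target for the edge loss can be sketched as a plain gradient-magnitude computation; the edge-padding choice and function name here are illustrative assumptions.

```python
import numpy as np

def prewitt_edges(img):
    # Prewitt kernels: uniform smoothing along one axis combined with a
    # central difference along the other.
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img.astype(float), 1, mode="edge")
    H, W = img.shape
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    # Accumulate the correlation one kernel tap at a time (numpy only).
    for r in range(3):
        for c in range(3):
            patch = padded[r:r + H, c:c + W]
            gx += kx[r, c] * patch
            gy += ky[r, c] * patch
    return np.hypot(gx, gy)  # gradient magnitude per pixel
```

Normalizing this map by its maximum gives the kind of per-pixel edge weight the eraser later uses to decide how much each pixel may be modified.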

As described in Fig. 3, the activation of the last FC layer of the analyzer is named h, and consists of an edge vector e and the logistic mixture parameters, including the means μ and scales s. With the parameters trained on a dataset with properties similar to the original cover images, we obtain a discretized logistic mixture likelihood over all K pixel values for each pixel, where K is the pixel depth dimension. This procedure is represented as the transformer in Fig. 3. Since the operation of the transformer follows Salimans et al. [Salimans et al.2017], we refer readers to that paper for more details.
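As a simplified illustration of the transformer step, a single-component discretized logistic distribution over K = 256 pixel values can be computed by integrating the logistic density over each unit bin. PixelCNN++ uses a mixture of several such components; we show one component for clarity, with hypothetical parameter values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discretized_logistic_probs(mu, s, depth=256):
    # Probability of each discrete pixel value v in [0, depth - 1] under a
    # logistic(mu, s) density, integrated over the bin [v - 0.5, v + 0.5].
    v = np.arange(depth, dtype=float)
    lower = sigmoid((v - 0.5 - mu) / s)
    upper = sigmoid((v + 0.5 - mu) / s)
    # Edge bins absorb the remaining tails so the total mass sums to 1.
    lower[0] = 0.0
    upper[-1] = 1.0
    return upper - lower
```

A stack of such per-pixel distributions, one length-K vector per pixel, is exactly the pixel-distribution tensor P the eraser consumes.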

The vector e of h is reshaped into an H×W map, named the edge distribution E. The reason for measuring the edge distribution is that a large amount of secret information is hidden in the edge areas of the cover image in some cases of conventional steganography and in all cases of deep-learning steganography. The edge distribution is used to determine the range of suspicious information considered in the eraser. In an ablation study (Sec. 5.2), we experimentally show the positive effect of edge detection.

Figure 4: Three examples of how our method and the commonly used conventional method, Gaussian noise, differ in efficiency at the same PSNR. In each example, the two images on the top right represent the stego image modified with Gaussian noise and the secret image decoded from that stego image, respectively. In the same manner, the two images at the bottom right are the stego image modified with our method and its decoded secret image, respectively. The decoded rate of our method is lower than that of Gaussian noise.

3.2 Eraser

With the eraser, the attempt is to move the distribution of the potential stego image x toward the distribution of its original cover image. We substitute the pixel values of suspicious regions with the neighboring pixel values of highest probability. Consequently, we aim to find a purified stego image x̂ that maximizes p(x̂) under some conditions.

We control the considered range of pixel modification, and the range is decided by two factors: the hyperparameter ε and the predicted edge distribution E.

Using the values of the edge distribution, we calculate the adaptive range of modification as shown below:

ε_ij = max(ε_min, ⌈ε · E_ij / E_max⌉), (4)

where E_max is the maximum edge value and ε_min is a hyperparameter that represents the least allowed degree of modification. ε is the largest allowed degree of modification, which is reached where E_ij = E_max. We recommend keeping ε_min greater than or equal to 1, since differences between cover and stego images also exist in the non-edge areas.

The pixel value of the stego image does not deviate much from the corresponding pixel value of the cover image. Based on the calculated ε_ij, we set up the adaptive range of pixel values to consider:

[x_ij − ε_ij, x_ij + ε_ij]. (5)

For every pixel, the pixel distribution should be re-extracted, because the pixel values of the image are modified sequentially. We use the re-extracted pixel distribution to replace each pixel with the value taking the highest probability within the allowed neighboring values:

x̂_ij = argmax_{v ∈ [x_ij − ε_ij, x_ij + ε_ij]} p(v | x̂_{<ij}). (6)

However, re-extracting the pixel distribution for all pixels requires a long time to modify a single image. To decrease the runtime, we propose an approximation of Eq. 6: the pixel distribution is extracted only once, before the iteration, and we keep utilizing this distribution for subsequent measurements of the highest probability value within the adaptive range,

x̂_ij = argmax_{v ∈ [x_ij − ε_ij, x_ij + ε_ij]} p(v | x_{<ij}), (7)

for every pixel. This leads to a much faster runtime but slightly decreases the quality of the results. The comparison between the original case and the approximated case is presented in Sec. 5.2.
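A minimal sketch of the approximated eraser might look like the following. The array shapes, the parameter names, and the pre-extracted `pixel_probs` tensor are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def erase(stego, edge_map, pixel_probs, eps=4, eps_min=1):
    """Replace each pixel with the most probable value inside an
    edge-adaptive neighborhood, using distributions extracted once
    (the fast approximation). pixel_probs: (H, W, 256) tensor of
    per-pixel value distributions from the analyzer."""
    purified = stego.copy()
    e_max = max(float(edge_map.max()), 1e-8)  # avoid division by zero
    H, W = stego.shape
    for i in range(H):
        for j in range(W):
            # Edge-adaptive range: stronger edges allow larger changes,
            # but never less than eps_min.
            r = max(eps_min, int(np.ceil(eps * edge_map[i, j] / e_max)))
            lo = max(0, int(stego[i, j]) - r)
            hi = min(255, int(stego[i, j]) + r)
            # Pick the most probable value in the allowed window.
            window = pixel_probs[i, j, lo:hi + 1]
            purified[i, j] = lo + int(np.argmax(window))
    return purified
```

Because `pixel_probs` is computed once for the whole image, each pixel update is a constant-time window argmax, which is what makes the approximation fast compared to re-running the analyzer after every modification.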

4 Destruction Rate: Proposed Evaluation Metric

To measure the degree of destruction of the secret image, we propose a new evaluation metric to assess the performance of active steganalysis against both conventional and deep-learning steganography. We adopt the SSIM and the peak signal-to-noise ratio (PSNR) to measure the quality of the purified image. The PSNR is a basic metric for measuring the quality of a modified image compared to the original image (the larger the PSNR, the higher the quality).

For measuring the degree of destruction, conventional steganography guarantees lossless recovery of the secret message; therefore, the decoded rate could be used [Amritha et al.2016, Wu et al.2018]. With pixel values normalized to [0, 1], the decoded rate is calculated by

DC = 1 − (1/HW) ∑_{i,j} |s_ij − d_ij|, (8)

where s and d represent the original secret image and the decoded secret image, respectively, and H and W represent the height and width, respectively.

After the deep-learning methods were proposed, this guarantee was relaxed. Thus, if we only use the decoded rate to measure the performance of active steganalysis against the deep-learning steganography methods, the measurement depends on the performance of the trained decoding algorithm. In other words, the decoded rate between the original secret image and the unmodified decoded secret image is already only approximately 90% [Baluja2017, Dong et al.2018] in all the recent deep-learning based studies, and its value differs by how well the decoding algorithm is trained. Thus, we suggest a new evaluation metric called the destruction rate. The destruction rate is defined by

DT = (1/HW) ∑_{i,j} |d_ij − d′_ij|, (9)

where d and d′ represent the decoded secret image and the decoded secret image after a steganography removal algorithm is applied, respectively. To be independent of the performance of the decoding algorithm, the base image is changed to the unmodified decoded secret image instead of the original secret image. We believe this metric is more reliable for representing the pure degree of destruction achieved by each active steganalysis method.
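Under the assumption that uint8 pixel values are normalized by 255, the two metrics can be sketched as:

```python
import numpy as np

def decoded_rate(secret, decoded):
    # Fraction of pixel intensity correctly recovered: 1 minus the
    # normalized mean absolute pixel error between the original secret
    # image and the decoded secret image.
    diff = np.abs(secret.astype(float) - decoded.astype(float)) / 255.0
    return 1.0 - diff.mean()

def destruction_rate(decoded, decoded_after):
    # Normalized mean absolute change of the decoded secret image caused
    # by the removal algorithm; independent of decoder quality because it
    # compares against the unmodified decoded image, not the original secret.
    diff = np.abs(decoded.astype(float) - decoded_after.astype(float)) / 255.0
    return diff.mean()
```

A high destruction rate with a high PSNR on the stego image is the regime an active steganalysis method should target.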

5 Experiments

5.1 Performance of Removing Hidden Images

We make use of three datasets, CIFAR-10, BOSS1.0.1, and ImageNet, for the evaluation. We compared our method with three commonly used conventional steganalysis methods: Gaussian noise, denoising, and restoration. Our proposed method showed up to a 20% improvement in the decoded rate. We experimented with ε values up to 8.

We used the decoded rate (DC) for benchmarking against previous works. To demonstrate performance precisely, we proposed the new evaluation metric, the destruction rate (DT), and show an example of why this metric should be considered. The DT values are provided in detail in the supplementary S3.

The decoding method of deep-learning steganography is lossy. Thus, it is difficult to accurately evaluate the applied active steganalysis using only the DC. For example, as shown in Fig. 5, the DC of the decoded secret image at the top is 0.91; however, we can still see the original information, and the secret image is not removed at all. On the contrary, there is a case where the decoded rate is 0.86, yet the secret image is hard to interpret. Therefore, the DT is more suitable as the evaluation metric, considering the recently proposed deep-learning steganography methods.

Figure 5: Top: a failure of removal, showing an almost zero DT value. Bottom: a success of removal, showing a relatively high DT value.

As shown in Fig. 4 and Fig. 6, our method exhibits the best performance when PSNR and DC are considered together. In the case of Gaussian noise, if ε exceeds 4, the stego image becomes perceptually blurred, so it is difficult to use values above 4. The denoising and restoration methods are independent of ε. In the case of denoising, the DC is much lower than that of our method, but the PSNR is very low compared to the other steganalysis methods. In the case of restoration, the PSNR is similar to that of our method (even better at ε = 8 for LSB). However, the DC does not drop very much; for example, considering that the DC of ISGAN is originally 0.9, the DC did not drop at all. The visual samples are presented in the supplementary S9, S11, and S12. In addition, we experimented with an LSB insertion approach. As shown in Fig. 6, our method outperforms not only on the deep-learning steganography methods but also on LSB. Figures of PSNR and DC for the other datasets, which show a similar trend to Fig. 6, are provided in the supplementary S5 and S6.

In the case of active steganalysis, we assume the privilege of applying modifications to all scanned images. We therefore examined the image degradation when applying our method to innocuous cover images. There was some degree of degradation, but the change was hardly visible up to ε = 4, and the PSNR of our method is higher than that of the compared conventional steganalysis methods for ε values up to 4. Details can be found in the supplementary S4.

5.2 Ablation Studies

No Edge Detection. The residuals between the cover image and stego image are much larger in the edge areas, while the residual values are comparably small in the non-edge (low-frequency) areas, as shown in Fig. 2. This is because alteration of the edge areas is statistically and perceptually less suspicious than alteration of the non-edge areas. For the ablation study, we test the effectiveness of edge detection using CIFAR-10. We observe that the effect of edge detection as a guide is especially large at ε = 1 and 2. Detailed analysis of edge areas is provided in the supplementary S7 and S8, and the results are presented in detail in the supplementary S9.

No Approximation. The results in Fig. 6 are from the approximated version. The original version takes minutes on average to modify a single CIFAR-10 image, whereas the approximated version finishes a single image in milliseconds. We compared the decoded rates of the two versions using stego images generated by Deep Steganography; at the same ε, the difference between the average decoded rates of the two versions is negligible.

Figure 6: Experimental results of our work and comparison of steganalysis methods when using the CIFAR-10 dataset. The higher the PSNR is, the better the preservation of the original cover image is. The lower the decoded rate (DC) is, the better the scrubbing degree of the hidden image is.

6 Conclusion

From the perspective of passive steganalysis, we need to create a detector targeted at a particular deep-learning steganography method whenever a new method is proposed. In this way, however, it is not easy to prevent newly proposed steganography. As deep-learning steganography methods develop further, active steganalysis methods must improve correspondingly to inhibit them beforehand. We hope that our work can be a good starting point for this development.

The goal of active steganalysis is to completely remove the secret image and change the stego image into the original cover image. This is called stego scrubbing [Moskowitz et al.2007]. In practice, this is very difficult. The development of active steganalysis should aim to produce purified stego images that are more similar to the cover images than the original stego images are. As shown in Fig. 6, our method produces purified stego images closer to the cover images than the original stego images when ε is 2.

References

  • [Amritha et al.2016] PP Amritha, M Sethumadhavan, and R Krishnan. On the removal of steganographic content from images. Defence Science Journal, 66(6):574–581, 2016.
  • [Baluja2017] Shumeet Baluja. Hiding images in plain sight: Deep steganography. In Advances in Neural Information Processing Systems, pages 2069–2079, 2017.
  • [Canny1986] John Canny. A computational approach to edge detection. IEEE Transactions on pattern analysis and machine intelligence, (6):679–698, 1986.
  • [Dong et al.2018] Shiqi Dong, Ru Zhang, and Jianyi Liu. Invisible steganography via generative adversarial network. arXiv preprint arXiv:1807.08571, 2018.
  • [Ettinger1998] J Mark Ettinger. Steganalysis and game equilibria. In International Workshop on Information Hiding, pages 319–328. Springer, 1998.
  • [Fridrich et al.2002] Jessica Fridrich, Miroslav Goljan, and Dorin Hogea. Steganalysis of jpeg images: Breaking the f5 algorithm. In International Workshop on Information Hiding, pages 310–323. Springer, 2002.
  • [Gou et al.2007] Hongmei Gou, Ashwin Swaminathan, and Min Wu. Noise features for image tampering detection and steganalysis. In 2007 IEEE International Conference on Image Processing, volume 6, pages VI–97. IEEE, 2007.
  • [Hayes and Danezis2017] Jamie Hayes and George Danezis. Generating steganographic images via adversarial training. In Advances in Neural Information Processing Systems, pages 1954–1963, 2017.
  • [Holub and Fridrich2012] Vojtech Holub and Jessica J Fridrich. Designing steganographic distortion using directional filters. In WIFS, pages 234–239, 2012.
  • [Holub et al.2014] Vojtěch Holub, Jessica Fridrich, and Tomáš Denemark. Universal distortion function for steganography in an arbitrary domain. EURASIP Journal on Information Security, 2014(1):1, 2014.
  • [Husien and Badi2015] Sabah Husien and Haitham Badi. Artificial neural network for steganography. Neural Computing and Applications, 26(1):111–116, 2015.
  • [Johnson and Jajodia1998a] Neil F Johnson and Sushil Jajodia. Exploring steganography: Seeing the unseen. Computer, 31(2), 1998.
  • [Johnson and Jajodia1998b] Neil F Johnson and Sushil Jajodia. Steganalysis of images created using current steganography software. In International Workshop on Information Hiding, pages 273–289. Springer, 1998.
  • [King2018] Ariana King. Ge engineer tied to china charged with theft of company secrets, Aug 2018.
  • [Lafferty2008] Patricia A Lafferty. Obfuscation and the steganographic active warden model. The Catholic University of America, 2008.
  • [Lerch-Hostalot and Megías2016] Daniel Lerch-Hostalot and David Megías. Unsupervised steganalysis based on artificial training sets. Engineering Applications of Artificial Intelligence, 50:45–59, 2016.
  • [Mielikainen2006] Jarno Mielikainen. Lsb matching revisited. IEEE signal processing letters, 13(5):285–287, 2006.
  • [Moskowitz et al.2007] Ira S Moskowitz, Patricia A Lafferty, and Farid Ahmed. Stego scrubbing a new direction for image steganography. In Information Assurance and Security Workshop, 2007. IAW’07. IEEE SMC, pages 119–126. IEEE, 2007.
  • [Pevnỳ et al.2010] Tomáš Pevnỳ, Tomáš Filler, and Patrick Bas. Using high-dimensional image models to perform highly undetectable steganography. In International Workshop on Information Hiding, pages 161–177. Springer, 2010.
  • [Pibre et al.2016] Lionel Pibre, P Jerome, Dino Ienco, and Marc Chaumont. Deep learning for steganalysis is better than a rich model with an ensemble classifier, and is natively robust to the cover source-mismatch. SPIE Media Watermarking, Security, and Forensics, 2016.
  • [Prewitt1970] Judith MS Prewitt. Object enhancement and extraction. Picture processing and Psychopictorics, 10(1):15–19, 1970.
  • [Salimans et al.2017] Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017.
  • [Sharma and Gupta2012] Manoj Kumar Sharma and PC Gupta. A comparative study of steganography and watermarking. International Journal of Research in IT & Management (IJRIM), 2(2):2231–4334, 2012.
  • [Shi et al.2017] Haichao Shi, Jing Dong, Wei Wang, Yinlong Qian, and Xiaoyu Zhang. Ssgan: Secure steganography based on generative adversarial networks. In Pacific Rim Conference on Multimedia, pages 534–544. Springer, 2017.
  • [Volkhonskiy et al.2017] Denis Volkhonskiy, Ivan Nazarov, Boris Borisenko, and Evgeny Burnaev. Steganographic generative adversarial networks. arXiv preprint arXiv:1703.05502, 2017.
  • [Voloshynovskiy et al.2002] Sviatoslav V Voloshynovskiy, Alexander Herrigel, Yuri B Rytsar, and Thierry Pun. Stegowall: Blind statistical detection of hidden data. In Security and Watermarking of Multimedia Contents IV, volume 4675, pages 57–69. International Society for Optics and Photonics, 2002.
  • [Wu et al.2018] Pin Wu, Yang Yang, and Xiaoqiang Li. Stegnet: Mega image steganography capacity with deep convolutional network. arXiv preprint arXiv:1806.06357, 2018.
  • [Yang et al.2018] Jianhua Yang, Kai Liu, Xiangui Kang, Edward K Wong, and Yun-Qing Shi. Spatial image steganography based on generative adversarial network. arXiv preprint arXiv:1804.07939, 2018.
  • [Yu2018] Chong Yu. Integrated steganography and steganalysis with generative adversarial networks. 2018.