TS-RNN: Text Steganalysis Based on Recurrent Neural Networks

05/30/2019 ∙ Zhongliang Yang, et al. ∙ Tsinghua University

With the rapid development of natural language processing technologies, more and more text steganographic methods based on automatic text generation have appeared in recent years. These models use the powerful self-learning and feature extraction abilities of neural networks to learn the feature expression of large amounts of normal text, and then automatically generate steganographic texts that conform to the learned statistical distribution. In this paper, we observe that the conditional probability distribution of each word in automatically generated steganographic texts is distorted once hidden information is embedded. We use Recurrent Neural Networks (RNNs) to extract these distributional differences and then classify texts into cover and stego categories. Experimental results show that the proposed model achieves high detection accuracy. Moreover, the proposed model can exploit the subtle differences in the feature distributions of texts to estimate the amount of hidden information embedded in a generated steganographic text.


I. Introduction

As concluded by Claude E. Shannon [shannon1949communication], there are three main types of information security models in cyberspace: the encryption system, the privacy system, and the concealment system. The concealment system differs from the other two: it mainly protects information security by embedding secret information into a common carrier, hiding the existence of the confidential information so that it is not easily suspected or attacked [Simmons1984The]. For information hiding, media of various forms can be adopted as the carrier, including image [fridrich2009steganography, chen2019defining], audio [yang2017sudoku, tian2015optimal], and text [xiang2014research, xiang2018linguistic, xiang2017novel]. As the most common and frequently used carrier of information interaction in people's daily lives, text has great value and significance as a carrier for hiding and transmitting secret information. Therefore, text steganography has attracted wide attention from researchers in recent years [Luo2016Text, Luo2017Text, yang2018rits, yang2018rnn, yang2018markov].

In recent years, with the rapid development of deep learning in the field of natural language processing, plenty of works related to high-quality readable text generation have appeared, including neural machine translation [Bahdanau2014Neural], dialogue systems [Shang2015Neural], and image captioning [yang2017image]. Based on these works, growing interest has been directed to text steganography by cover synthesis [Luo2017Text, fang2017generating, yang2018rnn, yang2018rits]. These methods utilize the powerful feature extraction capabilities of neural networks to analyze the statistical feature distribution of a large number of training texts, and then generate samples that conform to this distribution. Furthermore, based on the format of the generated texts, there are two different types of methods: natural text steganography [fang2017generating, yang2018rnn, yang2018rits] and special format text steganography [Luo2017Text, Luo2016Text]. The difference is that, in the process of generating steganographic texts, the special format methods combine certain syntactic rules with the learned statistical language model.

Generally, text steganalysis models first attempt to analyze and extract text features, and then analyze the differences between these features before and after steganography to determine whether a text contains secret information. Traditional text steganalysis models mainly rely on simple, hand-crafted text features, such as word frequency [Yang2010Linguistic]. However, the features they extract and analyze are very simple, which makes it difficult for them to deal with the latest steganographic text generation methods based on neural networks. Recently, some researchers have tried to analyze the high-level semantic relationships between words in texts to determine whether a text contains hidden information [yang2018ts, wen2019convolutional, 8653856].

In this paper, we first show that the conditional probability distribution of each word in automatically generated steganographic texts is distorted once hidden information is embedded. We then propose a new text steganalysis method based on Recurrent Neural Networks (RNNs), which extracts the conditional probability features of each word in a text. Based on these features, we achieve satisfactory steganalysis performance and can even estimate the amount of hidden information contained in the text.

II. TS-RNN Methodology

For a text of length $n$, we can model it as a sequence signal $S = \{w_1, w_2, \ldots, w_n\}$, where $w_i$ represents the $i$-th word. We hypothesize that once we embed hidden information in the text generation process, it is equivalent to superimposing noise on these conditional probability distributions, which will inevitably affect the probability distribution of the entire text, namely:

$$p(\tilde{S}) = \prod_{i=1}^{n}\big[\,p(\tilde{w}_i \mid \tilde{w}_1, \tilde{w}_2, \ldots, \tilde{w}_{i-1}) + \Delta_i\,\big], \qquad (1)$$

where $\tilde{S}$ represents the generated steganographic text, $\tilde{w}_i$ represents the $i$-th word in it, and $\Delta_i$ represents the disturbance caused by the embedded information on the conditional probability distribution of the $i$-th word. Therefore, our core idea is to recognize steganographic text by analyzing the differences in the statistical distribution of texts caused by the embedded hidden information.
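To make Equation (1) concrete, the following toy sketch (not from the paper; all probability values are invented) shows how per-word perturbations on the conditional probabilities accumulate into a distortion of the whole-sequence probability:

```python
# Illustrative sketch of Equation (1): a per-word perturbation delta_i on the
# conditional probabilities changes the probability of the whole sequence.
# The probability values below are made up for demonstration only.
import numpy as np

# p(w_i | w_1, ..., w_{i-1}) for a cover sentence under some language model
cond_probs_cover = np.array([0.12, 0.30, 0.08, 0.25, 0.40])

# embedding hidden information perturbs each conditional probability by delta_i
delta = np.array([0.02, -0.05, 0.03, -0.04, 0.01])
cond_probs_stego = cond_probs_cover + delta

# the sequence probability is the product of the conditional probabilities,
# so the per-word distortions accumulate over the whole text
p_cover = np.prod(cond_probs_cover)
p_stego = np.prod(cond_probs_stego)
print(p_cover, p_stego)  # the two sequence probabilities differ
```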

The biggest characteristic of the Recurrent Neural Network (RNN) is its feedback connections, which make it very suitable for modeling sequential signals. To avoid the vanishing gradient problem [Hochreiter1998The], we use Long Short-Term Memory (LSTM) [Hochreiter1997Long] units as the hidden units. An LSTM unit can be described by the following formulas:

$$\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i),\\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f),\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o),\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c),\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t,\\
h_t &= o_t \odot \tanh(c_t).
\end{aligned} \qquad (2)$$

For simplicity, we denote the transfer function of LSTM units by $f_{LSTM}(\cdot)$.
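For illustration, a minimal re-implementation of one LSTM step following the standard formulas in Equation (2) might look as follows; in practice a library cell such as torch.nn.LSTM would be used, and the parameter shapes here are assumptions:

```python
# A didactic single LSTM step with the gates stacked into W, U, b.
import torch

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step; W, U, b hold the stacked input/forget/output/candidate parameters."""
    gates = x_t @ W + h_prev @ U + b                 # shape: (4 * hidden_size,)
    i, f, o, g = gates.chunk(4, dim=-1)              # input, forget, output, candidate
    i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
    g = torch.tanh(g)
    c_t = f * c_prev + i * g                         # new cell state
    h_t = o * torch.tanh(c_t)                        # new hidden state (unit output)
    return h_t, c_t
```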

For feature extraction, we first map each word to a dense semantic space of dimension $d$, that is, $w_i \in \mathbb{R}^d$. Then, for each sentence $S$, we can represent it with a matrix $X \in \mathbb{R}^{n \times d}$, where the $i$-th row indicates the $i$-th word in sentence $S$ and $n$ is the length of the sentence, that is,

$$X = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix} \in \mathbb{R}^{n \times d}. \qquad (3)$$

In general, a recurrent neural network consists of multiple network layers, each of which has multiple LSTM units. We use $n_l$ to indicate the number of LSTM units in the $l$-th hidden layer, so the units of the $l$-th layer can be represented as $U^l = \{u^l_1, u^l_2, \ldots, u^l_{n_l}\}$.

The input of each unit $u^l_j$ at time step $t$ is the weighted sum of the elements in $U^{l-1}$, and the output value of $u^l_j$ at time step $t$ is

$$o^l_{j,t} = f_{LSTM}\Big(\sum_{i=1}^{n_{l-1}} w_{i,j}\, o^{l-1}_{i,t} + b^l_j\Big), \qquad (4)$$

where $w_{i,j}$ and $b^l_j$ are the learned weights and biases, respectively.
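A sketch of this front end, assuming an illustrative vocabulary size and unit count (only the 256-dimensional word vectors are taken from the experimental section), could look like:

```python
# Map each word to a d-dimensional vector (Equation (3)) and run one LSTM layer
# over the sentence (Equation (4)). Sizes are illustrative assumptions.
import torch
import torch.nn as nn

vocab_size, d, units = 10000, 256, 128            # assumed sizes
embedding = nn.Embedding(vocab_size, d)           # word -> dense semantic space
lstm = nn.LSTM(input_size=d, hidden_size=units, batch_first=True)

word_ids = torch.randint(0, vocab_size, (1, 20))  # a sentence of n = 20 word indices
X = embedding(word_ids)                           # matrix X in R^{n x d}
outputs, (h_n, c_n) = lstm(X)                     # per-time-step outputs of the layer
print(outputs.shape)                              # torch.Size([1, 20, 128])
```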

Previous works have shown that, within a certain range, the more layers a neural network has, the stronger its ability to extract and express features [Zeiler2013Visualizing]. We therefore stack the network with multiple layers of LSTM units, and the transfer matrix between the $(l-1)$-th layer and the $l$-th layer can be represented as a matrix $W^l$, that is,

$$W^l = \begin{bmatrix} w_{1,1} & \cdots & w_{1,n_l} \\ \vdots & \ddots & \vdots \\ w_{n_{l-1},1} & \cdots & w_{n_{l-1},n_l} \end{bmatrix} \in \mathbb{R}^{n_{l-1} \times n_l}. \qquad (5)$$

The output of the $l$-th layer at time step $t$ is then

$$O^l_t = \big[o^l_{1,t}, o^l_{2,t}, \ldots, o^l_{n_l,t}\big] = f_{LSTM}\big(O^{l-1}_t W^l + b^l\big). \qquad (6)$$

According to Equation (6), the output of the top hidden layer at time step $t$ can be regarded as a summary of all the previous words $\{w_1, w_2, \ldots, w_t\}$.
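Stacking the layers can be sketched as follows; the layer and unit counts are assumptions, since the paper's exact values are not restated here:

```python
# Stacking L hidden layers of LSTM units: the outputs of layer l-1 become the
# inputs of layer l (Equations (5)-(6)). Layer and unit counts are assumed.
import torch
import torch.nn as nn

d, units, num_layers = 256, 128, 2
stacked_lstm = nn.LSTM(input_size=d, hidden_size=units,
                       num_layers=num_layers, batch_first=True)

X = torch.randn(1, 20, d)                 # embedded sentence from the previous step
outputs, _ = stacked_lstm(X)              # outputs of the top layer
# outputs[:, t, :] summarizes the words w_1 ... w_t seen up to time step t
print(outputs[:, -1, :].shape)            # torch.Size([1, 128])
```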

Format / Lines / bpw | [meng2009linguistic]: Acc P R | [samanta2016real]: Acc P R | [din2015performance]: Acc P R | TS-RNN: Acc P R | TS-BiRNN: Acc P R
FW FL 1 0.521 0.597 0.518 0.768 0.775 0.768 0.724 0.706 0.766 0.874 0.854 0.902 0.873 0.877 0.868
2 0.600 0.671 0.588 0.868 0.877 0.868 0.843 0.840 0.848 0.968 0.974 0.962 0.964 0.966 0.962
3 0.681 0.747 0.659 0.884 0.896 0.884 0.895 0.896 0.894 0.966 0.964 0.968 0.971 0.970 0.972
4 0.705 0.769 0.681 0.872 0.885 0.861 0.899 0.892 0.908 0.989 0.986 0.992 0.993 0.992 0.994
5 0.798 0.858 0.765 0.868 0.893 0.864 0.933 0.930 0.936 0.988 0.992 0.984 0.990 0.992 0.988
EL 1 0.515 0.598 0.513 0.775 0.775 0.775 0.750 0.778 0.700 0.751 0.752 0.747 0.778 0.793 0.752
2 0.592 0.671 0.579 0.917 0.956 0.897 0.904 0.902 0.906 0.980 0.982 0.978 0.986 0.986 0.986
3 0.675 0.742 0.654 0.918 0.924 0.918 0.944 0.932 0.958 0.990 0.996 0.984 0.992 0.992 0.992
4 0.712 0.772 0.689 0.915 0.948 0.915 0.961 0.964 0.958 0.997 0.996 0.998 1.000 1.000 1.000
5 0.819 0.869 0.789 0.911 0.923 0.901 0.970 0.974 0.966 0.998 0.998 0.998 0.996 0.996 0.996
SW FL 1 0.539 0.616 0.533 0.666 0.667 0.666 0.710 0.703 0.726 0.699 0.726 0.637 0.739 0.744 0.727
2 0.619 0.691 0.604 0.898 0.901 0.898 0.918 0.927 0.908 0.970 0.961 0.980 0.973 0.963 0.984
3 0.691 0.762 0.667 0.928 0.931 0.941 0.954 0.960 0.948 0.979 0.974 0.984 0.981 0.984 0.978
4 0.730 0.796 0.703 0.925 0.930 0.898 0.973 0.972 0.974 0.990 0.994 0.986 0.994 0.996 0.992
5 0.810 0.865 0.779 0.916 0.927 0.913 0.985 0.986 0.984 0.992 0.994 0.990 0.997 0.996 0.998
EL 1 0.523 0.624 0.519 0.656 0.656 0.656 0.675 0.659 0.724 0.695 0.709 0.659 0.722 0.736 0.691
2 0.589 0.682 0.575 0.926 0.927 0.926 0.940 0.956 0.922 0.985 0.980 0.990 0.987 0.980 0.994
3 0.651 0.721 0.633 0.942 0.942 0.942 0.950 0.957 0.942 0.991 0.990 0.992 0.994 0.990 0.998
4 0.696 0.754 0.675 0.958 0.959 0.958 0.978 0.982 0.974 0.999 0.998 1.000 1.000 1.000 1.000
5 0.793 0.829 0.774 0.954 0.957 0.952 0.984 0.984 0.984 0.998 0.998 0.998 0.994 0.996 0.992
TABLE I: Results of different steganalysis methods on the special format text set.

In order to further extract the potential correlation of each word with all surrounding words (both the previous and the following words), we additionally add a reverse RNN, as shown in Figure 1. The forward RNN is more inclined to extract the correlation between each word and the previous words, while the reverse RNN mainly extracts the correlation between each word and the following words.

Fig. 1: We use a bidirectional recurrent neural network (BiRNN) to extract the potential correlation of each word to all surrounding words, and then we use these extracted correlation features to classify whether the original text has hidden data.

We use $\overrightarrow{C} = \{\overrightarrow{c}_1, \ldots, \overrightarrow{c}_n\}$ and $\overleftarrow{C} = \{\overleftarrow{c}_1, \ldots, \overleftarrow{c}_n\}$ to represent the correlation features of the words extracted by the forward RNN and the backward RNN, respectively. In order to fuse these two kinds of features, we first take the feature vectors of their last time steps and splice them together. The spliced vector is denoted as $z$:

$$z = \big[\,\overrightarrow{c}_n \,;\, \overleftarrow{c}_1\,\big]. \qquad (7)$$

Then we define a feature fusion matrix $W_F$ to fuse the features, where $d_F$ is the dimension of the fused feature vector $F$:

$$F = \sigma\big(W_F \cdot z + b_F\big), \qquad (8)$$

where $\sigma(\cdot)$ is a nonlinear activation function.

Finally, following previous works [yang2019real, yang2018clinical], we pass the fused feature vector $F$ through a softmax classifier, and the output is

$$y = \mathrm{softmax}\big(W_s \cdot F + b_s\big). \qquad (9)$$
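Putting Equations (7)-(9) together, a minimal PyTorch sketch of such a bidirectional classifier might look as follows; the sizes and the class name TSBiRNN are assumptions, and this is only an interpretation of the description above, not the authors' released code:

```python
# Sketch of a BiRNN steganalysis classifier: concatenate the last forward and
# backward states (Eq. (7)), fuse them with a learned matrix (Eq. (8)), and
# apply a softmax classifier (Eq. (9)). All sizes are illustrative.
import torch
import torch.nn as nn

class TSBiRNN(nn.Module):
    def __init__(self, vocab_size=10000, d=256, units=128, fused_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, d)
        self.birnn = nn.LSTM(d, units, batch_first=True, bidirectional=True)
        self.fusion = nn.Linear(2 * units, fused_dim)   # feature fusion matrix
        self.dropout = nn.Dropout(0.5)
        self.classifier = nn.Linear(fused_dim, 2)       # cover vs. stego

    def forward(self, word_ids):
        X = self.embedding(word_ids)
        outputs, _ = self.birnn(X)
        half = outputs.size(-1) // 2
        fwd_last = outputs[:, -1, :half]                # last state of the forward RNN
        bwd_last = outputs[:, 0, half:]                 # last state of the backward RNN
        z = torch.cat([fwd_last, bwd_last], dim=-1)     # spliced vector (Eq. (7))
        fused = torch.tanh(self.fusion(self.dropout(z)))   # fused features (Eq. (8))
        return torch.softmax(self.classifier(fused), dim=-1)  # class probabilities (Eq. (9))
```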

The output value $y$ reflects the probability that the model believes the text contains covert information. We can then set a detection threshold $\delta$, and the final detection result can be expressed as

$$\hat{y} = \begin{cases} 0, & y < \delta,\\ 1, & y \geq \delta. \end{cases} \qquad (10)$$

In other words, the model tries to predict the label (0 for normal text, 1 for stego text) of a given text.
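Continuing the TSBiRNN sketch above, the thresholding of Equation (10) could be applied as follows; the 0.5 threshold is only a common default, not necessarily the value used in the paper:

```python
# Turn the softmax output into a 0/1 label with a detection threshold.
import torch

model = TSBiRNN()                              # from the earlier sketch
word_ids = torch.randint(0, 10000, (4, 20))    # a small batch of sentences
p_stego = model(word_ids)[:, 1]                # probability of containing hidden information
labels = (p_stego >= 0.5).long()               # 0 = normal text, 1 = stego text
```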

In the process of training, we update the network parameters by the backpropagation algorithm. The loss function of the whole network consists of two parts, an error term and a regularization term, which can be described as

$$\mathrm{Loss} = -\frac{1}{N}\sum_{i=1}^{N}\big[\,y_i \log p_i + (1 - y_i)\log(1 - p_i)\,\big] + \lambda \lVert \theta \rVert_2^2, \qquad (11)$$

where $N$ is the batch size, $p_i$ represents the probability that the $i$-th sample is judged to contain covert information, $y_i$ is the actual label of the $i$-th sample, $\theta$ denotes the network parameters, and $\lambda$ is the regularization weight. To further strengthen regularization and prevent overfitting, we also adopt the dropout mechanism [krizhevsky2012imagenet].
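A possible implementation of the loss in Equation (11), reusing the model sketch above; the regularization weight l2_lambda is an assumed value:

```python
# Cross-entropy error term over the batch plus an L2 regularization term.
import torch
import torch.nn.functional as F

def ts_rnn_loss(p_stego, targets, model, l2_lambda=1e-4):
    # error term: binary cross-entropy between predicted probabilities and labels
    error = F.binary_cross_entropy(p_stego, targets.float())
    # regularization term: squared L2 norm of all trainable parameters
    reg = sum((param ** 2).sum() for param in model.parameters())
    return error + l2_lambda * reg
```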

Format / bpw | [meng2009linguistic]: Acc P R | [samanta2016real]: Acc P R | [din2015performance]: Acc P R | TS-RNN: Acc P R | TS-BiRNN: Acc P R
News 1 0.532 0.517 0.382 0.763 0.739 0.812 0.840 0.869 0.801 0.911 0.914 0.909 0.915 0.915 0.914
2 0.513 0.535 0.204 0.786 0.762 0.832 0.835 0.867 0.791 0.917 0.937 0.895 0.924 0.919 0.931
3 0.597 0.679 0.367 0.824 0.767 0.931 0.897 0.909 0.882 0.965 0.955 0.975 0.968 0.958 0.979
4 0.755 0.831 0.640 0.859 0.797 0.962 0.938 0.962 0.911 0.972 0.972 0.972 0.974 0.967 0.982
5 0.847 0.918 0.761 0.881 0.829 0.959 0.961 0.976 0.945 0.991 0.988 0.994 0.990 0.987 0.994
IMDB 1 0.577 0.642 0.345 0.767 0.779 0.744 0.787 0.829 0.722 0.906 0.951 0.856 0.910 0.960 0.855
2 0.713 0.807 0.560 0.849 0.934 0.871 0.869 0.911 0.818 0.964 0.982 0.946 0.963 0.980 0.946
3 0.840 0.925 0.741 0.900 0.877 0.931 0.916 0.944 0.885 0.970 0.981 0.959 0.973 0.984 0.962
4 0.909 0.969 0.845 0.937 0.905 0.975 0.962 0.975 0.947 0.989 0.992 0.987 0.992 0.996 0.989
5 0.909 0.989 0.828 0.929 0.921 0.940 0.977 0.987 0.966 0.996 0.998 0.994 0.996 0.998 0.994
Twitter 1 0.538 0.520 0.387 0.654 0.652 0.658 0.665 0.664 0.670 0.801 0.836 0.749 0.791 0.806 0.767
2 0.544 0.523 0.399 0.745 0.762 0.712 0.750 0.827 0.631 0.849 0.887 0.800 0.850 0.895 0.794
3 0.577 0.669 0.303 0.809 0.798 0.826 0.834 0.889 0.764 0.916 0.936 0.894 0.924 0.951 0.893
4 0.729 0.836 0.570 0.842 0.824 0.871 0.885 0.950 0.813 0.945 0.966 0.921 0.939 0.923 0.958
5 0.850 0.916 0.770 0.851 0.839 0.870 0.899 0.961 0.832 0.940 0.939 0.942 0.943 0.946 0.939
TABLE II: Results of different steganalysis methods on the natural text set.
Fig. 2: The distribution of correlation features under different information embedding rates in the feature space.
Format | [meng2009linguistic]: P R F1 | [samanta2016real]: P R F1 | [din2015performance]: P R F1 | TS-RNN: P R F1 | TS-BiRNN: P R F1
FW FL 0.258 0.297 0.245 0.465 0.473 0.453 0.465 0.472 0.476 0.699 0.698 0.698 0.724 0.722 0.722
EL 0.266 0.311 0.258 0.510 0.511 0.503 0.513 0.515 0.514 0.692 0.688 0.685 0.738 0.734 0.732
SW FL 0.261 0.292 0.253 0.493 0.492 0.485 0.496 0.501 0.498 0.679 0.674 0.674 0.700 0.699 0.697
EL 0.246 0.286 0.239 0.540 0.521 0.533 0.568 0.568 0.566 0.694 0.688 0.685 0.726 0.725 0.724
News 0.445 0.396 0.420 0.701 0.706 0.703 0.745 0.741 0.743 0.905 0.904 0.904 0.908 0.908 0.908
IMDB 0.490 0.512 0.501 0.742 0.745 0.743 0.767 0.760 0.763 0.917 0.914 0.914 0.929 0.927 0.927
Twitter 0.417 0.363 0.303 0.620 0.620 0.620 0.638 0.615 0.626 0.800 0.797 0.798 0.806 0.801 0.803
TABLE III: Results of different models in estimating the capacity of the covert information in texts.

III. Experiments and Analysis

We trained our model on the T-Steg dataset¹ (https://github.com/YangzlTHU/TS-CNN) released by Z. Yang et al. [yang2018ts]. The T-Steg dataset contains two categories of steganographic texts: special format steganographic texts (Chinese) generated by the model proposed by Luo et al. [Luo2017Text], and natural steganographic texts (English) generated by the model proposed by Fang et al. [fang2017generating]. Two different forms of steganographic Chinese poems are provided: poems with five words (FW) per line and poems with seven words (SW) per line. Each form can be further divided into two categories: poems with four lines (FL) and poems with eight lines (EL). For natural texts, T-Steg contains the most common text media on the Internet, including Twitter posts, movie reviews (IMDB), and news. Both steganographic methods can generate steganographic texts with different embedding rates by altering the number of bits hidden per word (bpw). For each type of text and each embedding rate, the T-Steg dataset contains 10,000 steganographic sentences.

The hyper-parameters of the proposed model were determined through comparison experiments. We mapped each word to a 256-dimensional vector; the numbers of hidden layers and of LSTM units per layer were set separately for TS-RNN and TS-BiRNN, and a fixed detection threshold and nonlinear activation function were used. We chose Adam [Kingma2014Adam] as the optimization method. The learning rate was initially set to 0.001, the batch size to 128, and the dropout rate to 0.5.
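Under these settings, a training step might be wired up as in the sketch below; train_loader is a placeholder for a loader over the T-Steg texts, and the model and loss come from the earlier sketches:

```python
# Training configuration as stated above: Adam, learning rate 0.001, batch size 128,
# dropout 0.5 (set inside the model). `train_loader` is a hypothetical data loader.
import torch

model = TSBiRNN()                                    # dropout of 0.5 is set inside the model
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

model.train()
for word_ids, targets in train_loader:               # assumed to yield batches of size 128
    optimizer.zero_grad()
    p_stego = model(word_ids)[:, 1]
    loss = ts_rnn_loss(p_stego, targets, model)      # Equation (11) sketch from above
    loss.backward()
    optimizer.step()
```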

III-A. Evaluation Results and Discussion

III-A1. Steganalysis Accuracy

In this section, we compare our models with three representative text steganalysis algorithms, proposed in [meng2009linguistic], [samanta2016real], and [din2015performance], respectively. We use several evaluation indicators commonly used in classification tasks to evaluate the performance of our model: precision (P), recall (R), F1-score, and accuracy (Acc). Experimental results are shown in Table I and Table II.
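These metrics can be computed, for example, with scikit-learn; y_true and y_pred below are dummy labels standing in for the ground truth and the model predictions:

```python
# Compute the reported metrics (accuracy, precision, recall, F1) with scikit-learn.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 1, 0, 1, 0]   # placeholder ground-truth labels
y_pred = [0, 1, 0, 0, 1, 1]   # placeholder model predictions
print("Acc", accuracy_score(y_true, y_pred))
print("P  ", precision_score(y_true, y_pred))
print("R  ", recall_score(y_true, y_pred))
print("F1 ", f1_score(y_true, y_pred))
```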

According to the results, we can draw the following conclusions. Firstly, compared to the other text steganalysis methods, the proposed models achieve the best detection results on all metrics, across different text formats and different embedding rates. Secondly, in Table I and Table II we notice that, in most cases, the detection performance of each model improves as the embedding rate increases. These results support our earlier conjecture: the higher the embedding rate, the more the coherence of the text semantics is damaged, and hence the easier the steganographic text is to detect.

III-A2. Embedding Rate Estimation

We use the t-Distributed Stochastic Neighbor Embedding (t-SNE) [Maaten2014Accelerating] technique to reduce the dimensionality of this feature space and visualize it, as shown in Figure 2. In this feature space, each point represents a sentence and different colors indicate different embedding rates. From Figure 2, we can clearly see that as the embedding rate of hidden information in the generated texts increases, their distribution in the feature space gradually shifts. Based on these features, our model can therefore estimate the capacity of the hidden information inside a text.
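A Figure-2-style visualization can be reproduced with scikit-learn's t-SNE; features and bpw_labels below are placeholders for the extracted feature vectors and their embedding rates:

```python
# Project fused feature vectors to 2-D with t-SNE and color the points by embedding rate.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

features = np.random.randn(500, 64)          # placeholder fused feature vectors
bpw_labels = np.random.randint(0, 6, 500)    # placeholder embedding rates (bpw)

points = TSNE(n_components=2).fit_transform(features)
plt.scatter(points[:, 0], points[:, 1], c=bpw_labels, cmap="viridis", s=8)
plt.colorbar(label="bpw")
plt.show()
```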

We mixed the texts generated at various embedding rates and then used the different models to conduct multi-classification experiments. The experimental results are shown in Table III. From Table III, we can see that our model achieves an estimation accuracy higher than 70% for the hidden information in the special format texts and higher than 90% for the natural texts, outperforming all the other models.

IV. Conclusion

In this paper, we use a bidirectional recurrent neural network (BiRNN) to extract the conditional distribution features of each word in texts. Based on the distribution of these features, the proposed model achieves nearly 100% precision and recall. Moreover, the proposed model can exploit the subtle distribution differences of these features to estimate the capacity of the hidden information inside a text, achieving state-of-the-art performance.

Acknowledgment

This work was supported in part by the National Key Research and Development Program of China under Grant SQ2018YGX210002 and the National Natural Science Foundation of China (No.U1536207, No.U1705261 and No.U1636113).

References