The concealment system is one of the three basic information security systems in cyberspace, and its defining characteristic is the strong concealment of information. While the other two information security systems, the encryption system and the privacy system, ensure information security, they also expose the existence and importance of the information, making it more vulnerable to interception and cracking attacks. The concealment system, in contrast, embeds secret information into various carriers and transmits them through public channels, hiding the very existence of the secret information so that it does not easily arouse suspicion or attract attacks. However, strong concealment can also be exploited by hackers, terrorists, and other lawbreakers for malicious purposes. Hence, designing an automatic steganography detection method has become an increasingly important and challenging task.
Usually, a concealment system can be modeled as a "Prisoners' Problem". In this model, Alice needs to transmit a secret message, drawn from the secret message space, to Bob. Alice selects a suitable cover from the cover space and embeds the secret message into it under the guidance of a hidden key drawn from the key space. After embedding the covert information, the cover becomes a stego carrier, and the set of stego carriers constitutes the hidden space. The embedding process can be expressed by the embedding function, that is:
Bob needs to extract the secret message from the received stego carrier under the guidance of the key. The extraction process can be expressed by the extraction function, namely:
In order to ensure the concealment of the secret information, it is usually required that the elements of the cover space and the hidden space are exactly the same. Generally speaking, however, the embedding function affects the probability distributions of the carriers before and after embedding. For Alice and Bob, the main purpose is to ensure the successful transmission of information without arousing Eve's suspicion, so they need to reduce the difference between the statistical distributions of carriers before and after steganography as much as possible, that is:
For Eve, on the other hand, the task is to accurately determine whether a carrier contains hidden information, so she needs to find as large a difference as possible between the statistical distributions of the carrier before and after steganography.
There are various media forms of carriers that can be used for information hiding, including image, audio [53, 169], text [197, 13], and so on. In recent years, with the popularity and development of the Internet, communication based on streaming media has developed greatly. Streaming media refers to continuous time-based media transmitted over the Internet using streaming technology, such as audio, video, or multimedia files. Voice over IP (VoIP) is one of the most popular streaming communication services on the Internet. With these emerging communication channels, more and more VoIP-based covert communication systems have appeared in recent years [46, 45, 34, 104, 169, 171, 170, 53].
Due to the transient and real-time nature of streaming media, steganography methods for streaming media differ greatly from those for static media. For example, methods based on the transform domain and spread spectrum are widely used in static audio steganography [107, 106]; however, their complexity and time consumption make them unsuitable for hiding information in streaming media. A typical streaming media packet can be represented by Figure 1. Each packet contains four data fields: audio and video data (usually compressed and coded), the IP header, the UDP header, and the RTP header. All of these fields can be used to embed secret information. Therefore, information hiding techniques for streaming media can be roughly divided into two categories: information hiding based on protocol headers [201, 146] and information hiding based on payloads [198, 170, 203, 202, 195, 171]. For the steganographic methods based on protocol headers, the secret information is mainly embedded into fields of the protocol header that are not commonly used. This type of method is simple and easy to implement, but it has a low hidden capacity and may greatly degrade the quality of the Internet service.
At present, information hiding based on payloads is the most common approach to streaming media steganography [198, 170, 203, 202, 195, 171]. These methods realize information hiding by modifying the redundant information of the payloads in streaming media, such as the speech in VoIP. According to the communication coding standard, payload-based steganographic methods in VoIP can be divided into three main families. The first family mainly targets the high-rate Pulse Code Modulation (PCM) speech coding standard G.711. The Least Significant Bit (LSB) algorithm is widely used in this family [198, 203]. These methods exploit the insensitivity of the human perception system to noise and replace the LSBs of the carrier with the secret information to realize embedding. The second and third families mainly target low-bit-rate compressed speech coding standards, such as G.729 and G.723.1. The second family exploits the uncertainty of pitch period prediction and fine-tunes the results of the pitch prediction to hide information. The third family realizes information hiding mainly by introducing the Quantization Index Modulation (QIM) algorithm to partition and encode the codebook in the speech quantization process [195, 171]. With the popularity of the Internet and the widespread use of VoIP, these covert communication methods pose an increasingly serious threat to cyberspace security. Therefore, it is of great value to study VoIP steganalysis methods and realize fast, high-performance detection of real-time speech stream signals to protect cyberspace and public security.
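The LSB replacement used by the first family can be illustrated with a minimal sketch. This is a toy illustration only, assuming integer PCM samples; the function names `lsb_embed` and `lsb_extract` are hypothetical and not from any cited method:

```python
def lsb_embed(samples, bits):
    """Replace the least significant bit of each PCM sample with one secret bit."""
    stego = list(samples)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit  # clear the LSB, then set it to the secret bit
    return stego

def lsb_extract(stego, n_bits):
    """Read the secret bits back from the least significant bits."""
    return [s & 1 for s in stego[:n_bits]]

cover = [1000, 1001, 1002, 1003]   # toy 16-bit PCM samples
secret = [1, 0, 1, 1]
stego = lsb_embed(cover, secret)
```

Each stego sample differs from its cover sample by at most one quantization step, which is what makes the change perceptually negligible.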
Compared with steganalysis methods for static carriers, steganalysis methods for streaming media are more demanding and thus more challenging. Firstly, since the signal streams are transmitted online in real time, the detection algorithm must be sufficiently efficient. This involves two requirements. On the one hand, the steganalysis model needs to perform high-performance detection on a speech signal that is as short as possible, so that once Alice and Bob are found to be transmitting secret information, Eve can terminate the communication within the shortest time after it is established. On the other hand, the steganalysis model needs a sufficiently fast judgment ability: given an input speech signal, it must complete feature extraction, analysis, and judgment in the shortest possible time. Secondly, a very important characteristic of VoIP-based covert communication is that, for both communicating parties, the capacity of the carrier is completely controllable and can be expanded arbitrarily. This means that Alice and Bob can spread the secret information over a sufficiently long speech signal to achieve covert communication at a low embedding rate. Therefore, from the perspective of steganalysis, achieving efficient, high-performance detection of streaming media with low embedding rates has always been an important research goal in the field of VoIP steganalysis.
According to the information hiding region, steganalysis methods for streaming media can also be divided into two categories: steganalysis based on network protocols and steganalysis based on payloads [35, 37, 33, 204]. Steganalysis based on network protocols is relatively easy: because each field of a network protocol is carefully designed and clearly defined, its content has obvious statistical characteristics, so once secret information is embedded in some fields of the protocol, the changes can be easily detected. For steganalysis based on payloads, since the corresponding steganographic methods must modify the carrier to embed information, embedding is essentially similar to adding noise to the carrier and will almost certainly affect its statistical distribution in some way, so that formula (3) gradually fails to hold. The corresponding steganalysis methods therefore analyze the statistical characteristics of the carrier, such as Mel-frequency features and codeword correlations [196, 193, 204] and so on [35, 33], either by manual construction or by model self-learning, compare the statistical distributions of these features before and after steganography, and then determine whether the input VoIP speech contains hidden information. These methods usually find it difficult to balance detection efficiency and detection accuracy. Some of them spend a large amount of time on feature extraction and analysis in order to obtain relatively high detection accuracy, making it difficult to meet real-time detection requirements. Other models may achieve high detection performance at high embedding rates, but their performance on VoIP speech signals with low embedding rates is unsatisfactory.
To address the two major challenges in the field of VoIP steganalysis, namely high-performance and real-time detection of speech signals with low embedding rates, in this paper we combine a sliding window detection algorithm with a Convolutional Neural Network (CNN) and propose a real-time VoIP steganalysis method based on multi-channel convolutional sliding windows (CSW). It uses multi-channel sliding detection windows to extract correlation features between frames and their different neighborhood frames in a VoIP signal. Within each sliding window, we design two feature extraction channels to extract both low-level and high-level features of the input signal. We designed a large number of experiments to verify our model from many aspects. Experimental results show that the proposed model outperforms all previous methods, especially for VoIP speech signals with low embedding rates, and achieves state-of-the-art performance.
In the remainder of this paper, Section II introduces related work, including QIM-based VoIP steganography and speech steganalysis. Section III explains the proposed method in detail. Section IV presents the experimental evaluation results and gives a comprehensive discussion. Finally, conclusions are drawn in Section V.
II Related Work
II-A QIM-based VoIP Steganography
To reduce bandwidth usage, VoIP signals are typically first compressed at a low bit rate and then transmitted. Speech signals are generated by the organs of the respiratory tract: the lungs, the glottis, and the vocal tract. When passing through the glottis, the breath exhaled from the lungs becomes a periodic excitation signal, which then passes through the vocal tract. The vocal tract can be divided into cascaded segments whose functions can be modeled as one-pole filters. Low-rate speech coding standards used by VoIP, such as G.729 and G.723, are based on the Linear Predictive Coding (LPC) technique, which uses the LPC filter to analyze and synthesize acoustic signals in the encoding and decoding processes. The LPC filter can be expressed as:
where is the (quantized) i-th order coefficient of the LPC filter.
The G.729 and G.723 standards first solve for the optimal LPC prediction coefficients of each frame, and then the LPC coefficients are converted into Line Spectrum Frequency (LSF) coefficients. G.729 and G.723 finally use three codewords to quantize the LSFs. Each codeword has a corresponding codebook with a fixed codeword space, each element of which is one codeword of that codebook. When quantizing, the encoder selects an optimal codeword from the codebook.
The Quantization Index Modulation (QIM) algorithm was first proposed by Chen and Wornell. It hides secret information by modifying the quantization vector in the media encoding process, so QIM steganography can be conducted while quantizing the LSFs. According to the basic principle of the QIM algorithm, in the vector quantization stage of speech coding, the codebook is divided into sub-codebooks, that is
QIM-based VoIP steganography encodes each sub-codebook, and the input vector is then quantized into a different sub-codebook according to the secret information. For QIM-based steganography, the maximum embedding capacity per quantization is the base-2 logarithm of the number of sub-codebooks. The basic principle of QIM is shown in Figure 2, in which the codebook is divided into four sub-codebooks, so that each quantization can embed two bits of information.
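The quantize-into-a-sub-codebook idea can be sketched on a toy scalar codebook. This is a simplified illustration, not the CNV partition used later: the eight-entry codebook, the interleaved partition, and the function names are all assumptions for the example.

```python
# Toy scalar codebook of 8 codewords, partitioned into 4 interleaved
# sub-codebooks so each sub-codebook still spans the whole value range.
codebook = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
sub_codebooks = {m: [c for i, c in enumerate(codebook) if i % 4 == m]
                 for m in range(4)}

def qim_quantize(x, message):
    """Quantize x using only the sub-codebook selected by the 2-bit message."""
    return min(sub_codebooks[message], key=lambda c: abs(c - x))

def qim_extract(codeword):
    """The receiver recovers the message from the index of the sub-codebook
    that contains the received codeword."""
    return codebook.index(codeword) % 4
```

The extra quantization distortion comes from being restricted to a quarter of the codebook; a good partition (such as CNV) bounds this distortion.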
The key point of QIM steganography is how to effectively divide the original codebook into multiple sub-codebooks. To optimize this division, Xiao et al. proposed the Complementary Neighbor Vertices (CNV) algorithm, which optimizes the upper bound of the quantization distortion introduced by embedding secret information. In the experimental part of this paper, we mainly use the CNV-QIM steganography algorithm as the detection target to test the performance of the proposed method; the proposed method can also be directly used to detect other QIM-based VoIP steganography methods.
II-B Speech Steganalysis
There has been much effort in the steganalysis of digital audio [43, 40, 41, 42]. The most common way is to extract statistical features directly from the audio and then perform classification. For example, C. Kraetzer et al. analyzed the statistical distribution of Mel-frequency features of audio. Q. Liu et al. analyzed the statistics of the high-frequency spectrum and the Mel-cepstrum coefficients of the second-order derivative, and later employed the Mel Frequency Cepstrum Coefficient (MFCC) and Markov transition features from the second-order derivative of the audio signal. Based on these features, these works used a Support Vector Machine (SVM) as the classifier to decide whether the input speech signal contains hidden information.
With the development of neural network technology, more and more speech steganalysis methods based on neural networks have appeared in recent years. For example, C. Paulin et al. first extracted Mel-frequency cepstral coefficient (MFCC) features of the input audio and then used a deep belief network (DBN) to classify them. S. Rekik et al. extracted Line Spectrum Frequency (LSF) features from the original audio and then used a Time Delay Neural Network (TDNN) to detect stego speech. These methods manually extract the statistical characteristics of the speech signals and then use neural network models for analysis and detection.
The above audio steganalysis algorithms are all aimed at static audio and cannot be directly applied to VoIP steganalysis due to the unique characteristics of VoIP speech signals. In recent years, many steganalysis methods for VoIP speech signals have appeared. For example, some works assumed that steganography in speech streams may lower speech quality, so they used Mel-frequency features of speech streams and an SVM to determine whether hidden information was present. J. Dittmann et al. successfully detected steganography in PCM-encoded speech streams by extracting first-order and second-order statistics of the stream. Y. Huang et al. proposed a sliding-window method for detecting information hiding in streaming media: they sampled the speech within an appropriately chosen time window and used the Regular Singular (RS) algorithm to determine whether LSB replacement had occurred in the speech stream. S. Li et al. used a Markov model to calculate the transition probabilities between frames and then used an SVM to classify these features; in later work, they also considered the transition probabilities within frames and obtained a better detection effect. Recently, Z. Lin et al. proposed extracting the correlations between codewords and frames in a speech stream using a two-layer Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) units and then performing steganalysis. These methods find it difficult to balance detection accuracy and detection efficiency: some spend too much time on feature extraction and analysis to improve detection performance, which reduces efficiency, while others do not analyze the features thoroughly enough in pursuit of efficiency, which in turn hurts performance. The proposed method optimizes both aspects; in the experimental part, we make a detailed comparison and analysis.
III The Proposed Method
The information-theoretic definition of steganographic security starts with the basic assumption that the cover source can be described by a probability distribution on the space of all possible covers, giving the probability of selecting each cover for hiding a message. For a given stegosystem operating on input covers and messages, the stego carriers follow a corresponding stego distribution. Any steganalysis method can be described by a map from carriers to a binary decision, where one output means the carrier is detected as cover and the other means it is detected as stego.
Figure 3 shows the overall framework of the proposed VoIP steganalysis method. Our model consists of two parts. For an input VoIP speech signal, the model first extracts speech features using multi-channel convolutional sliding windows. Then, after feature fusion, the discriminator determines whether the input speech contains concealed information by analyzing the differences in the statistical distribution of these features.
III-A Feature Analysis
VoIP speech signals are compressed and encoded sequential signals, so there may be strong correlations between codewords. For an LSF-encoded speech signal that contains a number of frames, we can express it as:
Here, the rows denote the frames in the speech segment and the entries within a row denote the codewords of that frame. If all codewords were uncorrelated, their appearances would be independent. Therefore, we would have
When the two sides of Equation (7) are not equal, we conclude that there is a correlation between the two codewords. According to Equation (7), we can define two types of codeword correlation: intra-frame correlation, between codewords within the same frame, and inter-frame correlation, between codewords of different frames. Z. Lin et al. further refined the inter-frame correlation into three categories according to the distance between frames: successive frame correlation, cross frame correlation, and cross word correlation.
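The independence test behind Equation (7) can be checked empirically on a codeword sequence. A minimal sketch, assuming frames are tuples of discrete codeword symbols (the function name and the gap statistic are illustrative choices, not from the paper):

```python
from collections import Counter

def correlation_gap(frames, i, j):
    """Largest gap between the empirical joint distribution of the codewords at
    positions i and j and the product of their marginals; a nonzero gap
    indicates the two codewords are correlated."""
    n = len(frames)
    joint = Counter((f[i], f[j]) for f in frames)
    mi = Counter(f[i] for f in frames)
    mj = Counter(f[j] for f in frames)
    return max(abs(joint[(a, b)] / n - (mi[a] / n) * (mj[b] / n))
               for a in mi for b in mj)
```

A perfectly dependent pair gives a large gap, while an independent pair gives a gap of zero (up to sampling noise on real data).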
Once additional information is embedded into these voice streams, it may influence the correlations between these codewords and change their statistical distribution. Therefore, some previous VoIP steganalysis methods tried to extract these correlation features as a basis for the steganography judgment. For example, S. Li et al. extracted the transition probability features between and within frames using Markov models. Z. Lin et al. used an LSTM neural network to extract the temporal correlations of codewords, which achieved the best detection performance to date.
III-B Feature Extraction by Convolutional Sliding Windows
The sliding window algorithm for the steganalysis of VoIP speech signals was first proposed by Y. Huang et al. They used a fixed-length sliding window to slide over the VoIP data stream. Within each sliding window, they used the Regular Singular (RS) algorithm to extract features and then determined whether LSB steganography had occurred in the VoIP speech segment. However, they only used a single-channel sliding window: the window had a fixed length, so it could only extract the correlations between each frame and a fixed range of frames. In the proposed model, in order to extract the correlations between each speech frame and adjacent frames at different distances, we propose a multi-channel sliding window detection method in which each channel uses a sliding window of a different length, as shown in Figure 4.
In the past few years, convolutional neural networks have made notable progress in fields such as computer vision [83, 81, 30, 52]. The incremental advancement of CNNs is likely to benefit the development of new technologies and inventions in other fields. A large number of studies and applications have shown that convolutional neural networks have a powerful ability in feature extraction and expression [83, 52]; they do not require hand-designed features but learn them from plenty of data. Inspired by these works, in the proposed model we design a codeword correlation extraction method based on convolutional kernels for each sliding window. Inside each sliding window, we design two feature extraction channels: a convolution channel composed of two convolutional layers that extracts high-level features, and a skip channel that passes on lower-level features.
The convolution operation extracts features from the elements in a local region of the input matrix. More specifically, given the width of a sliding window, the convolution kernel of the first convolutional layer can be expressed as a matrix of that width, that is
When the kernel is aligned with a local region of the input, the feature it extracts from that region is:
where the weights denote the contributions of the individual values in each frame, a bias term is added, and a nonlinear function is applied. Following previous works, we use the ReLU function as the nonlinear function, which is defined as
It is worth noting that this feature is extracted by only a single convolution kernel. Previous works have shown that different convolution kernels may extract features from different aspects of the input signal. Therefore, in order to extract more comprehensive features of the speech signal in the sliding window, we use multiple convolution kernels simultaneously. With several convolution kernels in the first convolutional layer, the features extracted from a local region can be expressed as
In order to further extract the codeword correlation features in different regions of the input speech signal, we slide each window from the beginning of the input speech sample to its end and calculate the features of each local region. Therefore, after the first convolutional layer, the features we extract can be expressed as follows:
where the number of features extracted by each convolution kernel depends on the input length: given the sliding step (which we usually set to one frame), it is determined by the input signal length, the window height, and the step as follows:
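The first convolutional layer can be sketched directly from these definitions. A minimal NumPy sketch, assuming a frames-by-codewords input matrix, stride 1, and randomly initialized kernels (the shapes and names are illustrative, not the paper's trained configuration):

```python
import numpy as np

def conv_layer(X, kernels, bias, stride=1):
    """Slide each kernel down the frame axis of X (frames x codewords),
    computing a ReLU-activated dot product at every position."""
    L, d = X.shape
    n_k, h, _ = kernels.shape
    T = (L - h) // stride + 1          # number of features per kernel
    out = np.zeros((n_k, T))
    for k in range(n_k):
        for t in range(T):
            window = X[t * stride: t * stride + h]
            out[k, t] = max(0.0, float(np.sum(window * kernels[k]) + bias[k]))
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # 100 frames, 3 codewords per frame
kernels = rng.normal(size=(4, 5, 3))   # 4 kernels, each spanning 5 frames
feats = conv_layer(X, kernels, np.zeros(4))
```

With stride 1, a 100-frame input and height-5 kernels yield 96 features per kernel, matching the length relation above.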
Previous work on convolutional neural networks has shown that increasing the number of convolutional layers within a certain range is beneficial for extracting higher-level signal features and for later feature analysis. However, too many layers can lead to over-fitting and reduce detection efficiency. We therefore add a second convolutional layer after the first one. The operation of the second convolutional layer is basically the same as that of the first: it also contains multiple convolution kernels, each with the same width as the first layer's output. These kernels slide from top to bottom, using convolution calculations to extract high-level features. The features extracted by the second convolutional layer can be expressed as follows:
where the two dimensions are the number of features extracted by each convolution kernel of the second layer and the number of convolution kernels in that layer.
In addition, although the convolutional kernels extract high-level features, they ignore some details of the original signal that we believe are useful for the final analysis. Therefore, referring to recent works in the field of neural networks [145, 144], we add a skip connection to pass on the low-level features of the original signal. To facilitate the fusion of these low-level features with the high-level features extracted by the convolutional layers, we define a skip matrix on the skip connection. The skip matrix mainly performs a dimensional transformation and simple feature extraction on the original signal, and can be expressed as:
Similarly, the low-level features it extracts from the original signal are
where a weight matrix and a bias term are applied, followed by a nonlinear function, which is also set to the ReLU function.
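The skip channel amounts to a single projection of the raw window. A minimal sketch, assuming the window is flattened before the skip matrix is applied (the shapes and names below are illustrative assumptions):

```python
import numpy as np

def skip_features(X, W, b):
    """Flatten the raw window X and project it through the skip matrix W
    (a dimensional transformation), then apply ReLU, as in the skip channel."""
    return np.maximum(0.0, X.flatten() @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))    # one sliding-window slice: 5 frames x 3 codewords
W = rng.normal(size=(15, 8))   # skip matrix: 15 raw values -> 8 low-level features
low = skip_features(X, W, np.zeros(8))
```

The output dimension is chosen so the low-level features can later be spliced with the convolutional features.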
III-C Feature Fusion and Steganography Determination
The model shown in Figure 5 can extract the correlations between each frame in the speech signal and the surrounding frames in different ranges. On this basis, we need to fuse the extracted features further and perform steganalysis based on their statistical characteristics. Firstly, we use a pooling layer to fuse the features extracted by the multiple convolution kernels of a single sliding channel. The pooling layer is widely used in neural network models: it reduces the number of network parameters while maintaining the overall distribution of the features, which effectively prevents over-fitting and improves the robustness of the model [83, 84]. In the proposed model, we conduct a max pooling operation on the convolutional and skip-connection features. For example, for each row of these feature matrices, the outputs are:
Usually, in order to preserve a richer feature distribution, we can use k-max pooling. k-max pooling is a generalization of the max pooling operator: instead of the single maximum value, it returns the subsequence of the k largest values in the input features. An illustration of max pooling and 2-max pooling is shown in Figure 6.
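The k-max operator can be sketched in a few lines. This is an illustrative implementation (the function name is ours); note that the kept values preserve their original order in the sequence rather than being sorted:

```python
def k_max_pooling(features, k):
    """Keep the k largest values of a feature row, preserving their original
    order in the sequence (k = 1 reduces to ordinary max pooling)."""
    if k >= len(features):
        return list(features)
    threshold = sorted(features, reverse=True)[k - 1]  # k-th largest value
    kept = []
    for v in features:
        if v >= threshold and len(kept) < k:
            kept.append(v)
    return kept
```

For example, 2-max pooling over [1, 5, 2, 4, 3] keeps [5, 4], in that order.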
Secondly, we need to fuse the features extracted from the different local parts of the input signal by the different sliding windows. To achieve this fusion, and also to further extract the correlations between frames at larger distances, we add a fully connected forward neural network. Specifically, we splice the low-level and high-level features of the different local regions extracted by a single sliding window into a complete feature vector, that is:
Then we concatenate the features extracted by the different sliding channels and obtain the complete feature expression of the input speech signal:
where the dimension of the complete expression is determined by the number of channels and the dimension of each merged feature vector.
Then we define a Feature Fusion Matrix that fuses these features into a fused feature vector of lower dimension:
We can then use the fused features to classify whether the original speech signal contains secret information. A basic idea is to calculate a linear combination of all the features. Following previous works [204, 52], we define a Detection Weight Vector (DWV) of matching length, and the linear combination is calculated as
To get a normalized output between 0 and 1, we send the value through a sigmoid function:
and the final output is
Finally, we can set a detection threshold, and then the final detection result can be expressed as
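The decision stage above, linear combination, sigmoid, and thresholding, can be sketched as follows. The default threshold of 0.5 is an assumption for the example, not a value fixed by the paper, and the function name is illustrative:

```python
import math

def detect(features, weights, bias, threshold=0.5):
    """Linear combination of the fused features followed by a sigmoid; the
    carrier is flagged as stego (1) when the probability exceeds the
    threshold, and as cover (0) otherwise."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    p = 1.0 / (1.0 + math.exp(-z))   # sigmoid squashes z into (0, 1)
    return p, int(p > threshold)
```

A strongly positive combination yields a probability near 1 and a stego verdict; a zero combination sits exactly at 0.5 and is ruled cover.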
To determine the parameters of the proposed model, we follow a supervised learning framework. During training, we update the network parameters with the backpropagation algorithm. The loss function of the whole network consists of two parts, an error term and a regularization term, and can be described as:
where the loss is averaged over a batch of VoIP signals, using the predicted probability that each sample contains covert information and the actual label of that sample. The error term in the loss function calculates the average cross entropy between the predicted probability values and the real labels. We hope that through the self-learning of the model, the prediction error becomes smaller and smaller, that is, the predictions get closer to the real labels. To strengthen regularization and prevent overfitting, we adopt the dropout mechanism and a constraint on the l2-norms of the weight vectors during training. Dropout means that during the training of a deep network, neural units are temporarily discarded from the network, i.e. set to zero, with a certain probability. This mechanism has been shown to effectively prevent neural networks from overfitting and to significantly improve model performance.
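The two-part loss can be written out as a short sketch. The regularization coefficient `lam` is a hypothetical value for illustration; the paper does not specify it here:

```python
import math

def loss(probs, labels, weights, lam=0.01):
    """Mean cross-entropy between predicted probabilities and true labels,
    plus an l2 penalty on the weight vector (lam is a hypothetical
    regularization coefficient)."""
    n = len(probs)
    ce = -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
              for p, y in zip(probs, labels)) / n
    return ce + lam * sum(w * w for w in weights)
```

Better-calibrated predictions reduce the cross-entropy term, while larger weights increase the regularization term, which is what discourages overfitting.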
IV Experiments and Analysis
In this section, we conduct several experiments and compare the results with state-of-the-art VoIP steganalysis methods to verify the validity of the proposed model. The section starts with an introduction of the dataset used in this work, followed by the structure and hyper-parameters of the model. Finally, we compare the performance of the proposed model and other state-of-the-art VoIP steganalysis models under different conditions, including different embedding rates, different durations, and so on.
IV-A Dataset Collection and Evaluation Method
We use the speech dataset published by Z. Lin et al. (https://github.com/fjxmlzn/RNN-SM) to train and test the proposed model. This dataset contains 41 hours of Chinese speech and 72 hours of English speech collected from the Internet, in PCM format with 16 bits per sample. The speech samples come from different male and female speakers and make up the cover speech dataset.
Each sample in this speech dataset is first encoded with the G.729A standard to obtain its LSF matrix. A random 0/1 bit stream is then embedded using the CNV-QIM steganography algorithm, and these samples make up the stego speech. The embedding rate is defined as the ratio of the number of embedded bits to the total embedding capacity; a lower embedding rate indicates fewer changes to the original signal stream. When conducting steganography at an a% embedding rate, we embed each frame with probability a%. In order to test the performance of the proposed model at different embedding rates (especially low ones), we applied embedding rates of 10%, 20%, 30%, …, 100% to the speech samples in the dataset. In addition to the embedding rate, the sample length is another factor that influences detection accuracy, so the samples in the speech dataset are cut into 0.1s, 0.2s, …, 10s clips. Segments of the same length are successive and non-overlapping.
For model training, we randomly choose 80% of the samples of the same language and length from the cover speech and the stego speech as positive and negative samples respectively, and use the remaining 20% for testing and validation. For example, for 0.1s clips with a 1:1 ratio of cover clips to stego clips, the training set has 2,486,708 samples and the test set contains 155,405 clips.
In the experimental part, we use several metrics to measure the detection performance of each model. Among them, accuracy calculates the proportion of true results (both true positives and true negatives) among the total number of cases examined:
where TP (True Positive) represents the number of positive samples that are predicted to be positive by the model, FP (False Positive) indicates the number of negative samples predicted to be positive, FN (False Negative) denotes the number of positive samples predicted to be negative, and TN (True Negative) represents the number of negative samples predicted to be negative.
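The accuracy metric follows directly from these four counts. A minimal sketch over 0/1 label vectors (the function name is ours):

```python
def accuracy(y_true, y_pred):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN), computed from the
    confusion-matrix counts of 0/1 labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return (tp + tn) / (tp + tn + fp + fn)
```

With a 1:1 cover/stego split as in the dataset above, a chance-level detector scores 0.5.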
IV-B Experimental Setting and Training Details
Almost all the parameters in the proposed model can be obtained through training, but some hyper-parameters still need to be determined, such as the number of channels and the number of convolutional layers. Generally, increasing these hyper-parameters enhances the network's representation ability, but it may also increase the possibility of overfitting and reduce detection performance.
To determine these hyper-parameters, we designed multiple sets of comparative experiments. Table I shows nine different model settings, whose components differ slightly from the final proposed model (model a). Figure 7 shows the detection accuracy of these nine variant models on 10s samples with a 10% embedding rate.
| Model | Description |
|-------|-------------|
| a | The full proposed model. |
| b | Remove the skip connection. |
| c | Replace all 2-max pooling with max pooling. |
| d | All pooling layers use 2-max pooling. |
| e | All pooling layers use 3-max pooling. |
| f | Remove the first convolutional layer. |
| g | Remove the second convolutional layer. |
| h | Set the number of convolutional layers to 3. |
| i | Set the number of convolutional layers to 4. |
| j | Change the number of channels of the sliding window to 2. |
Firstly, we can see from Figure 7 that the convolutional layers have a very significant impact on the detection performance of the entire model. Comparing model a with models f and g, we see that deleting either convolutional layer greatly reduces the detection performance. But increasing the number of convolutional layers, as in models h and i, also lowers performance. This is consistent with our previous analysis: more convolutional layers do not mean better performance, because additional layers increase the risk of overfitting and thereby lower detection accuracy. Secondly, comparing model a with model b, we find it necessary to add the skip connection in each sliding window. As analyzed in the model section, the skip connection mainly passes some original features of the input signal forward; these original features contain more details and are helpful for the final steganalysis, which the experimental results confirm. Thirdly, comparing the results of model a and model j, we find that increasing the number of channels is valuable. The original single-channel sliding-window detection algorithm can only extract the correlation within a fixed neighborhood of each signal frame, whereas multi-channel sliding windows extract the correlations of different neighborhoods around each frame, so their detection performance is better. Finally, we also compared the impact of the pooling operation in the feature-fusion part, by comparing the detection results of models c, d, and e. Based on these comparative experiments, the pooling layers are finally set as follows: 2-max pooling at the end of the convolutional channel and 1-max pooling at the end of the skip-connection channel.
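For reference, k-max pooling (the operation varied in models c, d, and e of Table I) keeps the k largest activations of a feature map in their original temporal order; 1-max pooling is the k = 1 special case. A minimal sketch with a hypothetical `k_max_pool` helper:

```python
def k_max_pool(values, k):
    """k-max pooling over a 1-D feature map: keep the k largest
    activations, preserving their original temporal order."""
    # indices of the k largest entries, then restore temporal order
    top = sorted(range(len(values)), key=lambda i: values[i], reverse=True)[:k]
    return [values[i] for i in sorted(top)]
```

Keeping more than one maximum (k = 2) retains a little positional information about where strong responses occur, which is what the final configuration exploits after the convolutional channel.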
| Language | Method | Metric | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 100% |
|----------|--------|--------|-----|-----|-----|-----|-----|-----|-----|-----|-----|------|
| English | IDC | Acc (%) | 51.60 | 58.55 | 63.65 | 71.50 | 76.25 | 83.50 | 87.25 | 91.60 | 95.55 | 97.20 |
| English | SS-QCCN | Acc (%) | 54.40 | 75.45 | 92.45 | 97.35 | 99.15 | 99.60 | 100.00 | 100.00 | 99.95 | 99.30 |
| English | RNN-SM | Acc (%) | 59.64 | 92.44 | 94.56 | 96.90 | 97.76 | 98.77 | 99.24 | 99.71 | 99.79 | 98.78 |
| Chinese | IDC | Acc (%) | 52.75 | 59.25 | 65.55 | 71.40 | 78.50 | 82.60 | 89.15 | 93.60 | 96.05 | 98.05 |
| Chinese | SS-QCCN | Acc (%) | 57.35 | 75.00 | 92.00 | 98.25 | 99.50 | 99.85 | 100.00 | 99.95 | 99.90 | 99.75 |
| Chinese | RNN-SM | Acc (%) | 55.14 | 74.19 | 90.12 | 95.24 | 98.05 | 98.25 | 99.09 | 99.51 | 99.76 | 99.55 |
Finally, to balance the performance of all aspects of the model, the structure and hyper-parameters of the proposed model are set as follows. We use three detection channels with corresponding sliding window lengths of , , and , respectively. The channel with length  mainly extracts intra-frame correlation, and the latter two channels mainly extract inter-frame correlations in different neighborhoods. The first convolutional layer in each sliding window contains  convolution kernels. The widths of the second-layer convolution kernels in the three channels are , , and , and each contains  convolution kernels. The size of the skip matrix is set to . The dimension of the spliced feature vector is . After feature fusion in the fully connected layer, the dimension of the feature vector is . During model training, in order to strengthen regularization and prevent overfitting, we adopt the dropout mechanism . We choose Adam  as the optimization method. The learning rate is initially set to 0.001, the batch size to 256, and the dropout rate to 0.5.
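Of the training choices above, dropout is the easiest to mis-state, so the sketch below shows the standard inverted-dropout formulation at our rate of 0.5 (a generic illustration, not our training code): surviving units are scaled by 1/(1 − rate), so the expected activation is unchanged and no rescaling is needed at test time.

```python
import random

def dropout(vector, rate=0.5, train=True, rng=None):
    """Inverted dropout: during training, zero each unit with probability
    `rate` and scale survivors by 1/(1 - rate); at test time, identity."""
    if not train or rate == 0.0:
        return list(vector)
    rng = rng or random.Random(0)
    keep = 1.0 - rate
    return [v / keep if rng.random() < keep else 0.0 for v in vector]
```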
IV-C Evaluation Results and Discussion
To objectively reflect the performance of the proposed model, in this section we choose three state-of-the-art VoIP steganalysis algorithms, IDC , SS-QCCN , and RNN-SM , as our baselines. IDC, proposed in , extracts the transition probabilities between codewords in inter frames. SS-QCCN  further takes the transition probabilities in intra frames into consideration. Both of these models use SVM as the classifier. RNN-SM  uses a 2-layer Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) units to extract codeword correlation features in VoIP streams, and then uses a feature classification model to classify those correlation features into cover-speech and stego-speech categories.
IV-C1 Performance Under Different Embedding Rates
The embedding rate (ER) is defined as the ratio of the number of embedded bits to the whole embedding capacity. In general, when the embedding rate is small, the difference in the statistical distribution of the carrier before and after steganography is small, making it easier to satisfy formula (3) and more difficult to detect. In reality, Alice and Bob may spread the secret information over a long time frame, reducing the average embedding rate to ensure the concealment of their communication. Therefore, effective steganographic detection of VoIP speech signals at low embedding rates has long been a very challenging but also very realistic research goal. We tested the detection performance of each model at different embedding rates when the length of the speech samples is 10s; the results are shown in Table 2, from which we can draw the following conclusions. Firstly, as the embedding rate increases, the detection accuracy of each model increases, which is consistent with our previous analysis. To reflect this visually, we plot the detection accuracy of each model at different embedding rates in Figure 8. Compared to the other VoIP steganalysis methods, the proposed model achieves the best detection performance in most cases, across different languages and different embedding rates.
Secondly, we find that when the embedding rate is relatively high (for example, 80%), existing models can detect steganography in VoIP speech signals effectively, with accuracies above 90%. However, when the embedding rate is very low, such as only 10%, existing models show unsatisfactory detection performance, and the proposed model shows great advantages over them (English: Ours (83.48%) vs. RNN-SM  (59.64%); Chinese: Ours (77.18%) vs. RNN-SM  (59.14%)). These results indicate that the proposed model extracts the differences in the statistical feature distributions of VoIP speech signals before and after steganography more effectively than other methods: even when the embedding rate is very low and the differences in statistical features are not obvious, it can still detect with high accuracy. We use t-Distributed Stochastic Neighbor Embedding (t-SNE) to reduce the dimensionality of and visualize the VoIP speech signal features (vector in Equation (21)) extracted by the proposed model; the results are shown in Figure 9. In Figure 9, each point represents an input VoIP speech signal, the blue points indicate cover speeches, and the green points indicate stego speeches. We can see that when the embedding rate is only 10%, the cover speech and the stego speech have a large overlapping area in the feature space, which makes it difficult for steganalysis models to distinguish them. However, as the embedding rate increases, the distributions of the cover speech and the stego speech in the feature space gradually separate, and when the embedding rate exceeds 40%, a clear boundary can be seen. The results in Figure 9 intuitively reflect the proposed model's ability to extract and analyze the features of steganographic speech at different embedding rates.
Thirdly, from Table 2, we also notice that in most cases the accuracy on English speeches is higher than that on Chinese speech samples at the same embedding rate. This phenomenon may be explained by the different characteristics of the two languages: English is composed of 20 vowels and 28 consonants, whereas Chinese has 412 kinds of syllables. This diversity makes the correlations between codewords in Chinese more complicated, and it is therefore more difficult to perform steganographic detection.
IV-C2 Performance Under Different Clip Lengths
| Language | Method | Acc (%) by sample length, shortest (0.1s) → longest (10s) |
|----------|--------|-----------------------------------------------------------|
| English | IDC | 85.40, 88.00, 88.50, 89.25, 90.10, 91.45, 91.40, 92.40, 92.95, 93.70 |
| English | SS-QCCN | 82.00, 88.85, 92.15, 95.00, 95.70, 96.15, 96.25, 96.90, 96.90, 98.00 |
| English | RNN-SM | 90.40, 95.50, 97.38, 97.81, 98.16, 98.23, 98.38, 98.48, 98.49, 98.54 |
| Chinese | IDC | 86.80, 88.65, 90.20, 90.50, 91.20, 92.25, 93.10, 94.25, 94.70, 94.05 |
| Chinese | SS-QCCN | 81.20, 90.05, 93.75, 95.25, 96.50, 97.45, 97.60, 98.30, 98.10, 98.50 |
| Chinese | RNN-SM | 90.91, 95.91, 97.03, 97.72, 98.09, 98.12, 98.51, 98.69, 99.06, 98.86 |
Compared with steganalysis methods for static carriers, since VoIP speech signals are transmitted online in real time, VoIP steganalysis is usually required to achieve sufficiently high detection accuracy on sufficiently short samples. Therefore, we tested the performance of each model for different speech lengths. Table 3 shows the detection performance of each model for different speech lengths when the embedding rate is 100%. From the results in Table 3, we first notice that as the speech length grows, the detection accuracy of each model gradually improves. The reason might be that codeword correlations exist between frames at various distances in the speech signal, and in general the correlations between nearby frames are stronger than those between distant frames. Therefore, once covert information is embedded in the speech signal, it first affects the correlations between distant frames, which are easier to observe in longer speech signals. Shorter speech signals have stronger codeword correlations that are less likely to be corrupted by the embedded secret information, so they are more difficult to detect. Secondly, the experimental results also show that the detection performance of the proposed model is superior to the other models in most cases. In particular, when the length is short, for example only 0.1 seconds, the proposed method achieves detection accuracies of 91.59% (English) and 91.84% (Chinese), exceeding all previous models. This means that if Alice and Bob conduct covert communication using VoIP, Eve can judge with more than 91% confidence whether they are transmitting secret information within 0.1 seconds after their call starts. This can greatly help Eve maintain network security, which has very important practical significance.
Combining the previous two experiments, we further tested the performance of each model under short durations and low embedding rates. Table 4 lists the detection results of each method when the clip lengths are 0.1s, 0.3s, 0.5s, and 1s and the embedding rate ranges from 10% to 40%. From Table 4 we can draw the same conclusions as in the previous two sets of experiments: as the embedding rate and speech length increase, the detection performance of each model gradually improves. At the same time, we also notice that effective steganalysis of VoIP speech signals with an extremely short length (less than 1 second) and a low embedding rate (less than 30%) is still very challenging. Nevertheless, our method makes remarkable progress compared with previous methods, and we believe it has important value and significance.
IV-C3 Time Efficiency for Different Clip Lengths
As analyzed before, VoIP speech signals are transmitted online in real time, so a VoIP steganalysis model is usually required to be efficient enough to meet real-time requirements. In the previous experiments, we tested the performance of each model for different speech signal lengths (Table 3). Here we further test the detection efficiency of the proposed model. Both IDC  and SS-QCCN  depend on the SVM algorithm and take too much time to extract features from the VoIP speech stream, so they are not suitable for online real-time detection. The RNN-SM model proposed by Z. Lin et al.  also supports fast detection of VoIP streams, and since both RNN-SM and our model are neural network models, we mainly compare the efficiency of our model against RNN-SM. We tested the detection efficiency of these two models for different speech lengths in the same environment. The results are shown in Table 5 and Figure 10.
As can be seen from Table 5 and Figure 10, firstly, as the length of the speech increases, the detection time required by both models grows almost linearly. Secondly, we also note that the detection time of our model grows with sample length at a much slower rate than that of the RNN-SM model. For example, when the sample length is only 0.1 seconds, there is little difference in detection time between the two models (Ours: 0.519±0.147 ms vs. RNN-SM : 0.591±0.125 ms). When the sample length increases to 10 seconds, the detection time required by RNN-SM is more than 13 times that of the proposed model (Ours: 2.876±0.784 ms vs. RNN-SM : 38.645±3.595 ms). This is because the RNN-SM model iterates frame by frame when extracting inter-frame correlation features, whereas our model extracts the correlation features of all frames in a sliding window at one time, so its time cost is lower. This part of the experiment further proves the high efficiency of the proposed model, which enables almost real-time steganalysis of VoIP speech signals and thus has strong practical significance.
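The efficiency gap can be seen in miniature by contrasting the two computation patterns. In the toy sketch below (hypothetical `rnn_features` and `window_features` helpers with made-up arithmetic), each RNN step depends on the previous hidden state and must run sequentially, while every sliding-window feature depends only on the frames inside its window and can therefore be computed independently, and hence in parallel.

```python
def rnn_features(frames):
    """Frame-by-frame recurrence (RNN-SM style): step t depends on the
    hidden state of step t-1, so the T steps cannot be parallelized."""
    h, out = 0.0, []
    for x in frames:
        h = 0.5 * h + x          # toy recurrence
        out.append(h)
    return out

def window_features(frames, width=3):
    """Sliding-window extraction (the style of our model): each window's
    feature depends only on the frames inside it, so all windows are
    mutually independent."""
    return [sum(frames[i:i + width]) / width
            for i in range(len(frames) - width + 1)]
```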
To address the two major challenges in the field of VoIP steganalysis, namely high-performance and real-time detection of speech signals with low embedding rates, this paper combines a sliding-window detection algorithm with Convolutional Neural Networks (CNNs) and proposes a real-time VoIP steganalysis method based on multi-channel convolutional sliding windows (CSW). It uses multi-channel sliding detection windows to extract correlation features between frames and their different neighborhood frames in a VoIP signal. Within each sliding window, we design two feature-extraction channels to extract both low-level and high-level features of the input signal. We conducted extensive experiments to verify our model from many aspects. The experimental results show that the proposed model outperforms all previous methods and achieves state-of-the-art performance, especially at low embedding rates. In addition, we tested the detection efficiency of the proposed model, and the results show that it achieves almost real-time detection of VoIP speech signals. We hope that this paper will serve as a reference guide to help researchers design and implement better VoIP steganalysis methods.
This research is supported by the National Key R&D Program (SQ2018YGX210002) and the National Natural Science Foundation of China (No. U1536207 and No. U1636113).
-  Z. Zhou, H. Sun, R. Harit, X. Chen, and X. Sun, “Coverless image steganography without embedding,” in International Conference on Cloud Computing and Security. Springer, 2015, pp. 123–132.
-  Y. Huang, S. Tang, C. Bao, and Y. J. Yip, “Steganalysis of compressed speech to detect covert voice over internet protocol channels,” Iet Information Security, vol. 5, no. 1, pp. 26–32, 2011.
-  Z. Lin, Y. Huang, and J. Wang, “Rnn-sm: Fast steganalysis of voip streams using recurrent neural network,” IEEE Transactions on Information Forensics & Security, vol. PP, no. 99, pp. 1–1, 2018.
-  A. W. Bitar, R. Darazi, J.-F. Couchot, and R. Couturier, “Blind digital watermarking in pdf documents using spread transform dither modulation,” Multimedia Tools and Applications, vol. 76, no. 1, pp. 143–161, 2017.
-  M. Yedroudj, M. Chaumont, and F. Comby, “How to augment a small learning set for improving the performances of a cnn-based steganalyzer?” Electronic Imaging, vol. 2018, no. 7, pp. 1–7, 2018.
-  P. Meng, L. Hang, W. Yang, Z. Chen, and H. Zheng, Linguistic Steganography Detection Algorithm Using Statistical Language Model. IEEE Computer Society, 2009.
-  Z. Yu, L. Huang, Z. Chen, L. Li, X. Zhao, and Y. Zhu, “Steganalysis of synonym-substitution based natural language watermarking,” International Journal of Multimedia & Ubiquitous Engineering, vol. 4, 2012.
-  N. Gupta and N. Sharma, “Dwt and lsb based audio steganography,” in Optimization, Reliabilty, and Information Technology (ICROIT), 2014 International Conference on. IEEE, 2014, pp. 428–431.
-  M. Topkara, U. Topkara, and M. J. Atallah, “Information hiding through errors: a confusing approach,” in Security, Steganography, and Watermarking of Multimedia Contents IX, vol. 6505. International Society for Optics and Photonics, 2007, p. 65050V.
-  F. A. Petitcolas, R. J. Anderson, and M. G. Kuhn, “Information hiding-a survey,” Proceedings of the IEEE, vol. 87, no. 7, pp. 1062–1078, 1999.
-  G. Kipper, Investigator’s guide to steganography. crc press, 2003.
-  Z. Yang, P. Zhang, M. Jiang, Y. Huang, and Y.-J. Zhang, “Rits: Real-time interactive text steganography based on automatic dialogue model,” in International Conference on Cloud Computing and Security. Springer, 2018, pp. 253–264.
-  Z. Zhou, Y. Mu, and Q. J. Wu, “Coverless image steganography using partial-duplicate image retrieval,” Soft Computing, pp. 1–12, 2018.
-  J.-F. Couchot, R. Couturier, and C. Guyeux, “Stabylo: steganography with adaptive, bbs, and binary embedding at low cost,” annals of telecommunications-annales des télécommunications, vol. 70, no. 9-10, pp. 441–449, 2015.
-  I. Cox, M. Miller, J. Bloom, J. Fridrich, and T. Kalker, Digital watermarking and steganography. Morgan Kaufmann, 2007.
-  M. Barni and F. Bartolini, Watermarking systems engineering: enabling digital assets security and other applications. CRC Press, 2004.
-  Y. Rubner, C. Tomasi, and L. J. Guibas, “The earth mover’s distance as a metric for image retrieval,” International journal of computer vision, vol. 40, no. 2, pp. 99–121, 2000.
-  Q. Le and T. Mikolov, “Distributed representations of sentences and documents,” in International Conference on Machine Learning, 2014, pp. 1188–1196.
-  C. E. Shannon, “Communication theory of secrecy systems,” Bell Labs Technical Journal, vol. 28, no. 4, pp. 656–715, 1949.
-  H. Kwon, Y. Kim, S. Lee, and J. Lim, “A tool for the detection of hidden data in microsoft compound document file format,” in International Conference on Information Science and Security, 2008, pp. 141–146.
-  M. Chapman and G. Davida, “Hiding the hidden: A software system for concealing ciphertext as innocuous text,” in International Conference on Information and Communications Security. Springer, 1997, pp. 335–345.
-  S. Xin-guang and L. Hui, “A steganalysis method based on the distribution of space characters,” in Communications, Circuits and Systems Proceedings, 2006 International Conference on, vol. 1. IEEE, 2006, pp. 54–56.
-  L. Li, L. Huang, X. Zhao, W. Yang, and Z. Chen, “A statistical attack on a kind of word-shift text-steganography,” in Intelligent Information Hiding and Multimedia Signal Processing, 2008. IIHMSP’08 International Conference on. IEEE, 2008, pp. 1503–1507.
-  F. Li, X. Zhang, B. Chen, and G. Feng, “Jpeg steganalysis with high-dimensional features and bayesian ensemble classifier,” IEEE Signal Processing Letters, vol. 20, no. 3, pp. 233–236, 2013.
-  C. Cachin, “An information-theoretic model for steganography,” Information and Computation, vol. 192, no. 1, pp. 41–56, 2004.
-  G. J. Simmons, “The prisoners’ problem and the subliminal channel,” Advances in Cryptology Proc Crypto, pp. 51–67, 1984.
-  L. Bernaille and R. Teixeira, “Early recognition of encrypted applications,” in International Conference on Passive and Active Network Measurement, 2007, pp. 165–175.
-  J. C. Judge, “Steganography: past, present, future,” Lawrence Livermore National Lab., CA (US), Tech. Rep., 2001.
-  Z. Yang, Y.-J. Zhang, S. ur Rehman, and Y. Huang, “Image captioning with object detection and localization,” in International Conference on Image and Graphics. Springer, 2017, pp. 109–118.
-  T. Fang, M. Jaggi, and K. Argyraki, “Generating steganographic text with lstms,” arXiv preprint arXiv:1705.10742, 2017.
-  A. Shniperov and K. Nikitina, “A text steganography method based on markov chains,” Automatic Control and Computer Sciences, vol. 50, no. 8, pp. 802–808, 2016.
-  Y. Huang, S. Tang, and Y. Zhang, “Detection of covert voice-over internet protocol communications using sliding window-based steganalysis,” IET communications, vol. 5, no. 7, pp. 929–936, 2011.
-  E. Xu, B. Liu, L. Xu, Z. Wei, B. Zhao, and J. Su, “Adaptive voip steganography for information hiding within network audio streams,” in Network-Based Information Systems (NBiS), 2011 14th International Conference on. IEEE, 2011, pp. 612–617.
-  J. Dittmann, D. Hesse, and R. Hillert, “Steganography and steganalysis in voice-over ip scenarios: operational aspects and first experiences with a new steganalysis tool set,” in Security, Steganography, and Watermarking of Multimedia Contents VII, vol. 5681. International Society for Optics and Photonics, 2005, pp. 607–619.
-  S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, 2015.
-  C. Kraetzer and J. Dittmann, “Pros and cons of mel-cepstrum based audio steganalysis using svm classification,” in International Workshop on Information Hiding. Springer, 2007, pp. 359–377.
-  J. Fridrich, M. Goljan, and R. Du, “Detecting lsb steganography in color, and gray-scale images,” IEEE multimedia, vol. 8, no. 4, pp. 22–28, 2001.
-  S. Rekik, S.-A. Selouani, D. Guerchi, and H. Hamam, “An autoregressive time delay neural network for speech steganalysis,” in 2012 11th international conference on information science, signal processing and their applications (ISSPA). IEEE, 2012, pp. 54–58.
-  C. Paulin, S.-A. Selouani, and E. Hervet, “Audio steganalysis using deep belief networks,” International Journal of Speech Technology, vol. 19, no. 3, pp. 585–591, 2016.
-  C. Kraetzer and J. Dittmann, “Mel-cepstrum-based steganalysis for voip steganography,” in Security, steganography, and watermarking of multimedia contents IX, vol. 6505. International Society for Optics and Photonics, 2007, p. 650505.
-  Q. Liu, A. H. Sung, and M. Qiao, “Derivative-based audio steganalysis,” ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), vol. 7, no. 3, p. 18, 2011.
-  ——, “Temporal derivative-based spectrum and mel-cepstrum audio steganalysis,” IEEE Transactions on Information Forensics and Security, vol. 4, no. 3, pp. 359–368, 2009.
-  J. C. Pelaez, “Using misuse patterns for voip steganalysis,” in Database and Expert Systems Application, 2009. DEXA’09. 20th International Workshop on. IEEE, 2009, pp. 160–164.
-  H. Tian, K. Zhou, H. Jiang, Y. Huang, J. Liu, and D. Feng, “An adaptive steganography scheme for voice over ip,” in Circuits and Systems, 2009. ISCAS 2009. IEEE International Symposium on. IEEE, 2009, pp. 2922–2925.
-  M. Hamdaqa and L. Tahvildari, “Relack: a reliable voip steganography approach,” in Secure Software Integration and Reliability Improvement (SSIRI), 2011 Fifth International Conference on. IEEE, 2011, pp. 189–197.
-  B. Goode, “Voice over internet protocol (voip),” Proceedings of the IEEE, vol. 90, no. 9, pp. 1495–1517, 2002.
-  W. Dai, Y. Yu, and B. Deng, “Bintext steganography based on markov state transferring probability,” in Proceedings of the 2nd International Conference on Interaction Sciences: Information Technology, Culture and Human. ACM, 2009, pp. 1306–1311.
-  B. Murphy and C. Vogel, “The syntax of concealment: reliable methods for plain text information hiding,” Proc Spie, 2007.
-  L. Shang, Z. Lu, and H. Li, “Neural responding machine for short-text conversation,” pp. 52–58, 2015.
-  Y. Luo, Y. Huang, F. Li, and C. Chang, “Text steganography based on ci-poetry generation using markov chain model,” Ksii Transactions on Internet & Information Systems, vol. 10, no. 9, pp. 4568–4584, 2016.
-  Z. Yang, Y. Huang, Y. Jiang, Y. Sun, Y.-J. Zhang, and P. Luo, “Clinical assistant diagnosis for electronic medical record based on convolutional neural network,” Scientific reports, vol. 8, no. 1, p. 6329, 2018.
-  Z. Yang, X. Peng, and Y. Huang, “A sudoku matrix-based method of pitch period steganography in low-rate speech coding,” in International Conference on Security and Privacy in Communication Systems. Springer, 2017, pp. 752–762.
-  A. J. Figueredo and P. S. A. Wolf, “Assortative pairing and life history strategy - a cross-cultural study.” Human Nature, vol. 20, pp. 317–330, 2009.
-  S. M. Kang and P. W. Wagacha, “Extracting diagnosis patterns in electronic medical records using association rule mining,” International Journal of Computer Applications, vol. 108, no. 15, 2014.
-  P. Nigam, “Applying deep learning to icd-9 multi-label classification from medical records,” 2016.
-  Z. Chen, L. Huang, Z. Yu, W. Yang, L. Li, X. Zheng, and X. Zhao, “Linguistic steganography detection using statistical characteristics of correlations between words,” in International Workshop on Information Hiding. Springer, 2008, pp. 224–235.
-  B. Chen, W. Luo, and H. Li, “Audio steganalysis with convolutional neural network,” in Proceedings of the 5th ACM Workshop on Information Hiding and Multimedia Security. ACM, 2017, pp. 85–90.
-  Y. Qian, J. Dong, W. Wang, and T. Tan, “Learning and transferring representations for image steganalysis using convolutional neural network,” in Image Processing (ICIP), 2016 IEEE International Conference on. IEEE, 2016, pp. 2752–2756.
-  S. Wu, S.-h. Zhong, and Y. Liu, “A novel convolutional neural network for image steganalysis with shared normalization,” arXiv preprint arXiv:1711.07306, 2017.
-  D. Bashkirova, “Convolutional neural networks for image steganalysis,” BioNanoScience, vol. 6, no. 3, pp. 246–248, 2016.
-  R. Din, S. A. M. Yusof, A. Amphawan, H. S. Hussain, H. Yaacob, N. Jamaludin, and A. Samsudin, “Performance analysis on text steganalysis method using a computational intelligence approach,” in Proceeding of International Conference on Electrical Engineering, Computer Science and Informatics (EECSI 2015), Palembang, Indonesia, 2015, pp. 19–20.
-  S. Samanta, S. Dutta, and G. Sanyal, “A real time text steganalysis by using statistical method,” in Engineering and Technology (ICETECH), 2016 IEEE International Conference on. IEEE, 2016, pp. 264–268.
-  C. M. Taskiran, M. Topkara, and E. J. Delp, “Attacks on lexical natural language steganography systems,” Proceedings of SPIE - The International Society for Optical Engineering, vol. 6072, pp. 607 209–607 209–9, 2006.
-  R. Din, T. Z. Tuan Muda, P. Lertkrai, M. N. Omar, A. Amphawan, and F. A. Aziz, “Text steganalysis using evolution algorithm approach.” 11th WSEAS International Conference on Information Security and Privacy (ISP’12), 2012.
-  R. H. A. Rauf and N. Jamal, “Feasibility of text visualization in text steganalysis.” in SoMeT, 2014, pp. 103–115.
-  L. Xiang, X. Sun, G. Luo, and B. Xia, “Linguistic steganalysis using the features derived from synonym frequency,” Multimedia Tools & Applications, vol. 71, no. 3, pp. 1893–1911, 2014.
-  H. Yang and X. Cao, “Linguistic steganalysis based on meta features and immune mechanism,” Chinese Journal of Electronics, vol. 19, no. 4, pp. 661–666, 2010.
-  L. V. Lita, S. Yu, S. Niculescu, and J. Bi, “Large scale diagnostic code classification for medical pati ent records,” 2008.
-  P. Jackson, “Introduction to expert systems, 3rd edition,” 1999.
-  J. Medori, “Machine learning and features selection for semi-automatic icd-9-cm encoding,” in NAACL HLT 2010 Second Louhi Workshop on Text and Data Mining of Health Documents, 2010, pp. 84–89.
-  B. Ribeiro-Neto, A. H. F. Laender, and L. R. S. D. Lima, “An experimental study in automatically categorizing medical documents,” Journal of the Association for Information Science & Technology, vol. 52, no. 5, p. 391–401, 2001.
-  Junyi-Sun, “Jieba,” https://github.com/fxsjy/jieba.
-  P. Salvaneschi, M. Cadei, and M. Lazzari, “Applying ai to structural safety monitoring and evaluation,” IEEE Expert, vol. 11, no. 4, pp. 24–34, 2002.
-  P. Salvaneschi, A. Masera, M. Lazzari, and S. Lancini, “Diagnosing ancient monuments with expert software,” Structural Engineering International, vol. 7, no. 4, pp. –, 1997.
-  J. P. Pestian, C. Brew, D. J. Hovermale, N. Johnson, and K. B. Cohen, “A shared task involving multi-label classification of clinical free text,” in The Workshop on Bionlp 2007: Biological, Translational, and Clinical Language Processing, 2007, pp. 97–104.
-  M. Peleg, S. Keren, and Y. Denekamp, “Mapping computerized clinical guidelines to electronic medical records: Knowledge-data ontological mapper (kdom),” Journal of biomedical informatics, vol. 41, no. 1, pp. 180–201, 2008.
-  X. Bouthillier, K. Konda, P. Vincent, and R. Memisevic, “Dropout as data augmentation,” Computer Science, 2015.
-  N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
-  Y. Kim, “Convolutional neural networks for sentence classification,” arXiv preprint arXiv:1408.5882, 2014.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
-  W. F. Stewart, N. R. Shah, M. J. Selna, R. A. Paulus, and J. M. Walker, “Bridging the inferential gap: The electronic health record and clinical evidence,” Health Affairs, vol. 26, no. 2, pp. w181–91, 2007.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in International Conference on Neural Information Processing Systems, 2012, pp. 1097–1105.
-  Y. L. Boureau, J. Ponce, and Y. Lecun, “A theoretical analysis of feature pooling in visual recognition,” in International Conference on Machine Learning, 2010, pp. 111–118.
-  W. V. Melle, “Mycin: a knowledge-based consultation program for infectious disease diagnosis †,” International Journal of Man-Machine Studies, vol. 10, no. 3, pp. 313–322, 1978.
-  P. Ramnarayan, G. Kulkarni, A. Tomlinson, and J. Britto, “Isabel: a novel internet-delivered clinical decision support system,” 2004.
-  L. J. Bisson, J. T. Komm, G. A. Bernas, M. S. Fineberg, J. M. Marzo, M. A. Rauh, R. J. Smolinski, and W. M. Wind, “Accuracy of a computer-based diagnostic program for ambulatory patients with knee pain.” American Journal of Sports Medicine, vol. 42, no. 10, pp. 2371–6, 2014.
-  G. O. Barnett, J. J. Cimino, J. A. Hupp, and E. P. Hoffer, “Dxplain. an evolving diagnostic decision-support system.” Jama, vol. 258, no. 1, p. 67, 1987.
-  H. R. Warner, “Iliad as an expert consultant to teach differential diagnosis,” in Symposium on Computer Application, 1988, pp. 371–376.
-  R. A. Miller and F. E. Masarie Jr., “Use of the quick medical reference (qmr) program as a tool for medical education,” Methods of Information in Medicine, vol. 28, no. 4, p. 340, 1989.
-  F. T. de Dombal, D. J. Leaper, J. R. Staniland, A. P. Mccann, and J. C. Horrocks, “Computer-aided diagnosis of acute abdominal pain,” Br Med J, vol. 2, no. 5809, pp. 9–13, 1972.
-  R. Hillestad, J. Bigelow, A. Bower, F. Girosi, R. Meili, R. Scoville, and R. Taylor, “Can electronic medical record systems transform health care? potential health benefits, savings, and costs,” Health Affairs, vol. 24, no. 5, p. 1103, 2005.
-  H. Tang and J. H. K. Ng, “Googling for a diagnosis—use of google as a diagnostic aid: internet based study,” Bmj, vol. 333, no. 7579, p. 1143, 2006.
-  R. W. White and E. Horvitz, “Cyberchondria: studies of the escalation of medical concerns in web search,” ACM Transactions on Information Systems, vol. 27, no. 4, pp. 1–37, 2009.
-  R. Collobert, J. Weston, M. Karlen, K. Kavukcuoglu, and P. Kuksa, “Natural language processing (almost) from scratch,” Journal of Machine Learning Research, vol. 12, no. 1, pp. 2493–2537, 2011.
-  T. Mikolov, W. T. Yih, and G. Zweig, “Linguistic regularities in continuous space word representations,” in HLT-NAACL, 2013.
-  T. Mikolov, M. Karafiát, L. Burget, J. Cernocký, and S. Khudanpur, “Recurrent neural network based language model,” in INTERSPEECH 2010, Conference of the International Speech Communication Association, Makuhari, Chiba, Japan, September, 2010, pp. 1045–1048.
-  P. D. Turney and P. Pantel, “From frequency to meaning: vector space models of semantics,” Journal of Artificial Intelligence Research, vol. 37, no. 1, pp. 141–188, 2010.
-  Y. Bengio, P. Vincent, and C. Janvin, “A neural probabilistic language model,” Journal of Machine Learning Research, vol. 3, no. 6, pp. 1137–1155, 2003.
-  B. Gann, “Giving patients choice and control: health informatics on the patient journey,” Yearb Med Inform, vol. 7, no. 1, pp. 70–73, 2012.
-  J. Paparrizos, R. W. White, and E. Horvitz, “Screening for pancreatic adenocarcinoma using signals from web search logs: Feasibility study and results,” J Oncol Pract, 2016.
-  J. E. Groopman, How doctors think. Houghton Mifflin, 2007.
-  H. U. Prokosch and T. Ganslandt, “Perspectives for medical informatics. reusing the electronic medical record for clinical research.” Methods of Information in Medicine, vol. 48, no. 1, pp. 38–44, 2009.
-  D. M. Ballesteros L and J. M. Moreno A, “Highly transparent steganography model of speech signals using efficient wavelet masking,” Expert Systems with Applications, vol. 39, no. 10, pp. 9141–9149, 2012.
-  R. Chowdhury, D. Bhattacharyya, S. K. Bandyopadhyay, and T.-h. Kim, “A view on lsb based audio steganography,” International Journal of Security and Its Applications, vol. 10, no. 2, pp. 51–62, 2016.
-  H. Ghasemzadeh and M. H. Kayvanrad, “Toward a robust and secure echo steganography method based on parameters hopping,” in Signal Processing and Intelligent Systems Conference (SPIS), 2015. IEEE, 2015, pp. 143–147.
-  R. Kaur, A. Thakur, H. S. Saini, and R. Kumar, “Enhanced steganographic method preserving base quality of information using lsb, parity and spread spectrum technique,” in Advanced Computing & Communication Technologies (ACCT), 2015 Fifth International Conference on. IEEE, 2015, pp. 148–152.
-  H. Matsuoka, “Spread spectrum audio steganography using sub-band phase shifting,” in Intelligent Information Hiding and Multimedia Signal Processing, 2006. IIH-MSP’06. International Conference on. IEEE, 2006, pp. 3–6.
-  S. Kumar, B. Barnali, and G. Banik, “Lsb modification and phase encoding technique of audio steganography revisited,” International Journal of Advanced Research in Computer and Communication Engineering, vol. 1, no. 4, pp. 1–4, 2012.
-  D.-Y. Huang and T. Y. Yeo, “Robust and inaudible multi-echo audio watermarking,” in Pacific-Rim Conference on Multimedia. Springer, 2002, pp. 615–622.
-  D. Kirovski and H. S. Malvar, “Spread-spectrum watermarking of audio signals,” IEEE transactions on signal processing, vol. 51, no. 4, pp. 1020–1033, 2003.
-  W. Bender, D. Gruhl, N. Morimoto, and A. Lu, “Techniques for data hiding,” IBM systems journal, vol. 35, no. 3.4, pp. 313–336, 1996.
-  R. Sridevi, A. Damodaram, and S. Narasimham, “Efficient method of audio steganography by modified lsb algorithm and strong encryption key with enhanced security.” Journal of Theoretical & Applied Information Technology, vol. 5, no. 6, 2009.
-  P. Jayaram, H. Ranganatha, and H. Anupama, “Information hiding using audio steganography–a survey,” The International Journal of Multimedia & Its Applications (IJMA) Vol, vol. 3, pp. 86–96, 2011.
-  K. Gopalan, “Audio steganography using bit modification,” in Multimedia and Expo, 2003. ICME’03. Proceedings. 2003 International Conference on, vol. 1. IEEE, 2003, pp. I–629.
-  A. Westfeld, “Steganography and multilateral security,” Multilateral Security in Communications, vol. 3, pp. 223–231, 1999.
-  I. S. Kohane, “Using electronic health records to drive discovery in disease genomics.” Nature Reviews Genetics, vol. 12, no. 6, pp. 417–28, 2011.
-  P. B. Jensen, L. J. Jensen, and S. Brunak, “Mining electronic health records: towards better research applications and clinical care,” Nature Reviews Genetics, vol. 13, no. 6, p. 395, 2012.
-  L. Maaten and G. Hinton, “Visualizing non-metric similarities in multiple maps,” Machine Learning, vol. 87, no. 1, pp. 33–55, 2012.
-  P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” Journal of Machine Learning Research, vol. 11, no. 12, pp. 3371–3408, 2010.
-  A. Lally, S. Bagchi, M. A. Barborak, D. W. Buchanan, J. Chu-Carroll, D. A. Ferrucci, M. R. Glass, A. Kalyanpur, E. T. Mueller, and J. W. Murdock, “Watsonpaths: Scenario-based question answering and inference over unstructured information,” IBM Corporation, vol. 38, no. 2, pp. 59–76, 2014.
-  B. Middleton, M. A. Shwe, D. E. Heckerman, M. Henrion, E. J. Horvitz, H. P. Lehmann, and G. F. Cooper, “Probabilistic diagnosis using a reformulation of the internist-1/qmr knowledge base. ii. evaluation of diagnostic performance.” Methods of Information in Medicine, vol. 30, no. 4, pp. 256–267, 1991.
-  L. V. D. Maaten, Accelerating t-SNE using tree-based algorithms. JMLR.org, 2014.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1–9.
-  N. Kalchbrenner, E. Grefenstette, and P. Blunsom, “A convolutional neural network for modelling sentences,” arXiv preprint arXiv:1404.2188, 2014.
-  M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in European conference on computer vision. Springer, 2014, pp. 818–833.
-  Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Backpropagation applied to handwritten zip code recognition,” Neural computation, vol. 1, no. 4, pp. 541–551, 1989.
-  P. Nigam, “Applying deep learning to icd-9 multi-label classification from medical records.”
-  P. Chen, A. Barrera, and C. Rhodes, “Semantic analysis of free text and its application on automatically assigning icd-9-cm codes to patient records,” in Cognitive Informatics (ICCI), 2010 9th IEEE International Conference on. IEEE, 2010, pp. 68–74.
-  A. Stubbs and Ö. Uzuner, “Annotating risk factors for heart disease in clinical narratives for diabetic patients,” Journal of biomedical informatics, vol. 58, pp. S78–S91, 2015.
-  H. J. Murff, F. FitzHenry, M. E. Matheny, N. Gentry, K. L. Kotter, K. Crimin, R. S. Dittus, A. K. Rosen, P. L. Elkin, S. H. Brown et al., “Automated identification of postoperative complications within an electronic medical record using natural language processing,” Jama, vol. 306, no. 8, pp. 848–855, 2011.
-  I. Goldstein, A. Arzumtsyan, and Ö. Uzuner, “Three approaches to automatic assignment of icd-9-cm codes to radiology reports,” in AMIA Annual Symposium Proceedings, vol. 2007. American Medical Informatics Association, 2007, p. 279.
-  Centers for Disease Control and Prevention, “International classification of diseases, ninth revision, clinical modification (icd-9-cm),” URL: http://www.cdc.gov/nchs/about/otheract/icd9/abticd9.htm [accessed 2004 Dec 16], 2013.
-  J. P. Pestian, C. Brew, P. Matykiewicz, D. J. Hovermale, N. Johnson, K. B. Cohen, and W. Duch, “A shared task involving multi-label classification of clinical free text,” in Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing. Association for Computational Linguistics, 2007, pp. 97–104.
-  Z. Yu, L. Huang, Z. Chen, L. Li, W. Yang, and X. Zhao, “High embedding ratio text steganography by ci-poetry of the song dynasty,” Journal of Chinese Information Processing, 2009.
-  Y. Liu, J. Wang, Z. Wang, Q. Qu, and S. Yu, A Technique of High Embedding Rate Text Steganography Based on Whole Poetry of Song Dynasty. Springer International Publishing, 2016.
-  A. Desoky and M. Younis, “Chestega: chess steganography methodology,” Security & Communication Networks, vol. 2, no. 6, pp. 555–566, 2009.
-  A. Desoky, “Jokestega: Automatic joke generation-based steganography methodology,” International Journal of Security & Networks, vol. 7, no. 3, pp. 148–160, 2012.
-  D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” Computer Science, 2014.
-  A. Desoky, “Notestega: Notes-based steganography methodology,” Information Systems Security, vol. 18, no. 4, pp. 178–193, 2009.
-  A. Majumder and S. Changder, “A novel approach for text steganography: Generating text summary using reflection symmetry,” Procedia Technology, vol. 10, no. 10, pp. 112–120, 2013.
-  O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, “Show and tell: A neural image caption generator,” in Computer Vision and Pattern Recognition, 2015, pp. 3156–3164.
-  S. Mahato, D. A. Khan, and D. K. Yadav, “A modified approach to data hiding in microsoft word documents by change-tracking technique,” Journal of King Saud University - Computer and Information Sciences, 2017.
-  G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks.” in CVPR, vol. 1, no. 2, 2017, p. 3.
-  G. Larsson, M. Maire, and G. Shakhnarovich, “Fractalnet: Ultra-deep neural networks without residuals,” arXiv preprint arXiv:1605.07648, 2016.
-  J. Lubacz, W. Mazurczyk, and K. Szczypiorski, “Principles and overview of network steganography,” IEEE Communications Magazine, vol. 52, no. 5, pp. 225–229, 2014.
-  T. Y. Liu and W. H. Tsai, “A new steganographic method for data hiding in microsoft word documents by a change tracking technique,” IEEE Transactions on Information Forensics & Security, vol. 2, no. 1, pp. 24–30, 2007.
-  Z. Zhou, Y. Mu, N. Zhao, Q. M. J. Wu, and C. N. Yang, Coverless Information Hiding Method Based on Multi-keywords. Springer International Publishing, 2016.
-  D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Readings in Cognitive Science, vol. 323, no. 6088, pp. 399–421, 1988.
-  R. Stutsman, C. Grothoff, M. Atallah, and K. Grothoff, “Lost in just the translation,” in ACM Symposium on Applied Computing, 2006, pp. 338–345.
-  W. Feller, “Law of large numbers for identically distributed variables,” An Introduction to Probability Theory and Its Applications, vol. 2, pp. 231–234, 1971.
-  W. Dai, Y. Yu, Y. Dai, and B. Deng, “Text steganography system using markov chain source model and des algorithm.” JSW, vol. 5, no. 7, pp. 785–792, 2010.
-  D. Jurafsky, Speech & language processing. Pearson Education India, 2000.
-  A. Thompson, “Kaggle,” https://www.kaggle.com/snapcrack/all-the-news/data.
-  D. A. Huffman, “A method for the construction of minimum-redundancy codes,” Proceedings of the IRE, vol. 40, no. 9, pp. 1098–1101, 1952.
-  X. Chen, H. Sun, Y. Tobe, Z. Zhou, and X. Sun, “Coverless information hiding method based on the chinese mathematical expression,” in International Conference on Cloud Computing and Security. Springer, 2015, pp. 133–143.
-  I. Avcibas, “Audio steganalysis with content-independent distortion measures,” IEEE Signal Processing Letters, vol. 13, no. 2, pp. 92–95, 2006.
-  A. Desoky, “Comprehensive linguistic steganography survey,” International Journal of Information & Computer Security, vol. 4, no. 2, 2010.
-  K. Bennett, “Linguistic steganography: Survey, analysis, and robustness concerns for hiding information in text,” 2004.
-  D. Zou and Y. Q. Shi, “Formatted text document data hiding robust to printing, copying and scanning,” in IEEE International Symposium on Circuits and Systems, 2005, pp. 4971–4974 Vol. 5.
-  Y. Luo and Y. Huang, “Text steganography with high embedding rate: Using recurrent neural networks to generate chinese classic poetry,” in ACM Workshop on Information Hiding and Multimedia Security, 2017, pp. 99–104.
-  X. Peng, Y. Huang, and F. Li, “A steganography scheme in a low-bit rate speech codec based on 3d-sudoku matrix,” in IEEE International Conference on Communication Software and Networks, 2016, pp. 13–18.
-  X. Ge, R. Jiao, H. Tian, and J. Wang, “Research on information hiding,” US-China Education Review, vol. 3, no. 5, pp. 77–81, 2006.
-  D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” Computer Science, 2014.
-  J. Blitzer, M. Dredze, and F. Pereira, “Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification.” in ACL 2007, Proceedings of the Meeting of the Association for Computational Linguistics, June 23-30, 2007, Prague, Czech Republic, 2007, pp. 187–205.
-  S. H. Low, N. F. Maxemchuk, and A. M. Lapone, “Document identification for copyright protection using centroid detection,” IEEE Transactions on Communications, vol. 46, no. 3, pp. 372–383, 1998.
-  A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts, “Learning word vectors for sentiment analysis,” in Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1. Association for Computational Linguistics, 2011, pp. 142–150.
-  P. Wayner, “Mimic functions,” Cryptologia, vol. 16, no. 3, pp. 193–214, 1992.
-  Y. F. Huang, S. Tang, and J. Yuan, “Steganography in inactive frames of voip streams encoded by source codec,” IEEE Transactions on information forensics and security, vol. 6, no. 2, pp. 296–306, 2011.
-  Y. Huang, C. Liu, S. Tang, and S. Bai, “Steganography integration into a low-bit rate speech codec,” IEEE transactions on information forensics and security, vol. 7, no. 6, pp. 1865–1875, 2012.
-  H. Tian, J. Liu, and S. Li, “Improving security of quantization-index-modulation steganography in low bit-rate speech streams,” Multimedia systems, vol. 20, no. 2, pp. 143–154, 2014.
-  H. H. Moraldo, “An approach for text steganography based on markov chains,” arXiv preprint arXiv:1409.0915, 2014.
-  M. H. Shirali-Shahreza and M. Shirali-Shahreza, “A new synonym text steganography,” in Intelligent Information Hiding and Multimedia Signal Processing, 2008. IIHMSP’08 International Conference on. IEEE, 2008, pp. 1524–1526.
-  H. Z. Muhammad, S. M. S. A. A. Rahman, and A. Shakil, “Synonym based malay linguistic text steganography,” in Innovative Technologies in Intelligent Systems and Industrial Applications, 2009. CITISIA 2009. IEEE, 2009, pp. 423–427.
-  U. Topkara, M. Topkara, and M. J. Atallah, “The hiding virtues of ambiguity: quantifiably resilient watermarking of natural language text through synonym substitutions,” in Proceedings of the 8th workshop on Multimedia and security. ACM, 2006, pp. 164–174.
-  A. Go, R. Bhayani, and L. Huang, “Twitter sentiment classification using distant supervision,” CS224N Project Report, Stanford, vol. 1, no. 12, 2009.
-  N. Chotikakamthorn, “Electronic document data hiding technique using inter-character space,” in Circuits and Systems, 1998. IEEE APCCAS 1998. The 1998 IEEE Asia-Pacific Conference on, 1998, pp. 419–422.
-  J. Fridrich, Steganography in digital media: principles, algorithms, and applications. Cambridge University Press, 2009.
-  N. F. Johnson and P. A. Sallee, “Detection of hidden information, covert channels and information flows,” Wiley Handbook of Science and Technology for Homeland Security, 2008.
-  M. H. Shirali-Shahreza and M. Shirali-Shahreza, “A new approach to persian/arabic text steganography,” in Ieee/acis International Conference on Computer and Information Science and Ieee/acis International Workshop on Component-Based Software Engineering,software Architecture and Reuse, 2006, pp. 310–315.
-  M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” vol. 8689, pp. 818–833, 2013.
-  S. Hochreiter, “The vanishing gradient problem during learning recurrent neural nets and problem solutions,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 6, no. 2, 1998.
-  N. Nikolaidis and I. Pitas, Robust image watermarking in the spatial domain. Elsevier North-Holland, Inc., 1998.
-  S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
-  J. Zhang, Y. Xie, L. Wang, and H. Lin, “Coverless text information hiding method using the frequent words distance,” 2017.
-  J. Zhang, J. Shen, L. Wang, and H. Lin, “Coverless text information hiding method based on the word rank map,” in International Conference on Cloud Computing and Security, 2016, pp. 145–155.
-  I. J. Cox and M. L. Miller, “The first 50 years of electronic watermarking,” Eurasip Journal on Advances in Signal Processing, vol. 2002, no. 2, pp. 1–7, 2001.
-  S. Bhattacharyya, “A survey of steganography and steganalysis technique in image, text, audio and video as cover carrier,” Journal of global research in computer science, vol. 2, no. 4, 2011.
-  S. K. Bandyopadhyay, D. Bhattacharyya, D. Ganguly, S. Mukherjee, and P. Das, “A tutorial review on steganography,” in International conference on contemporary computing, vol. 101, 2008, pp. 105–114.
-  N. Meghanathan and L. Nayak, “Steganalysis algorithms for detecting the hidden information in image, audio and video cover media,” international journal of Network Security & Its application (IJNSA), vol. 2, no. 1, pp. 43–55, 2010.
-  A. Joulin, E. Grave, P. Bojanowski, and T. Mikolov, “Bag of tricks for efficient text classification,” arXiv preprint arXiv:1607.01759, 2016.
-  Z. Yang, S. Jin, Y. Huang, Y. Zhang, and H. Li, “Automatically generate steganographic text based on markov model and huffman coding,” arXiv preprint arXiv:1811.04720, 2018.
-  S. Li, Y. Jia, and C.-C. J. Kuo, “Steganalysis of qim steganography in low-bit-rate speech signals,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 5, pp. 1011–1022, 2017.
-  Z. Lin, Y. Huang, and J. Wang, “Rnn-sm: Fast steganalysis of voip streams using recurrent neural network,” IEEE Transactions on Information Forensics & Security, vol. PP, no. 99, pp. 1–1, 2018.
-  B. Xiao, Y. Huang, and S. Tang, “An approach to information hiding in low bit-rate speech stream,” in Global Telecommunications Conference, 2008. IEEE GLOBECOM 2008. IEEE. IEEE, 2008, pp. 1–5.
-  S.-b. Li, H.-z. Tao, and Y.-f. Huang, “Detection of quantization index modulation steganography in g.723.1 bit stream based on quantization index sequence analysis,” Journal of Zhejiang University SCIENCE C, vol. 13, no. 8, pp. 624–634, 2012.
-  Z. Yang, X. Guo, Z. Chen, Y. Huang, and Y.-J. Zhang, “Rnn-stega: Linguistic steganography based on recurrent neural networks,” IEEE Transactions on Information Forensics and Security, 2018.
-  C. Wang and Q. Wu, “Information hiding in real-time voip streams,” in Multimedia, 2007. ISM 2007. Ninth IEEE International Symposium on. IEEE, 2007, pp. 255–262.
-  X. Luo, E. W. Chan, and R. K. Chang, “Tcp covert timing channels: Design and detection.” in DSN, 2008, pp. 420–429.
-  S. H. Sellke, C.-C. Wang, S. Bagchi, and N. B. Shroff, “Covert tcp/ip timing channels: theory to implementation,” in Proceedings of the 28th conference on computer communications (INFOCOM), 2009, pp. 2204–2212.
-  K. Ahsan, “Covert channel analysis and data hiding in tcp/ip,” Canada, University of Toronto, 2002.
-  B. Chen and G. W. Wornell, “Quantization index modulation: A class of provably good methods for digital watermarking and information embedding,” IEEE Transactions on Information Theory, vol. 47, no. 4, pp. 1423–1443, 2001.
-  N. Aoki, “A band extension technique for g.711 speech using steganography,” IEICE Transactions on Communications, vol. 89, no. 6, pp. 1896–1898, 2006.