Computer graphics is becoming an increasingly important medium for interactive techniques such as Virtual Reality (VR) and 3D printing. In order to protect the copyright of 3D objects, digital watermarks may be embedded into them. 3D watermarking and steganographic algorithms seek to hide information in 3D objects, which can then be used for a variety of applications. Information is hidden in 3D objects for various purposes: copyright protection, defining behaviour characteristics similar to the DNA of living beings, or hiding certain information for marketing purposes. Moreover, 3D objects can act as carriers of a covert communication channel when 3D steganography is applied. While watermarking seeks to robustly embed rather small codes, steganography embeds larger payload messages without enforcing robustness. All these approaches aim to hide information in such a way that the changes they produce in the 3D objects are not visible. On the other hand, steganalysis algorithms are being developed in order to determine whether information is embedded in a given media file. Several steganalysis algorithms have been proposed for audio signals [1, 2, 3], digital images [4, 5, 6, 7, 8] and video [9, 10, 11]. While 3D objects can be represented in various ways, their most common data representation is by means of meshes. Such irregular representations, modelling complex 3D shapes, are very different from the regular arrays representing audio, digital images or video. Consequently, existing image and video steganalysis algorithms cannot be successfully applied to 3D objects.
Most research on 3D watermarking involves modifying certain geometrical properties of the object, most often in a statistical manner, in such a way that there are no visible changes. Research on 3D watermarking started in 1997 when Ohbuchi et al. proposed two 3D information hiding algorithms using ratios of local geometric measurements . Cho et al. 
proposed two blind robust watermarking algorithms based on modifying the mean and variance of the distribution of the vertices' radial distances in the spherical coordinate system. Cayre and Macq proposed a steganographic approach for 3D triangle meshes whose key idea is to consider each triangle of the mesh as a two-state geometrical object embedding one bit. Luo and Bors proposed changing the statistics of geodesic distance distributions in 3D objects . The same authors proposed minimizing the surface distortion in 3D watermarking by using an optimization algorithm in . Among the information hiding algorithms, we mention a multi-layer 3D steganographic method  which embeds large payloads using the vertices' projections onto the principal axis of the object. This steganographic method can use several layers for embedding the information, thus significantly increasing the embedded payload. However, not all bits embedded by this method are retrievable and some are lost. More recently, Yang et al.  proposed a steganalysis-resistant watermarking algorithm which embeds the payload by changing the histogram of the radial coordinates of the vertices. This watermarking method produces less embedding distortion in the 3D objects than the methods proposed in . A distortion-free steganographic algorithm  embeds the information into meshes by permuting the order in which faces and vertices are stored. However, this algorithm is not robust to the vertex-reordering attack.
3D steganalysis has only very recently received the attention of the scientific community. The 3D steganalysis approach proposed in  considered various features including the norms of vertices in the Cartesian and Laplacian coordinate systems , the dihedral angle of faces adjacent to the same edge, and the face normal. Parameters representing the statistics of these features were used as inputs to a quadratic classifier. Yang et al.  proposed a new steganalysis algorithm, specifically designed for the mean-based watermarking algorithm from
. This steganalysis algorithm first estimates the number of bins through an exhaustive search and then detects the presence of the secret message by a tailor-made normality test. A steganalytic approach, specially designed to address the cover source mismatch scenario in 3D steganalysis by selecting the features which are robust to variations of the cover source, was proposed in . Cryptanalysis aspects of 3D watermarking in a larger context have been discussed in the review paper from .
The aims of a steganalyzer are difficult to achieve because the stego- and cover-objects are supposed to be almost identical under human visual observation. Moreover, the steganalyzer should be able to find such subtle changes when any hiding algorithm may have been used on a large diversity of 3D shapes. The ability of a steganalyzer to generalize from the training set to a large testing set is a very demanding requirement as well. The steganalyzer proposed in this study has the following processing stages: extraction and combination of multiple 3D features, statistical modelling of the features, and classification. We propose some new features for 3D steganalysis, such as the local curvatures and the normal vectors calculated at vertex locations, as well as the use of the vertex representation in the spherical coordinate system for 3D steganalytic feature representation. The statistics of sets of 3D features are then fed into machine learning algorithms. The Fisher Linear Discriminant (FLD) ensemble  is the most widely used classifier for image steganalysis because of its ability to quickly find the non-linear separation boundaries between features characterizing the cover- and stego-images. In this research study we propose to use the FLD ensemble as well as the Support Vector Machine (SVM) for 3D steganalysis. The discriminating ability of these classifiers for 3D steganalysis is then compared against that of the quadratic classifier proposed in . The description of the 3D steganalysis framework formulated in this study is provided in Section II. The 3D feature set used by the steganalyzer is presented in detail in Section III, and the algorithms used for training the steganalyzers in Section IV. The experimental results are provided in Section V, while the conclusions of this study are outlined in Section VI.
II 3D Steganalysis Framework
In this section, we give a brief introduction to the 3D steganalysis framework. Let us assume that we have a mesh representing the shape of a 3D object, considered as the cover mesh, and its corresponding watermarked stego mesh , containing the vertex sets and , where and represent the number of vertices in the cover-object and stego-object , their face sets F, , and their edge sets E and , respectively. The steganalysis framework is treated as a machine learning problem, consisting of training and testing stages. The training of the steganalyzer has the following processing steps: calibration, feature extraction and learning, as illustrated in Figure 1. These processing steps produce a parameter set discriminating between the 3D objects carrying hidden information and those that do not. The testing stage includes the same calibration and feature extraction steps as in the training stage, while applying the parameters learnt during the training to the features extracted from sets of test objects.
Firstly, a series of preprocessing steps are applied in order to ensure that the cover-object and stego-object are normalized such that their size is constrained within a cube with unit sides. In the case of image steganalyzers, it was observed that the difference between the stego-image and its smoothed version is more significant than the difference between the cover-image and its corresponding smoothed version [26, 27]. Similarly, it is expected that the difference between a mesh and its smoothed version is larger for a stego mesh than for a cover mesh. In most 3D watermarking algorithms, the changes produced in the stego-object following the watermark embedding can be likened to noise. Consequently, when smoothing a cover mesh, the resulting modifications will be smaller than those obtained when smoothing its corresponding stego mesh. We apply Laplacian smoothing to both the cover-object and the stego-object , resulting in their smoothed versions and .
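The pre-processing step above can be sketched as follows. This is a minimal illustration of umbrella-operator Laplacian smoothing (the function name and the adjacency-list construction are our own, not from the original implementation):

```python
import numpy as np

def laplacian_smooth(vertices, edges, weight=0.3, iterations=3):
    """Umbrella-operator Laplacian smoothing: each iteration moves every
    vertex toward the centroid of its 1-ring neighbours."""
    V = np.asarray(vertices, dtype=float).copy()
    neighbours = [[] for _ in range(len(V))]
    for i, j in edges:                       # build adjacency lists
        neighbours[i].append(j)
        neighbours[j].append(i)
    for _ in range(iterations):
        centroids = np.array([V[nb].mean(axis=0) if nb else V[k]
                              for k, nb in enumerate(neighbours)])
        V = V + weight * (centroids - V)     # simultaneous update of all vertices
    return V
```

With a weight of 0.3 and three iterations (the settings used in the experiments of this paper), each pass moves a vertex 30% of the way toward the average of its neighbours.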
Features characterizing the local geometry of 3D objects are extracted before and after smoothing the stego-objects and cover-objects, respectively. The discriminative features should be chosen such that they effectively capture the differences between the two versions of the mesh for a given object. In Section III we propose a set of new 3D features for steganalysis. Then, the statistics of the differences between the features extracted from the cover-object and the smoothed cover-object are compared with those of the differences between the features extracted from the stego-object and its smoothed version [20]. The statistics of the various combinations of 3D features are eventually used as inputs to machine learning algorithms. The classifier separates the feature space defining the stego-objects from that of the cover-objects. As in the case of supervised computational intelligence algorithms, we have a training stage, where a set of parameters is estimated, and then a testing stage, where the classifier, using the learnt set of parameters, is applied to a different data set. Firstly, specific feature vectors are estimated from sets of cover- and stego-meshes corresponding to the same set of 3D objects. It has been shown that breaking the cover-stego pairs correspondence may lead to suboptimal performance . The features should represent properties that discriminate the cover-objects from their stego-object counterparts. Furthermore, choosing the appropriate machine learning algorithm and its training procedure are crucial, as steganalyzers trained by different machine learning methods can provide different results on identical training sets. In this research study we propose to use the FLD ensemble  and Support Vector Machine (SVM) methods for training the steganalyzer.
III Features for 3D Steganalysis
3D watermarking and steganographic methods are specifically designed to embed information in a way that does not visibly alter the surface of the objects [15, 16]. Nevertheless, the changes produced on the surface of 3D objects may be identified by steganalysis. Depending on the specific algorithm used, such changes could be randomly distributed on the surface of the 3D mesh  or located in certain specific regions of the object . Artefacts produced in objects by the information hiding procedure can be likened to low-level protuberances on the mesh surface and could consequently be identified by feature detection algorithms. In the following we outline some 3D local features which can be used for identifying whether objects have been watermarked or not. Such feature detectors range from very simple vertex displacement measurements to algorithms that take the local neighbourhoods into account and measure specific shape characteristics.
III-A The YANG40 Features
The 40-dimensional feature vector YANG40 contains the most effective features from YANG208, used in , which correspond to the statistics of features evaluated from the vertices, edges and faces that make up the given meshes. For YANG40 we remove from YANG208 certain features which provide lower performance and, in order to reduce the dimensionality, abandon the strategy used in  of treating vertices with valence less than, equal to, or greater than six separately.
Let us denote by , the feature vector representing differences between the cover-object and its smoothed version . Similarly, we evaluate , measuring the differences between the stego-object and its smoothed version . The first six components of represent the absolute distance, measured along each coordinate axis between the locations of vertices of the meshes and , in both the Cartesian and Laplacian coordinate systems :
where and represent the -coordinate of in Cartesian and Laplacian coordinate systems, respectively, . Next, we evaluate the changes produced in the Euclidean distance between vertex locations and the center of the object, representing the vertex norms. The absolute differences between the vertex norms of pairs of corresponding vertices in the meshes and are calculated as:
where , , represent the vector norms in Cartesian and Laplacian coordinates, respectively, for .
Another feature evaluates the local mesh surface variation by calculating the changes in the orientations of faces adjacent to the same edge. This is measured by the absolute differences between the dihedral angles of neighbouring faces, calculated in the plane perpendicular to the common edge , where represents the number of edges in the object :
where the calculation of the dihedral angle is illustrated in Figure 2. Changes in the local surface orientation are measured by calculating the angle between the surface normals , of the faces from the cover-object , and their correspondents , from the smoothed cover-object :
where . The 40-dimensional feature vector YANG40 represents the first four statistical moments: mean, variance, skewness and kurtosis of the logarithm of the ten vectors , described above.
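The reduction of each feature vector to four statistical moments of its logarithm can be sketched as below. The small `eps` guard against `log(0)` is our own assumption, and the kurtosis here is the non-excess (fourth standardized moment) variant:

```python
import numpy as np

def moment_stats(phi, eps=1e-12):
    """Return [mean, variance, skewness, kurtosis] of log(phi).
    eps guards against log(0); kurtosis is the non-excess variant."""
    x = np.log(np.asarray(phi, dtype=float) + eps)
    m, v = x.mean(), x.var()
    skew = ((x - m) ** 3).mean() / v ** 1.5 if v > 0 else 0.0
    kurt = ((x - m) ** 4).mean() / v ** 2 if v > 0 else 0.0
    return np.array([m, v, skew, kurt])
```

Applying this to each of the ten feature vectors yields the 40 components of YANG40.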
III-B The Vertex Normal and Curvature Features
In the following we propose to use some additional 3D features. The vertex normal is the weighted sum of the normals of all faces that contain the vertex . A vertex normal is shown in Figure 2 and is computed as:
where represents the -th face that contains the vertex , represents its area, and are the two edges containing in the face . The change between the vertex normals is calculated as a dot product:
where is the normal for a vertex from the smoothed object .
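A minimal sketch of the area-weighted vertex normal and the dot-product change feature is given below. The helper names are hypothetical; the sketch relies on the fact that the un-normalised cross product of two triangle edges has magnitude equal to twice the face area, so summing it per vertex weights each face normal by its area:

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Area-weighted vertex normals: the raw cross product of two face
    edges has length 2 * area, so accumulating it weights by face area."""
    V = np.asarray(vertices, dtype=float)
    N = np.zeros_like(V)
    for f in faces:
        a, b, c = V[f[0]], V[f[1]], V[f[2]]
        fn = np.cross(b - a, c - a)          # |fn| = 2 * face area
        for vid in f:
            N[vid] += fn
    lens = np.linalg.norm(N, axis=1, keepdims=True)
    return N / np.where(lens > 0, lens, 1.0)

def normal_change(n_cover, n_smooth):
    """Per-vertex feature: dot product between the normals of the mesh
    before and after smoothing (1 means the orientation is unchanged)."""
    return np.einsum('ij,ij->i', n_cover, n_smooth)
```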
Next we consider the local shape curvatures, calculated according to the Gaussian curvature and the curvature ratio formula used in
. In differential geometry, the two principal curvatures of a surface are provided by the eigenvalues of the shape operator, calculated at the location of a vertex using the vertices from its first neighbourhood. Such curvatures measure how the local surface bends by different amounts in orthogonal directions at that point. The Gaussian curvature is defined as:
where is the minimum principal curvature and is the maximum principal curvature at a given point . A special case is that of singularity in the shape operator, when we have a linear dependency in one direction or in both. In this case we have locally a planar region, which is characterized by a linear relationship among its coordinates and consequently by zero curvature. In our study we found that the curvature ratio proposed in , defined as
is effective to be used as a feature when training steganalyzers. The Gaussian curvature from equation (9) and the curvature ratio from (10) have been shown to be sensitive to very small mesh modifications and have been used to model 3D shape characteristics in various applications. The two principal curvatures are evaluated at the location of each vertex in the cover-object and for its corresponding vertex from the smoothed object . Their absolute differences represent the features and :
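Assuming the two principal curvatures per vertex have already been estimated from the shape operator, the Gaussian curvature and curvature ratio features could be computed as follows (the `eps` guard for the locally planar case is our own assumption):

```python
import numpy as np

def curvature_features(k_min, k_max, eps=1e-12):
    """Gaussian curvature K = k_min * k_max and curvature ratio
    r = k_min / k_max; eps guards the locally planar case k_max = 0."""
    k_min = np.asarray(k_min, dtype=float)
    k_max = np.asarray(k_max, dtype=float)
    return k_min * k_max, k_min / (k_max + eps)

def curvature_diff(kc_min, kc_max, ks_min, ks_max):
    """Absolute differences of both curvature features between the
    cover (c) and smoothed (s) versions of the mesh."""
    Kc, rc = curvature_features(kc_min, kc_max)
    Ks, rs = curvature_features(ks_min, ks_max)
    return np.abs(Kc - Ks), np.abs(rc - rs)
```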
III-C The Spherical Coordinate Features
Many information hiding algorithms embed changes, directly or indirectly, in the spherical domain. Consequently, in the following we consider the spherical coordinate system for defining characteristics that can be used by the steganalyzers. We convert the 3D objects from the Cartesian coordinate system to the spherical coordinate system, taking the center of the object as the origin.
The spherical coordinate system specifies a point in the 3D space by a radius and two angles and the link to the Cartesian coordinate system is given by:
where represents the Cartesian coordinates of the vertex, and its spherical coordinates, representing , the Euclidean norm from a fixed origin, , the azimuth angle, while is the elevation angle, as illustrated in Figure 3. We compute the absolute differences of the spherical coordinates of all vertices, between the original object and the smoothed object in the spherical coordinate system:
where . The center of the spherical coordinate system is , representing the center of the 3D object calculated by averaging all the vertices in the object, as shown in Figure 3.
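The Cartesian-to-spherical conversion about the object centre can be sketched as follows (a minimal illustration; the function name and the numerical guards are assumptions):

```python
import numpy as np

def to_spherical(vertices):
    """Convert vertex positions to (rho, theta, phi) about the object
    centre: radial distance, azimuth and elevation, respectively."""
    V = np.asarray(vertices, dtype=float)
    P = V - V.mean(axis=0)                  # centre = average of all vertices
    rho = np.linalg.norm(P, axis=1)
    theta = np.arctan2(P[:, 1], P[:, 0])    # azimuth in the xy-plane
    phi = np.arcsin(np.clip(P[:, 2] / np.maximum(rho, 1e-12), -1.0, 1.0))
    return rho, theta, phi
```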
We also use statistics of the edges, defined in the spherical coordinate system. In this case, edges are defined by the differences in the spherical coordinates of the two vertices that define the edge ends:
where is the edge connecting vertices and , and . The corresponding features extracted from both the original object and its smoothed version are
where, for example, is obtained from the -th edge of the original object, is its corresponding edge from the smoothed object, for , is the total number of edges in object .
Firstly, we apply the logarithm to all features in order to reduce the range of their values and enforce a degree of evenness in their distribution. We then consider the four statistical moments, representing the mean, variance, skewness and kurtosis, of the logarithm of all the vertex normals, Gaussian curvatures, curvature ratios and the spherical coordinate features calculated as indicated above, as in the case of the feature set YANG40 defined in Section III-A. In this way we define a feature vector of 76 dimensions, which we call LFS76. The four moments capture the statistical characteristics of the distribution of the features almost entirely, representing their center and the deviation from the center, as indicated by the mean and variance, respectively. The degree of symmetry in the logarithm of the feature values is indicated by the skewness, while the level of peakedness and the presence of specific values in the statistical distribution is indicated by the kurtosis.
A subset of the proposed feature set, LFS52, was used in . That feature set did not include the 24-dimensional feature vector extracted in the spherical coordinate system of 3D objects. A higher-dimensional feature set, used in , is represented by the 208-dimensional vector defined as YANG208. This feature set considers the statistics of the first eight features described above separately on vertex sets with valences less than, equal to, or greater than six. Moreover, the YANG208 feature set also considers the histogram differences of the ten features defined in Section III-A.
IV Training Steganalyzers
In the following we describe how we can use machine learning methods as 3D steganalyzers, as illustrated in Figure 1. We consider three machine learning methods: Quadratic Discriminant Analysis (QDA), Fisher Linear Discriminant (FLD) ensemble, and Support Vector Machine (SVM), for training the steganalyzers using a training set of features extracted from pairs of stego-objects and cover-objects. The machine learning algorithms estimate the parameters defining the nonlinear separation surfaces between the spaces defined by the feature sets corresponding to the cover-objects, , and the stego-objects, .
QDA fits multivariate Gaussian distributions to the feature data of each class:
where , , represent the mean and the covariance matrix of each Gaussian component. These functions are then used for modelling boundaries between the classes of stego and cover-object data spaces by means of a quadratic discriminative function:
is the prior probability of class, with .
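A minimal sketch of such a quadratic discriminant, fitting one multivariate Gaussian per class with equal priors by default, could look as follows (the regularisation term added to the covariance is our own assumption, used to keep the matrix invertible):

```python
import numpy as np

def qda_fit(X_cover, X_stego):
    """Fit one multivariate Gaussian (mean, covariance) per class."""
    params = []
    for X in (X_cover, X_stego):
        mu = X.mean(axis=0)
        S = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularised
        params.append((mu, S))
    return params

def qda_predict(x, params, priors=(0.5, 0.5)):
    """Quadratic discriminant g_k(x) = -0.5 (x-mu)' S^-1 (x-mu)
    - 0.5 log|S| + log prior; return the class with the larger score."""
    scores = []
    for (mu, S), p in zip(params, priors):
        d = x - mu
        scores.append(-0.5 * d @ np.linalg.solve(S, d)
                      - 0.5 * np.log(np.linalg.det(S)) + np.log(p))
    return int(np.argmax(scores))            # 0 = cover, 1 = stego
```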
The FLD ensemble classifier was successfully used in image steganalysis  and is characterized by a high detection accuracy of stego-images at a relatively low computational cost. The FLD ensemble consists of a set of base learners, each trained on features randomly selected from the feature space of and , corresponding to cover- and stego-objects. The random subspace dimensionality and the number of base learners are found by minimizing the out-of-bag (OOB) error, an estimate of the testing error calculated on bootstrap samples of the training set, .
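The FLD ensemble idea, a majority vote of Fisher discriminants each trained on a random feature subspace, can be sketched as below. For brevity, the OOB-driven selection of the subspace dimensionality and the number of base learners is replaced by fixed values, and all names are our own:

```python
import numpy as np

def fit_fld(Xc, Xs):
    """Fisher linear discriminant: direction maximising between-class over
    within-class scatter, with the threshold at the projected midpoint."""
    m0, m1 = Xc.mean(axis=0), Xs.mean(axis=0)
    Sw = np.cov(Xc, rowvar=False) + np.cov(Xs, rowvar=False)
    Sw += 1e-6 * np.eye(Sw.shape[0])          # regularise a singular scatter
    w = np.linalg.solve(Sw, m1 - m0)
    b = -0.5 * (m0 + m1) @ w
    return w, b

def fld_ensemble_predict(Xc, Xs, X, d_sub=2, n_learners=11, seed=0):
    """Majority vote of base FLD learners, each trained on a random
    d_sub-dimensional feature subspace (OOB tuning omitted here)."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(X))
    for _ in range(n_learners):
        idx = rng.choice(Xc.shape[1], size=d_sub, replace=False)
        w, b = fit_fld(Xc[:, idx], Xs[:, idx])
        votes += (X[:, idx] @ w + b > 0)
    return (votes > n_learners / 2).astype(int)   # 1 = stego, 0 = cover
```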
Another steganalyzer considered in this study is the SVM with a Gaussian kernel, a well-known classifier which efficiently finds the separation boundary providing the optimal margin between two classes. The training of SVMs in the kernel space is performed by solving a convex optimization problem:
where represents the 3D object class (cover-objects or stego-objects), is a regularization parameter, and represents the offset to the origin of the coordinate system. The non-negative slack variables measure the degree of misclassification of data which are estimated using quadratic programming. The regularization parameter controls the trade-off between the training error and model complexity 
. The kernel considered is the Gaussian radial basis function:
where is the kernel scale parameter, which can be seen as the inverse of the radius of influence of the samples selected as support vectors. The kernel scale parameter and the regularization parameter used in the training of the proposed SVM-based 3D steganalyzers are selected empirically, as described in the Appendix. A new data sample is classified using the SVM according to the following formula:
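The Gaussian radial basis kernel itself can be computed as below (a minimal sketch; in practice the kernel evaluation is handled internally by LIBSVM):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix: K[i, j] = exp(-gamma * ||X_i - Y_j||^2)."""
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))   # clamp tiny negatives
```

A larger kernel scale shrinks each support vector's radius of influence, so the boundary becomes more flexible; this is why it is tuned jointly with the regularization parameter by cross-validated grid search.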
An important property of the steganalyzers is their ability to avoid overfitting on the training set while generalizing well in the case of data sets which are not similar to those used for training.
V Experimental results
In the following we provide the experimental results of the proposed 3D steganalysis methodology. During the tests we consider detecting the information embedded in 3D objects by three different steganographic methods: the steganalysis-resistant 3D watermarking proposed in , the mean-based watermarking method from , and the high-capacity embedding method proposed in . For the experiments we use the Princeton Mesh Segmentation project  database, which consists of 354 3D objects represented as meshes. The shapes of ten objects from this database are shown in Figure 4. For each steganalyzer, we split the 354 pairs of cover- and stego-objects into 260 pairs used for training and 94 pairs used for testing. We consider 30 different splits of the given 3D object database into training and testing data sets. The results are given by considering two measures. The first is the median value of the sum of false negatives (missed detections) and false positives (false alarms) over all 30 trials, while the other is the median value of the area under the Receiver Operating Characteristic (ROC) curve of the detection results, evaluated over the 30 splits of the data into training and testing sets.
During the pre-processing, we first apply three iterations of Laplacian smoothing to both cover- and stego-objects, with an updating weight of 0.3. The next stage consists of feature extraction and statistical modelling. We consider the proposed feature set LFS76, discussed in Section III, and compare its results against YANG208, proposed in , and its simplified version, called YANG40. We also consider the feature set combining YANG40 with the vertex normal feature, VNF4, representing the mean, variance, skewness and kurtosis of from equation (8), and the combination of YANG40 with the curvature feature, CF8, representing the mean, variance, skewness and kurtosis of and from equations (11) and (12). We also compare LFS76 with the feature set proposed in our previous work , LFS52, which consists of the YANG40, VNF4 and CF8 features.
Figures 5 (a) and (b) show the histograms of the dihedral angles , calculated according to equation (5), for the cover- and stego-object, respectively, for the object “Head statue” shown in Figure 7(f). The histograms of the logarithm of these features are shown in Figures 5 (c) and (d), for the cover- and stego-objects, respectively. Figures 6 (a) and (b) show the histograms of the vertex normal feature calculated according to equation (8), while Figures 6 (c) and (d) show the corresponding histograms of logarithms for the cover-object “Horse” shown in Figure 4 and its corresponding stego-object embedded by the steganographic method from . From these figures we can observe that, following the application of the logarithm, the distributions of the feature components become close to normal distributions, which makes it easier to model the differences between the distributions of cover- and stego-objects using the four statistical moments: mean, variance, skewness and kurtosis.
The steganalyzers are trained as binary classifiers implemented using three methods: the Quadratic Discriminant Analysis (QDA), Fisher Linear Discriminant (FLD) ensemble and the Support Vector Machine (SVM) with Gaussian kernel, described in Section IV. The quadratic discriminant that fits multivariate normal densities with covariance estimates  was used in  as well. The implementation of FLD ensemble is an extension of the version proposed in . When training the SVM classifiers, the optimal values for the parameters from (18) and from (22), are found by grid-search using five-fold cross validation, as detailed in the Appendix. The implementation of SVM is based on LIBSVM .
V-A Steganalysis of the information hiding methods
During the generation of the stego-objects using the steganalysis-resistant watermarking method proposed in , we consider multiple values for the parameter which determines the number of bins in the histogram of the radial distance parameter of the vertices where information is embedded. According to , the upper bound of the embedding capacity is . In our experiments we set the parameter and thus obtain multiple sets of stego-objects. Another parameter in the watermarking method from  is , which controls the robustness of the embedding method. In order to keep the embedding distortion at a relatively low level, we set this parameter to 20. If the smallest number of elements in the bins of an object is less than 20, we choose equal to the smallest nonzero number of elements in the bins. Examples of stego-objects obtained using the embedding method in  are shown in Figure 7(a) and Figure 8(a), where . The absolute differences of the vertex normals, the curvature ratios, the azimuth angles and the radial distances between the stego-object and its corresponding cover-object, representing the features , , and , detected on these stego-objects are shown in Figures 7 (b), (c), (d) and (e), and Figures 8 (b), (c), (d) and (e), for the two objects “Head statue” and “Horse,” respectively.
Figure 9 shows the detection errors for the watermarking method  using the three steganalyzers, QDA, SVM and FLD ensembles, trained with the six combinations of feature sets formed as mentioned above. It can be seen from Figure 9 that LFS76 shows the best performance among the six combinations when using any of the three machine learning methods. We have observed that as the value of increases, the detection error tends to increase as well. This happens because a larger leads to fewer elements in each bin, so fewer vertices need to be changed to embed a single bit.
For the mean-based watermarking method from , we consider various values for the watermark strength and message payload while fixing the incremental step size to . An example of a stego-object obtained using the watermarking method from  is shown in Figure 7 (f), where the watermark strength factor is set as and the message payload as 64 bits. The absolute differences of the features, , , and , between the stego-object and its corresponding cover-object are shown in Figures 7 (g), (h), (i) and (j). From these figures it can be observed that each feature identifies specific differences between the cover- and stego-objects, which usually do not overlap with each other.
Figure 10 depicts the median value of the detection errors of the watermarking algorithm proposed in  using the steganalyzers trained as QDA classifiers, FLD ensembles and SVM classifiers over the six feature combinations, chosen as described above, and applied on the testing set for 30 independent data splits. In Figures 10 (a), (b) and (c) we show the results when the watermarking strengths are 0.02, 0.04, 0.06, 0.08 and 0.1, while the message length is fixed to 64 bits. From these figures we can observe that as the watermarking strength increases, all steganalyzers provide better detection accuracy. This is due to the fact that more significant changes are produced in the 3D object surface by watermarks that have stronger embedding parameters. Comparing the feature sets, it is evident that YANG40 has better performance than YANG208 when using the QDA and SVM classifiers. Although YANG40 is a simplified version of YANG208, it preserves the most effective feature subsets in YANG208 and reduces the dimensionality in order to avoid overfitting. Combining either VNF4 or CF8 with YANG40 yields better performance than using YANG40 alone. After adding the spherical coordinate features to the feature set LFS52, which was used in , the proposed LFS76 feature set achieves the best steganalysis performance. In Figures 10 (b), (d) and (f) we show the results when increasing the message payload from 16 to 32, 48 and 64 bits, while keeping the watermarking strength at 0.04. The LFS76 feature set provides much better detection results than the other feature sets in all the cases when testing the steganalyzers.
When using the high-capacity steganographic method from , we increase the number of layers from 1 to 10 and set the number of intervals to 10000. Increasing the number of embedding layers in this steganographic method corresponds to increasing the payload capacity. During the embedding, all the vertices in the mesh are used as payload carriers, except for three vertices which are used as references for the extraction process. An example of a stego-object obtained using the steganographic method from  is shown in Figure 7 (k), where the number of layers is 10. The absolute differences of the features , , and between the stego-object and its corresponding cover-object are shown in Figures 7 (l), (m), (n) and (o).
The results provided by the three steganalyzers, using the QDA classifier, FLD ensemble and SVM, when increasing the number of layers are given in the plots from Figures 11 (a), (b) and (c). From these plots it can be observed that the proposed set of features LFS76, used for training the FLD ensemble, provides the best results for any number of layers used for embedding. When the steganalyzers are trained as QDA classifiers, LFS76 provides the best results in most cases, except when the number of layers is 2, 5, or 9. When the steganalyzers are trained as SVM classifiers, the performance of the feature set LFS76 is similar to that achieved when using LFS52. It can be observed that the advantage of LFS76 over LFS52 in detecting the steganographic method from  is not as large as that achieved in detecting the changes produced by the two watermarking methods from  and . This is because the embedding method from  does not produce modifications in the spherical coordinate system, which makes the spherical coordinate features less useful for detecting the changes produced by the embedding in this case. Another interesting point is that the detection error for  does not decline when the embedding capacity increases. The reason is that, according to the multi-layer embedding framework applied in , the distortions produced in the objects are always controlled during the embedding.
V-B Analysing the efficiency of features for steganalysis
In order to investigate the contribution of the different categories of features from the set LFS76 to the steganalysis, we use the relevance between the feature vectors and the class label to assess each feature's importance. The relevance is measured using the Pearson correlation coefficient,
where is the -th feature of a given feature set, , where is the dimensionality of the input feature, is the class label indicating whether the class corresponds to a cover or a stego object, represents the covariance and
is the standard deviation of. The Pearson correlation coefficient is well known as a measure of the linear dependence between two variables . Then we set as the value of the relevance, where indicates a highly linear relationship between the feature and the class label, corresponding to a better discriminant ability of that feature.
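The relevance measure can be computed in a few lines; the sketch below assumes the feature values and class labels arrive as plain Python lists, one entry per training object:

```python
import math

def relevance(feature, labels):
    """Relevance R = |rho(f, c)| of one feature to the class label,
    with rho the Pearson correlation coefficient of equation (24).

    feature: one value per object
    labels:  0 for a cover object, 1 for a stego object
    """
    n = len(feature)
    mf = sum(feature) / n
    ml = sum(labels) / n
    cov = sum((f - mf) * (c - ml) for f, c in zip(feature, labels)) / n
    sf = math.sqrt(sum((f - mf) ** 2 for f in feature) / n)
    sl = math.sqrt(sum((c - ml) ** 2 for c in labels) / n)
    return abs(cov / (sf * sl))
```

A feature that perfectly separates covers from stego objects yields a relevance of 1, while an uninformative feature yields a value near 0.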
The analysis is conducted on the features extracted from the 354 cover objects used above and on three sets of corresponding stego objects, produced by the watermarking and steganographic algorithms considered above. We set the parameter K of the steganalysis-resistant watermarking algorithm of Yang et al. to 128. For the watermarking method of Cho et al., in order to find a balance between the watermarking strength and its undetectability, we set the watermarking strength to 0.04 and embed a payload of 64 bits. For the steganographic method of Chao et al., we consider ten layers of embedding.
We split the features from the set LFS76 into 10 categories according to the aspect of the local shape geometry they represent: 1, the vertex position in the Cartesian coordinate system; 2, the vertex norm in the Cartesian coordinate system; 3, the vertex position in the Laplacian coordinate system; 4, the vertex norm in the Laplacian coordinate system; 5, the face normal; 6, the dihedral angle; 7, the vertex normal; 8, the curvature; 9, the vertex position in the spherical coordinate system; 10, the edge length in the spherical coordinate system. The relevance of all the features from LFS76 is calculated according to (24), and the averaged relevances of the features in each category are shown in Figure 12. From Figure 12 we can observe that the newly proposed features, represented by labels 7, 8, 9 and 10, have relatively high relevance to the class label. More specifically, in Figure 12 (a), the features characterizing the edge length in the spherical coordinate system (label 10) achieve the highest relevance, while in Figures 12 (b) and (c), the vertex normal feature (label 7) obtains the highest and the second highest relevance, respectively. It is interesting that the dihedral angle (label 6) shows high relevance to the class label in the first two cases, but very low relevance when the stego objects are generated by the watermarking method of Yang et al. This may happen because all the vertices of a mesh are slightly changed by the other two embedding methods, while the changes are scattered among the vertices in the case of the method of Yang et al., as can be observed from Figures 7 (b), (g) and (l). In addition, since the watermarking method of Yang et al. embeds information by changing the histogram of the radial coordinates of the vertices, neighbouring vertices tend to be shifted in the same direction, so the dihedral angles may be relatively well preserved after the embedding.
The watermarking method of Cho et al. shifts the vertices in a similar way, which explains why the relevance of the dihedral angle in its case is lower than that of the steganographic method of Chao et al.
In the following, we gradually increase the feature set used for training the steganalyzers, from YANG40 to LFS52 and then to LFS76, and compare with YANG208. YANG40 includes the features represented by labels 1-6 in Figure 12, the features represented by labels 1-8 form LFS52, and labels 1-10 correspond to LFS76. In Figure 13 we show the Receiver Operating Characteristic (ROC) curves obtained when the YANG208, YANG40, LFS52 and LFS76 feature sets are used for training the three steganalyzers, when detecting the changes produced by the three watermarking and steganographic algorithms. The parameters of the information hiding algorithms are the same as those used above when calculating the features' relevance to the class label. The ROC curves in Figure 13 plot the true positive rate against the false positive rate for various threshold settings; a larger area under the ROC curve means that the classifier has better detection accuracy. Figure 13 shows that, with the addition of the new features to YANG40, the feature sets LFS52 and LFS76 achieve better results and surpass the performance of YANG208. Overall, the proposed feature set LFS76 provides the best results in all nine cases (three classifiers times three information hiding algorithms).
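The area under the ROC curve used as the evaluation criterion can be computed directly from detector scores via its rank-based formulation; a minimal sketch with hypothetical cover and stego score lists:

```python
def roc_auc(cover_scores, stego_scores):
    """Rank-based area under the ROC curve: the probability that a
    randomly chosen stego score exceeds a randomly chosen cover score
    (ties count 1/2).  Higher scores are assumed to mean 'more stego'."""
    wins = 0.0
    for s in stego_scores:
        for c in cover_scores:
            if s > c:
                wins += 1.0
            elif s == c:
                wins += 0.5
    return wins / (len(stego_scores) * len(cover_scores))
```

A value of 1.0 indicates perfect separation of stego from cover objects, while 0.5 corresponds to chance-level detection.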
[Tables I-III: median areas under the ROC curve (with standard deviations) for the steganalyzers detecting each of the three information hiding methods, using the QDA classifier, SVM and FLD ensemble.]
Tables I, II and III provide the median values and the standard deviations of the area under the ROC curve for the steganalysis methods, using six combinations of feature sets, over 30 independent splits of the training/testing set. It can be seen from Tables I, II and III that the areas under the ROC curves of the steganalyzers increase as new features, such as VNF4 and CF8, are added to the YANG40 feature set; the benefit of adding VNF4 is generally slightly larger than that of adding CF8. After the features corresponding to the spherical coordinate system are added to LFS52, the resulting LFS76 feature set yields larger areas under the ROC curve than any other combination of features in most cases. Meanwhile, the FLD ensemble achieves the best performance among the three classifiers in detecting the two watermarking methods. Among the different combinations of feature sets and machine learning methods used for 3D steganalysis, the steganalyzer using the LFS76 feature set with the SVM classifier produces the best results in detecting the changes produced in 3D objects by the steganographic algorithm.
The task of a 3D steganalyzer is very challenging because it has to detect very small differences between stego-objects and cover-objects. In this research study, we propose using the statistics of new local shape features as inputs for 3D steganalyzers. We analyze various local features used for 3D steganalysis by evaluating their relevance to the class label and by testing their performance experimentally. The first four statistical moments of various 3D feature sets are used for training steganalyzers with three machine learning methods, namely the quadratic discriminant, the Fisher Linear Discriminant (FLD) ensemble and the Support Vector Machine (SVM). After training, these steganalyzers are used for differentiating stego-objects from cover-objects. The experimental results show that the proposed 3D feature sets, when used as inputs to the SVM and the FLD ensemble, provide the best results for the steganalysis of 3D embeddings produced by three different information hiding algorithms. In future studies we will assess the generalization ability of the proposed 3D steganalyzers.
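The feature extraction step above reduces each distribution of local-feature values to its first four statistical moments; a minimal sketch, assuming the values are collected in a plain list:

```python
import math

def moments(values):
    """First four statistical moments of one local-feature
    distribution: mean, variance, and standardized skewness and
    kurtosis.  Together these form the steganalytic feature vector
    entries contributed by that local feature."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    sd = math.sqrt(var)
    skew = sum(((v - mean) / sd) ** 3 for v in values) / n if sd else 0.0
    kurt = sum(((v - mean) / sd) ** 4 for v in values) / n if sd else 0.0
    return mean, var, skew, kurt
```

Concatenating these four moments over every local feature yields the fixed-length vectors, such as LFS76, used to train the classifiers.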
When using the SVM with a Gaussian kernel, two parameters need to be set prior to training: the regularization parameter C and the width γ of the Gaussian kernel. We apply a grid search on (C, γ) using 5-fold cross-validation on the training set, which is generated by a random split of the data set into 260 pairs of cover- and stego-meshes. The search is first conducted on an initial grid of candidate (C, γ) values; if the best parameters lie on the boundary of the grid, the search continues on an expanded grid.
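The grid search with boundary expansion can be sketched as follows; the `evaluate` callback is a hypothetical stand-in for the 5-fold cross-validation accuracy of an SVM trained with a given (C, γ) pair:

```python
import itertools

def grid_search(cs, gammas, evaluate):
    """Exhaustive search over candidate (C, gamma) pairs.

    `evaluate(C, gamma)` stands in for the 5-fold cross-validation
    accuracy of an SVM trained with those parameters (an assumption
    of this sketch).  Returns the best pair and a flag telling the
    caller whether it lies on the grid boundary, in which case the
    grid should be expanded and the search repeated.
    """
    best = max(itertools.product(cs, gammas), key=lambda p: evaluate(*p))
    on_boundary = (best[0] in (cs[0], cs[-1]) or
                   best[1] in (gammas[0], gammas[-1]))
    return best, on_boundary
```

The boundary check matters because an optimum on the grid edge suggests the true optimum may lie outside the candidate range.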
Table IV gives the optimal parameter values for the detection of changes produced in 3D objects by the steganalysis-resistant watermarking method of Yang et al. for various values of K. Tables V and VI provide the best parameter combinations (C, γ) when varying the embedding strength and when increasing the payload, respectively, for the watermarks embedded by the watermarking method of Cho et al., while Table VII provides the best choice of parameters when varying the number of layers used for embedding the payload by the steganographic method of Chao et al.
[Table header: payloads of 16 bits, 32 bits, 48 bits and 64 bits.]
-  Y. Ren, T. Cai, M. Tang, and L. Wang, “AMR steganalysis based on the probability of same pulse position,” IEEE Transactions on Information Forensics and Security, vol. PP, no. 99, pp. 1–11, 2015.
-  D. Yan, R. Wang, X. Yu, and J. Zhu, “Steganalysis for MP3Stego using differential statistics of quantization step,” Digital Signal Processing, vol. 23, no. 4, pp. 1181–1185, 2013.
-  Q. Liu, A. H. Sung, and M. Qiao, “Temporal derivative-based spectrum and mel-cepstrum audio steganalysis,” IEEE Transactions on Information Forensics and Security, vol. 4, no. 3, pp. 359–368, 2009.
-  T. Qiao, F. Retraint, R. Cogranne, and C. Zitzmann, “Steganalysis of JSteg algorithm using hypothesis testing theory,” EURASIP Journal on Information Security, vol. 2015, no. 1, pp. 1–16, 2015.
-  J. Lu, F. Liu, and X. Luo, “Recognizing F5-like stego images from multi-class JPEG stego images,” KSII Transactions on Internet & Information Systems, vol. 8, no. 11, pp. 4153–4169, 2014.
-  Z. Li, Z. Hu, X. Luo, and B. Lu, “Embedding change rate estimation based on ensemble learning,” in Proc. of ACM workshop on Information Hiding and Multimedia Security. ACM, 2013, pp. 77–84.
-  R. Cogranne, C. Zitzmann, L. Fillatre, F. Retraint, I. Nikiforov, and P. Cornu, “A cover image model for reliable steganalysis,” in Proc. of Information Hiding Conf., LNCS, vol. 6958. Springer, 2011, pp. 178–192.
-  G. Gul and F. Kurugollu, “SVD-based universal spatial domain image steganalysis,” IEEE Transactions on Information Forensics and Security, vol. 5, no. 2, pp. 349–353, 2010.
-  K. Wang, H. Zhao, and H. Wang, “Video steganalysis against motion vector-based steganography by adding or subtracting one motion vector value,” IEEE Transactions on Information Forensics and Security, vol. 9, no. 5, pp. 741–751, 2014.
-  Y. Cao, X. Zhao, and D. Feng, “Video steganalysis exploiting motion vector reversion-based features,” IEEE Signal Processing Letters, vol. 19, no. 1, pp. 35–38, 2012.
-  U. Budhia, D. Kundur, and T. Zourntos, “Digital video steganalysis exploiting statistical visibility in the temporal domain,” IEEE Transactions on Information Forensics and Security, vol. 1, no. 4, pp. 502–516, 2006.
-  R. Ohbuchi, H. Masuda, and M. Aono, “Embedding data in 3D models,” in Interactive Distributed Multimedia Systems and Telecommunication Services. Springer, 1997, pp. 1–10.
-  J.-W. Cho, R. Prost, and H.-Y. Jung, “An oblivious watermarking for 3-D polygonal meshes using distribution of vertex norms,” IEEE Transactions on Signal Processing, vol. 55, no. 1, pp. 142–155, 2007.
-  F. Cayre and B. Macq, “Data hiding on 3-D triangle meshes,” IEEE Transactions on Signal Processing, vol. 51, no. 4, pp. 939–949, 2003.
-  M. Luo and A. G. Bors, “Surface-preserving robust watermarking of 3-D shapes,” IEEE Transactions on Image Processing, vol. 20, no. 10, pp. 2813–2826, 2011.
-  A. G. Bors and M. Luo, “Optimized 3D watermarking for minimal surface distortion,” IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 1822–1835, 2013.
-  M.-W. Chao, C.-H. Lin, C.-W. Yu, and T.-Y. Lee, “A high capacity 3D steganography algorithm,” IEEE Transactions on Visualization and Computer Graphics, vol. 15, no. 2, pp. 274–284, 2009.
-  Y. Yang, R. Pintus, H. Rushmeier, and I. Ivrissimtzis, “A 3D steganalytic algorithm and steganalysis-resistant watermarking,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 2, pp. 1002–1013, Feb 2017.
-  A. Bogomjakov, C. Gotsman, and M. Isenburg, “Distortion-free steganography for polygonal meshes,” in Computer Graphics Forum, vol. 27, no. 2. Wiley Online Library, 2008, pp. 637–642.
-  Y. Yang and I. Ivrissimtzis, “Mesh discriminative features for 3D steganalysis,” ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 10, no. 3, pp. 27:1–27:13, 2014.
-  ——, “Polygonal mesh watermarking using Laplacian coordinates,” in Computer Graphics Forum, vol. 29, no. 5, 2010, pp. 1585–1593.
-  Y. Yang, R. Pintus, H. Rushmeier, and I. Ivrissimtzis, “A steganalytic algorithm for 3D polygonal meshes,” in Proc. of IEEE Int. Conf. on Image Processing, 2014, pp. 4782–4786.
-  Z. Li and A. G. Bors, “Selection of robust features for the cover source mismatch problem in 3D steganalysis,” in Proc. of the 23rd Int. Conf. on Pattern Recognition. IEEE, 2016, pp. 4251–4256.
-  V. Itier, W. Puech, and A. G. Bors, “Cryptanalysis aspects in 3-D watermarking,” in Proc. of IEEE Int. Conf. on Image Processing, 2014, pp. 4772–4776.
-  J. Kodovskỳ, J. Fridrich, and V. Holub, “Ensemble classifiers for steganalysis of digital media,” IEEE Transactions on Information Forensics and Security, vol. 7, no. 2, pp. 432–444, 2012.
-  J. J. Fridrich, M. Goljan, and D. Hogea, “Steganalysis of JPEG images: Breaking the F5 algorithm,” in Proc. of Workshop of Information Hiding, LNCS, vol. 2578, 2002, pp. 310–323.
-  J. Kodovsky and J. J. Fridrich, “Calibration revisited,” in Proc. of ACM workshop on Multimedia and Security, 2009, pp. 63–74.
-  V. Schwamberger and M. O. Franz, “Simple algorithmic modifications for improving blind steganalysis performance,” in Proc. of ACM Workshop on Multimedia and Security. ACM, 2010, pp. 225–230.
-  A. G. Bors, “Watermarking mesh-based representations of 3-D objects using local moments,” IEEE Transactions on Image Processing, vol. 15, no. 3, pp. 687–701, 2006.
-  P. R. Alface, B. Macq, and F. Cayre, “Blind and robust watermarking of 3-D models: How to withstand the cropping attack?” in Proc. of IEEE Int. Conf. Image Processing, 2007, pp. 465–468.
-  N. Max, “Weights for computing vertex normals from facet normals,” Journal of Graphics Tools, vol. 4, no. 2, pp. 1–6, 1999.
-  J. Rugis and R. Klette, “A scale invariant surface curvature estimator,” in Advances in Image and Video Technology. Springer, 2006, pp. 138–147.
-  S. Rusinkiewicz, “Estimating curvatures and their derivatives on triangle meshes,” in Proc. of the 2nd Int. Symposium on 3D Data Processing, Visualization and Transmission (3DPVT). IEEE, 2004, pp. 486–493.
-  Z. Li and A. G. Bors, “3D mesh steganalysis using local shape features,” in Proc. of IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), 2016, pp. 2144–2148.
-  W. J. Krzanowski, Principles of Multivariate Analysis. Oxford University Press, 2000.
-  R. O. Duda, P. E. Hart, and D. G. Stork, Pattern classification. John Wiley & Sons, 2012.
-  C. Cortes and V. Vapnik, “Support-vector networks,” Machine learning, vol. 20, no. 3, pp. 273–297, 1995.
-  X. Chen, A. Golovinskiy, and T. Funkhouser, “A benchmark for 3D mesh segmentation,” in ACM Transactions on Graphics, vol. 28, no. 3, 2009, pp. 73:1–73:12.
-  R. Cogranne and J. Fridrich, “Modeling and extending the ensemble classifier for steganalysis of digital images using hypothesis testing theory,” IEEE Transactions on Information Forensics and Security, vol. 10, no. 12, pp. 2627–2642, 2015.
-  C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, pp. 27:1–27:27, 2011, software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
-  M. A. Hall, “Correlation-based feature selection for machine learning,” Ph.D. dissertation, The University of Waikato, 1999.