Analysis of the Scalability of a Deep-Learning Network for Steganography "Into the Wild"
Since the emergence of deep learning and its adoption in steganalysis, most reference articles have kept using small to medium-sized CNNs and have trained them on relatively small databases. Benchmarks and comparisons between different deep-learning-based steganalysis algorithms, more precisely CNNs, are therefore made on small to medium databases. This is done without knowing: 1. whether the ranking, under a criterion such as accuracy, remains the same when the database is larger, 2. whether the efficiency of CNNs collapses when the training database is orders of magnitude larger, 3. the minimum size required for a database or a CNN in order to obtain a better result than a random guesser. In this paper, after a thorough discussion of the observed behaviour of CNNs as a function of their size and of the database size, we confirm that the power law governing the error also holds in steganalysis, and this in a borderline case, i.e. with a medium-sized network trained on a big, constrained, and very diverse database.
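As a rough, illustrative sketch (not taken from the paper), the kind of error power law referred to here can be examined by fitting err(n) = a * n^(-b) + c to detection-error measurements obtained at several training-set sizes; the sizes, error values, and initial guesses below are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    # Test error modelled as a shifted power law of the training-set size n:
    # err(n) = a * n^(-b) + c, where c is an irreducible error floor.
    return a * np.power(n, -b) + c

# Hypothetical (training-set size, detection error-rate) measurements.
sizes = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
errors = np.array([0.42, 0.37, 0.31, 0.27, 0.24])

# Fit the power-law parameters to the measured points.
params, _ = curve_fit(power_law, sizes, errors, p0=[1.0, 0.2, 0.1], maxfev=10000)
a, b, c = params
print(f"fit: err(n) ~ {a:.3f} * n^(-{b:.3f}) + {c:.3f}")

# Extrapolate to larger databases to visualise the predicted diminishing returns.
for n in (3e6, 1e7):
    print(f"n = {int(n):>9}: predicted error ~ {power_law(n, *params):.3f}")
```

If the fitted curve keeps tracking held-out measurements as the database grows, that is consistent with the power-law behaviour discussed in the abstract; a sharp departure from the fit would instead signal a regime change for the network at that scale.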