Using contrastive learning to improve the performance of steganalysis schemes

by Yanzhen Ren, et al.

To improve the detection accuracy and generalization of steganalysis, this paper proposes the Steganalysis Contrastive Framework (SCF) based on contrastive learning. The SCF improves the feature representation of steganalysis by maximizing the distance between features of samples from different categories and minimizing the distance between features of samples from the same category. To reduce the computational complexity of the contrastive loss in supervised learning, we design a novel Steganalysis Contrastive Loss (StegCL) based on the equivalence and transitivity of similarity. The StegCL eliminates the redundant computation in existing contrastive losses. The experimental results show that the SCF improves the generalization and detection accuracy of existing steganalysis DNNs: the maximum improvement is 2%, and the training time with the StegCL is 10% of that with the contrastive loss in supervised learning.
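The abstract describes the standard supervised contrastive objective that the SCF builds on: pull features of same-category samples together and push features of different-category samples apart. The sketch below is not the paper's StegCL (which additionally removes redundant pair computations via equivalence and transitivity of similarity); it is a minimal, hedged illustration of the underlying SupCon-style loss on toy feature vectors, with all names and values chosen for the example.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def sup_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss (SupCon-style), the kind of objective
    the SCF uses: each anchor's same-label samples act as positives,
    all other samples as negatives. This is the baseline formulation,
    not the paper's optimized StegCL."""
    n = len(features)
    total, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        # Denominator sums over every other sample in the batch.
        denom = sum(math.exp(cosine_sim(features[i], features[j]) / temperature)
                    for j in range(n) if j != i)
        for j in positives:
            numer = math.exp(cosine_sim(features[i], features[j]) / temperature)
            total += -math.log(numer / denom)
            count += 1
    return total / count

# Toy batch: two "cover" and two "stego" feature vectors (illustrative only).
feats = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]]
labels = [0, 0, 1, 1]
loss = sup_contrastive_loss(feats, labels)
```

Well-separated classes yield a low loss; assigning labels that cut across the two clusters raises it, which is exactly the gradient signal that sharpens the feature representation. The StegCL's contribution, per the abstract, is cutting the redundant pairwise terms in this computation.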



