Using contrastive learning to improve the performance of steganalysis schemes

03/01/2021
by Yanzhen Ren, et al.

To improve the detection accuracy and generalization of steganalysis, this paper proposes the Steganalysis Contrastive Framework (SCF) based on contrastive learning. The SCF improves the feature representation of steganalysis by maximizing the distance between features of samples from different categories and minimizing the distance between features of samples from the same category. To reduce the computational complexity of the contrastive loss in supervised learning, we design a novel Steganalysis Contrastive Loss (StegCL) based on the equivalence and transitivity of similarity, which eliminates the redundant computation in existing contrastive losses. The experimental results show that the SCF improves the generalization and detection accuracy of existing steganalysis DNNs, with a maximum improvement of 2%, while the training time when using the StegCL is 10% of that when using the contrastive loss in supervised learning.
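To make the objective concrete, the sketch below shows a generic supervised contrastive loss of the kind the SCF builds on: embeddings of same-class samples (cover/cover or stego/stego) are pulled together and embeddings of different-class samples are pushed apart. This is an illustrative PyTorch sketch, not the authors' StegCL; the function name, the temperature value, and the toy data are assumptions made for demonstration only.

import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Generic supervised contrastive loss (illustrative, not the paper's StegCL).

    features: (N, D) embeddings produced by a steganalysis backbone.
    labels:   (N,) integer class labels, e.g. 0 = cover, 1 = stego.
    """
    # Work in cosine-similarity space.
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature            # (N, N) similarity matrix

    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = labels.unsqueeze(0) == labels.unsqueeze(1)  # same-class pairs
    pos_mask = pos_mask & ~self_mask                        # exclude self-pairs

    # Log-softmax of each anchor's similarities over all other samples.
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Negative mean log-probability over the positive pairs of each anchor.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    return loss.mean()

if __name__ == "__main__":
    # Toy usage: 8 embeddings of dimension 128 with binary cover/stego labels.
    feats = torch.randn(8, 128)
    labs = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
    print(supervised_contrastive_loss(feats, labs).item())

The StegCL described above differs in that it exploits the equivalence and transitivity of similarity to prune redundant pairwise terms, which this generic formulation computes in full.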

