Differential Privacy and Natural Language Processing to Generate Contextually Similar Decoy Messages in Honey Encryption Scheme

10/29/2020
by   Kunjal Panchal, et al.

Honey Encryption is an approach to encrypting messages under low min-entropy keys such as weak passwords, OTPs, PINs, or credit card numbers. The resulting ciphertext, when decrypted with any incorrect key, yields a plausible-looking but bogus plaintext called a "honey message". However, the current techniques for producing these decoy plaintexts do not fully model human language. A gibberish, random assortment of words is not enough to fool an attacker; such a decoy will not be convincing, whether or not the attacker knows something about the genuine source. In this paper, I focus on plaintexts that are non-numeric, informative messages. To fool the attacker into believing that a decoy message could actually have come from a certain source, we need to capture the empirical and contextual properties of the language: there should be no linguistic difference between the real and fake messages, while revealing nothing about the structure of the real message. I employ natural language processing and generalized differential privacy to solve this problem. In particular, I use machine learning methods such as keyword extraction, context classification, bag-of-words, word embeddings, and transformers for text processing to model privacy for text documents, and I prove the security of the approach under ε-differential privacy.
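To make the honey-encryption setting concrete, the sketch below shows a toy distribution-transforming encoder (DTE) over a tiny hand-picked message space: every key, correct or not, decrypts to some message from that space, so a wrong guess yields a decoy rather than an error. This is only a minimal illustration of the general DTE idea, not the paper's construction; the message list, the 16-bit seed space, and the helper names (ToyDTE-style functions, _prg) are all assumptions made for the example.

    import hashlib
    import secrets

    # Toy message space: every key, right or wrong, decrypts to *some*
    # message from this list, so an incorrect key yields a decoy.
    MESSAGES = ["transfer 500 to alice", "meet at pier 7 at noon",
                "password reset code 4821", "invoice 1093 approved"]

    def _prg(key: bytes, nonce: bytes, nbytes: int) -> bytes:
        # Deterministic keystream derived from a low-entropy key (sketch only).
        out, counter = b"", 0
        while len(out) < nbytes:
            out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:nbytes]

    def dte_encode(msg: str) -> int:
        # Map a message to a uniformly random 16-bit seed inside its bucket.
        bucket = 2**16 // len(MESSAGES)
        return MESSAGES.index(msg) * bucket + secrets.randbelow(bucket)

    def dte_decode(seed: int) -> str:
        # Map any 16-bit seed back to a plausible message.
        bucket = 2**16 // len(MESSAGES)
        return MESSAGES[min(seed // bucket, len(MESSAGES) - 1)]

    def he_encrypt(key: bytes, msg: str):
        nonce = secrets.token_bytes(8)
        pad = int.from_bytes(_prg(key, nonce, 2), "big")
        return nonce, dte_encode(msg) ^ pad

    def he_decrypt(key: bytes, nonce: bytes, ct: int) -> str:
        pad = int.from_bytes(_prg(key, nonce, 2), "big")
        return dte_decode(ct ^ pad)

    nonce, ct = he_encrypt(b"1234", "meet at pier 7 at noon")
    print(he_decrypt(b"1234", nonce, ct))   # correct key -> real message
    print(he_decrypt(b"0000", nonce, ct))   # wrong key -> plausible decoy

The paper's contribution is precisely about making the decoded decoys linguistically convincing, which the fixed message list above does not attempt.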
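The generalized differential-privacy step can be pictured separately. A standard way to privatize text under metric (dχ-) differential privacy, which the abstract's "generalized differential privacy" refers to in the literature, is to add distance-calibrated noise to each word's embedding and snap the result back to the nearest vocabulary word. The sketch below uses a made-up two-dimensional vocabulary and an illustrative epsilon value; the function names and embedding values are assumptions for the example, not the paper's actual model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny illustrative vocabulary with 2-D "embeddings" (assumed values).
    VOCAB = {
        "wire":   np.array([0.9, 0.1]),
        "send":   np.array([0.8, 0.2]),
        "cancel": np.array([0.1, 0.9]),
        "delay":  np.array([0.2, 0.8]),
    }

    def multivariate_laplace(epsilon: float, dim: int) -> np.ndarray:
        # Noise for metric DP: uniform direction, radius ~ Gamma(dim, 1/epsilon).
        direction = rng.normal(size=dim)
        direction /= np.linalg.norm(direction)
        radius = rng.gamma(shape=dim, scale=1.0 / epsilon)
        return radius * direction

    def privatize(word: str, epsilon: float) -> str:
        # Perturb the word's embedding, then return the nearest vocabulary word.
        noisy = VOCAB[word] + multivariate_laplace(epsilon, dim=2)
        return min(VOCAB, key=lambda w: np.linalg.norm(VOCAB[w] - noisy))

    # Lower epsilon -> more substitutions, i.e. decoys that drift in wording
    # while staying in the same embedding neighbourhood (similar context).
    print([privatize("wire", epsilon=2.0) for _ in range(5)])

Because nearby embeddings tend to share context, the substituted words keep the decoy "contextually similar" to the source while the noise provides the ε-differential-privacy guarantee mentioned above.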


