Differential Privacy and Natural Language Processing to Generate Contextually Similar Decoy Messages in Honey Encryption Scheme

by Kunjal Panchal, et al.

Honey Encryption is an approach to encrypting messages under low min-entropy keys such as weak passwords, OTPs, PINs, or credit card numbers. The ciphertext it produces, when decrypted with any incorrect key, yields plausible-looking but bogus plaintexts called "honey messages". However, current techniques for producing these decoy plaintexts do not fully model human language. A gibberish, random assortment of words is not enough to fool an attacker: it will be neither acceptable nor convincing, whether or not the attacker has prior information about the genuine source. In this paper, I focus on plaintexts that are non-numeric, informative messages. To fool the attacker into believing that a decoy message could actually come from a certain source, we need to capture the empirical and contextual properties of the language; that is, there should be no linguistic difference between the real and a fake message, while revealing nothing about the structure of the real message. I employ natural language processing and generalized differential privacy to solve this problem. In particular, I focus on machine learning methods such as keyword extraction, context classification, bag-of-words models, word embeddings, and transformers for text processing to model privacy for text documents. I then prove the security of this approach under ε-differential privacy.
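As a rough illustration of how generalized (metric) differential privacy can be combined with word embeddings to produce contextually similar decoy words, the sketch below perturbs a word's embedding with noise calibrated to a privacy parameter ε and decodes back to the nearest vocabulary word. The tiny embedding table and function name are illustrative assumptions, not the paper's implementation; real systems would use pretrained embeddings such as GloVe or word2vec.

```python
import numpy as np

# Toy embedding table (assumption: a real system would load
# pretrained vectors such as GloVe or word2vec).
EMBEDDINGS = {
    "transfer": np.array([0.90, 0.10]),
    "payment":  np.array([0.85, 0.20]),
    "deposit":  np.array([0.80, 0.05]),
    "holiday":  np.array([0.10, 0.90]),
    "vacation": np.array([0.15, 0.85]),
}

def privatize_word(word, epsilon, rng):
    """Metric-DP word replacement: add noise to the word's embedding,
    then decode to the nearest word in the vocabulary.

    Smaller epsilon -> more noise -> replacements drift further from
    the original word while staying in plausible semantic regions.
    """
    vec = EMBEDDINGS[word]
    d = vec.shape[0]
    # Noise for d_X-privacy in Euclidean space: a uniformly random
    # direction scaled by a Gamma(d, 1/epsilon)-distributed magnitude.
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    magnitude = rng.gamma(shape=d, scale=1.0 / epsilon)
    noisy = vec + magnitude * direction
    # Nearest-neighbour decoding back into the vocabulary.
    return min(EMBEDDINGS, key=lambda w: np.linalg.norm(EMBEDDINGS[w] - noisy))
```

With a large ε the noise magnitude is small, so the original word is usually returned; with a small ε, semantically nearby words like "payment" or "deposit" become likely substitutes for "transfer", which is the behavior a decoy generator wants.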


