De-Anonymizing Text by Fingerprinting Language Generation

06/17/2020
by Zhen Sun, et al.

Components of machine learning systems are not (yet) perceived as security hotspots. Secure coding practices, such as ensuring that no execution paths depend on confidential inputs, have not yet been adopted by ML developers. We initiate the study of code security of ML systems by investigating how nucleus sampling—a popular approach for generating text, used for applications such as auto-completion—unwittingly leaks texts typed by users. Our main result is that the series of nucleus sizes for many natural English word sequences is a unique fingerprint. We then show how an attacker can infer typed text by measuring these fingerprints via a suitable side channel (e.g., cache access times), explain how this attack could help de-anonymize anonymous texts, and discuss defenses.
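
For intuition, here is a minimal sketch (not the paper's code) of nucleus (top-p) sampling and the fingerprint it induces. The function names and the `next_token_probs` model call are hypothetical stand-ins for a real language-model API, and p=0.95 is just an illustrative threshold:

import numpy as np

def nucleus_size(probs, p=0.95):
    """Size of the top-p nucleus: the smallest set of highest-probability
    tokens whose cumulative probability reaches p."""
    sorted_probs = np.sort(probs)[::-1]
    cumulative = np.cumsum(sorted_probs)
    # searchsorted finds the first index at which the cumulative mass
    # reaches p; that token is included in the nucleus, hence the +1.
    return int(min(np.searchsorted(cumulative, p) + 1, len(probs)))

def nucleus_sample(probs, p=0.95, rng=None):
    """Sample a token id from the renormalized top-p nucleus. The amount
    of work done here depends on the nucleus size, which is exactly the
    data-dependent behavior that can leak through a cache or timing
    side channel."""
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(probs)[::-1]
    k = nucleus_size(probs, p)
    nucleus = order[:k]
    weights = probs[nucleus] / probs[nucleus].sum()
    return int(rng.choice(nucleus, p=weights))

def fingerprint(token_ids, next_token_probs, p=0.95):
    """Series of nucleus sizes observed while a model processes a token
    sequence; per the paper, this series is unique for many natural
    English word sequences."""
    sizes = []
    for i in range(1, len(token_ids)):
        probs = next_token_probs(token_ids[:i])  # assumed model call
        sizes.append(nucleus_size(probs, p))
    return sizes

One natural mitigation, in the spirit of the secure-coding practice the abstract mentions, would be to make the sampler's memory-access pattern independent of the nucleus size, e.g., by always touching the full sorted distribution; the paper itself discusses defenses.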

Related research

Security and Machine Learning in the Real World (07/13/2020)
Machine learning (ML) models deployed in many safety- and business-criti...

CoLI-Machine Learning Approaches for Code-mixed Language Identification at the Word Level in Kannada-English Texts (11/17/2022)
The task of automatically identifying a language used in a given text is...

An Advanced Approach for Choosing Security Patterns and Checking their Implementation (07/07/2020)
This paper tackles the problems of generating concrete test cases for te...

The Natural Auditor: How To Tell If Someone Used Your Words To Train Their Model (11/01/2018)
To help enforce data-protection regulations such as GDPR and detect unau...

Declarative Machine Learning Systems (07/16/2021)
In the last years machine learning (ML) has moved from a academic endeav...

Using n-aksaras to model Sanskrit and Sanskrit-adjacent texts (01/30/2023)
Despite – or perhaps because of – their simplicity, n-grams, or contiguo...
