
De-Anonymizing Text by Fingerprinting Language Generation

06/17/2020
by Zhen Sun et al.

Components of machine learning systems are not (yet) perceived as security hotspots. Secure coding practices, such as ensuring that no execution paths depend on confidential inputs, have not yet been adopted by ML developers. We initiate the study of code security of ML systems by investigating how nucleus sampling—a popular approach for generating text, used for applications such as auto-completion—unwittingly leaks texts typed by users. Our main result is that the series of nucleus sizes for many natural English word sequences is a unique fingerprint. We then show how an attacker can infer typed text by measuring these fingerprints via a suitable side channel (e.g., cache access times), explain how this attack could help de-anonymize anonymous texts, and discuss defenses.
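
To make the leak concrete, the sketch below implements nucleus (top-p) sampling over a toy next-token distribution and returns the nucleus size alongside the sampled token. The function name `nucleus_sample` and the example distributions are our own illustration, not the paper's code; the point is that the nucleus size is data-dependent, so the sequence of sizes across a typed phrase can act as a fingerprint.

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    """Nucleus (top-p) sampling: keep the smallest set of tokens whose
    cumulative probability covers mass p, then sample from that set.

    The size of this set (the "nucleus size") depends on the shape of
    the model's next-token distribution, which in turn depends on the
    text typed so far -- this data dependence is the leak the abstract
    describes.
    """
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]           # tokens by descending probability
    cumulative = np.cumsum(probs[order])
    k = int(np.searchsorted(cumulative, p) + 1)  # nucleus size: tokens needed to cover p
    nucleus = order[:k]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()  # renormalize within nucleus
    token = rng.choice(nucleus, p=nucleus_probs)
    return token, k   # k is the observable fingerprint component

# Toy illustration (hypothetical distributions, not the paper's model):
# a flat distribution (uncertain context) needs many tokens to cover
# mass p, while a peaked one (predictable context) needs few, so the
# per-word sequence of k values forms a fingerprint of the typed text.
flat   = np.full(100, 0.01)                      # uniform over 100 tokens
peaked = np.array([0.7, 0.25] + [0.05 / 98] * 98)
_, k_flat   = nucleus_sample(flat, p=0.9)
_, k_peaked = nucleus_sample(peaked, p=0.9)
print(k_flat, k_peaked)   # e.g., roughly 90 vs. 2
```

An attacker who can observe `k` at each step (e.g., because the number of tokens processed inside the sampling loop shows up in cache access times) recovers the fingerprint without ever seeing the text itself.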
