Using Neural Generative Models to Release Synthetic Twitter Corpora with Reduced Stylometric Identifiability of Users

06/03/2016
by Alexander G. Ororbia II, et al.

We present a method for generating synthetic versions of Twitter data using neural generative models. The goal is to protect individuals in the source data from stylometric re-identification attacks while still releasing data that carries research value. To generate tweet corpora that maintain user-level word distributions, our proposed approach augments powerful neural language models with local parameters that weight user-specific inputs. We compare our work to two standard text data protection methods: redaction and iterative translation. We evaluate the three methods on risk and utility, defining risk in terms of stylometric re-identification models and utility in terms of two general language measures and two common text analysis tasks. We find that neural models are able to significantly lower risk relative to previous methods, at the cost of some utility. More importantly, we show that the risk-utility trade-off depends on how the neural model's logits (the unscaled pre-activation values of the output layer) are scaled. This work presents promising results for a new tool that addresses the problem of privacy in free text and enables sharing of social media data in an ethically responsible, privacy-respecting way.
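As a rough illustration of the two technical ingredients mentioned above, the sketch below shows a user-conditioned language model (a recurrent network with a per-user embedding standing in for the "local parameters that weight user-specific inputs") and a sampling loop in which the logits are divided by a temperature before the softmax. Raising the temperature flattens the output distribution, which is one plausible reading of how logit scaling could shift the risk-utility trade-off. All class names, layer sizes, and hyperparameters here are hypothetical and not the authors' implementation.

```python
# Minimal sketch (PyTorch), assuming an LSTM language model conditioned on a
# per-user embedding; names and sizes are illustrative only.
import torch
import torch.nn as nn

class UserConditionedLM(nn.Module):
    def __init__(self, vocab_size, n_users, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb_dim)
        # "Local" per-user parameters that weight user-specific inputs.
        self.user_emb = nn.Embedding(n_users, emb_dim)
        self.rnn = nn.LSTM(2 * emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, user_ids, state=None):
        # Concatenate token embeddings with the user's embedding at each step.
        u = self.user_emb(user_ids).unsqueeze(1).expand(-1, tokens.size(1), -1)
        x = torch.cat([self.tok_emb(tokens), u], dim=-1)
        h, state = self.rnn(x, state)
        return self.out(h), state  # unscaled logits


def sample_tweet(model, user_id, bos_id, eos_id, max_len=40, temperature=1.5):
    """Sample a synthetic tweet; temperature > 1 flattens the distribution."""
    tokens = torch.tensor([[bos_id]])
    user = torch.tensor([user_id])
    state, generated = None, []
    for _ in range(max_len):
        logits, state = model(tokens, user, state)
        # Scale the logits before the softmax, then sample the next token.
        probs = torch.softmax(logits[:, -1] / temperature, dim=-1)
        nxt = torch.multinomial(probs, 1)
        if nxt.item() == eos_id:
            break
        generated.append(nxt.item())
        tokens = nxt
    return generated
```

In this reading, lower temperatures produce text closer to each user's empirical word distribution (higher utility, higher stylometric risk), while higher temperatures blur user-specific style (lower risk, lower utility).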
