A Study on Extracting Named Entities from Fine-tuned vs. Differentially Private Fine-tuned BERT Models

12/07/2022
by   Andor Diera, et al.

Privacy-preserving deep learning is an emerging field in machine learning that aims to mitigate the privacy risks arising from the use of deep neural networks. One such risk is training data extraction from language models that have been trained on datasets containing personal and privacy-sensitive information. In our study, we investigate the extent of named entity memorization in fine-tuned BERT models. We use single-label text classification as a representative downstream task and employ three different fine-tuning setups in our experiments, including one with Differential Privacy (DP). We generate a large number of text samples from the fine-tuned BERT models using a custom sequential sampling strategy with two prompting strategies. We search these samples for named entities and check whether they also occur in the fine-tuning datasets. We experiment with two benchmark datasets in the domains of emails and blogs. We show that applying DP has a strong effect on the text generation capabilities of BERT. Furthermore, we show that a fine-tuned BERT does not generate more named entities specific to the fine-tuning dataset than a BERT model that is only pre-trained. This suggests that BERT is unlikely to emit personal or privacy-sensitive named entities. Overall, our results are important for understanding to what extent BERT-based services are prone to training data extraction attacks.
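The core measurement described above — extracting named entities from generated samples and checking them against the fine-tuning data — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names are invented for this example, and a naive capitalized-token heuristic stands in for a real NER model (the study itself would use a proper NER system and BERT-generated samples).

```python
import re

def extract_entities(text):
    # Naive stand-in for a real NER model: treat runs of
    # capitalized words as candidate named entities.
    return set(re.findall(r"\b(?:[A-Z][a-z]+)(?:\s[A-Z][a-z]+)*\b", text))

def novel_entity_leakage(generated_samples, fine_tuning_texts, pretraining_entities):
    """Measure which fine-tuning-specific entities appear in generated text.

    Entities present in the fine-tuning dataset but NOT in the pre-training
    data are the ones whose emission would indicate memorization of the
    fine-tuning set rather than of the original pre-training corpus.
    """
    fine_tune_entities = set()
    for doc in fine_tuning_texts:
        fine_tune_entities |= extract_entities(doc)
    # Entities specific to the fine-tuning dataset.
    target = fine_tune_entities - pretraining_entities

    emitted = set()
    for sample in generated_samples:
        emitted |= extract_entities(sample)

    leaked = emitted & target
    ratio = len(leaked) / max(len(target), 1)
    return ratio, leaked
```

For instance, if a generated sample mentions "Alice Smith" and that name occurs only in the fine-tuning data, it would be counted as leaked, whereas an entity already known from pre-training (e.g. "Berlin") would not:

```python
ratio, leaked = novel_entity_leakage(
    ["Alice Smith wrote to Bob about the meeting."],
    ["Alice Smith manages the Berlin office."],
    pretraining_entities={"Berlin"},
)
# leaked == {"Alice Smith"}, ratio == 1.0
```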


