Differentially Private Language Models Benefit from Public Pre-training
Language modeling is a keystone task in natural language processing. When training a language model on sensitive information, differential privacy (DP) allows us to quantify the degree to which our private data is protected. However, training algorithms that enforce differential privacy often lead to degradation in model quality. We study the feasibility of learning a language model that is simultaneously high-quality and privacy-preserving by fine-tuning a public base model on a private corpus. We find that DP fine-tuning boosts the performance of language models in the private domain, making the training of such models practical.
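To make the recipe concrete, below is a minimal sketch of public pre-training followed by differentially private fine-tuning, written with PyTorch and Opacus. This is not the paper's code: the tiny LSTM language model, the checkpoint path "public_lm.pt", the synthetic stand-in for the private corpus, and all hyperparameters are assumptions made purely for illustration.

# Minimal sketch (assumptions throughout): fine-tune a publicly pre-trained
# language model on private data with DP-SGD via Opacus.
import os

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine
from opacus.layers import DPLSTM  # Opacus-compatible LSTM (per-sample grads)

VOCAB = 1000


class TinyLM(nn.Module):
    """Toy next-token language model standing in for the public base model."""

    def __init__(self, vocab=VOCAB, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = DPLSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)


model = TinyLM()

# Step 1: start from weights pre-trained on a *public* corpus
# (hypothetical checkpoint path; skipped if the file is absent).
ckpt = "public_lm.pt"
if os.path.exists(ckpt):
    model.load_state_dict(torch.load(ckpt))

# Stand-in for the private corpus: random token-id sequences.
tokens = torch.randint(0, VOCAB, (256, 33))
private_loader = DataLoader(TensorDataset(tokens), batch_size=16)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# Step 2: wrap model, optimizer, and loader so training runs DP-SGD
# (per-example gradient clipping plus calibrated Gaussian noise).
privacy_engine = PrivacyEngine()
model, optimizer, private_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=private_loader,
    noise_multiplier=1.0,  # illustrative value
    max_grad_norm=1.0,     # per-example clipping bound
)

# Step 3: an ordinary fine-tuning loop on the private data; the DP machinery
# lives inside the wrapped optimizer and data loader.
for (batch,) in private_loader:
    optimizer.zero_grad()
    logits = model(batch[:, :-1])  # predict the next token at each position
    loss = criterion(logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
    loss.backward()
    optimizer.step()

# Report the privacy budget spent so far for a chosen delta.
epsilon = privacy_engine.get_epsilon(delta=1e-5)
print(f"Achieved ({epsilon:.2f}, 1e-5)-DP after one pass over the private data")

The key design point the sketch mirrors is that only the fine-tuning step touches private data, so only that step needs DP-SGD; the public pre-training step uses ordinary, non-private optimization.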