Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release

02/16/2021
by   Liam Fowl, et al.

Large organizations such as social media companies continually release data, for example user images. At the same time, these organizations leverage their massive corpora of released data to train proprietary models that give them an edge over their competitors. These two behaviors can be in conflict, as an organization wants to prevent competitors from using its own data to replicate the performance of its proprietary models. We solve this problem by developing a data poisoning method by which publicly released data can be minimally modified to prevent others from training models on it. Moreover, our method can be used in an online fashion so that companies can protect their data in real time as they release it. We demonstrate the success of our approach on ImageNet classification and on facial recognition.
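The abstract does not spell out the perturbation procedure, but the core constraint it describes, that released data is "minimally modified", is typically enforced by keeping each poisoned sample inside a small L-infinity ball around the original. The sketch below illustrates that idea generically with a toy logistic surrogate model: the `poison_batch` function, its step sizes, and the surrogate itself are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def poison_batch(x, y, w, eps=0.05, steps=10, lr=0.02):
    """Generic sketch of bounded data poisoning (NOT the paper's method).

    Each input is perturbed by ascending the input-gradient of a fixed
    surrogate logistic model's training loss, then projected back into an
    L-infinity ball of radius eps around the original sample, so the
    released data stays visually close to the clean data.
    x: (n, d) inputs; y: (n,) binary labels; w: (d,) surrogate weights.
    """
    x_p = x.copy()
    for _ in range(steps):
        logits = x_p @ w
        p = 1.0 / (1.0 + np.exp(-logits))          # sigmoid predictions
        grad = (p - y)[:, None] * w[None, :]       # d(BCE loss)/d(x)
        x_p += lr * np.sign(grad)                  # ascend the loss
        x_p = np.clip(x_p, x - eps, x + eps)       # enforce the eps bound
    return x_p
```

The projection step (`np.clip` against `x - eps` and `x + eps`) is what makes the modification "minimal": no pixel moves more than `eps` from its released value, regardless of how many optimization steps are taken.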
