Increasing Adversarial Uncertainty to Scale Private Similarity Testing

by Yiqing Hua, et al.

Social media and other platforms rely on automated detection of abusive content to help combat disinformation, harassment, and abuse. One common approach is to check user content for similarity against a server-side database of problematic items. However, this method fundamentally endangers user privacy. Instead, we target client-side detection, notifying only users when such matches occur to warn them against abusive content. Our solution is based on privacy-preserving similarity testing. Existing approaches rely on expensive cryptographic protocols that do not scale well to large databases and may sacrifice the correctness of the matching. To contend with this challenge, we propose and formalize the concept of similarity-based bucketization (SBB). With SBB, a client reveals a small amount of information to a database-holding server so that it can generate a bucket of potentially similar items. The bucket is small enough for efficient application of privacy-preserving protocols for similarity. To analyze the privacy risk of the revealed information, we introduce a framework for measuring an adversary's ability to infer a predicate about the client input with good confidence. We develop a practical SBB protocol for image content, and evaluate its client privacy guarantee with real-world social media data. We then combine SBB with various similarity protocols, showing that SBB provides a speedup of at least 29x on large-scale databases, while retaining correctness of over 95%.
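The bucketization step described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual protocol: the fingerprint function, the 8-bit coarse key, and all names here are hypothetical stand-ins (a real deployment would use a similarity-preserving perceptual hash such as PDQ for images, and the final matching step would run under a privacy-preserving similarity protocol rather than in the clear).

```python
import hashlib

BUCKET_BITS = 8  # hypothetical: number of coarse bits the client reveals

def embed(item: str) -> int:
    # Stand-in for a perceptual hash: map content to a 64-bit fingerprint.
    # (SHA-256 is NOT similarity-preserving; it only makes this sketch runnable.)
    return int.from_bytes(hashlib.sha256(item.encode()).digest()[:8], "big")

def bucket_key(fingerprint: int) -> int:
    # The client reveals only the top BUCKET_BITS bits of its fingerprint.
    # Many distinct inputs share each key, which is the source of the
    # adversarial uncertainty the paper's privacy analysis quantifies.
    return fingerprint >> (64 - BUCKET_BITS)

def server_bucket(database: list[str], key: int) -> list[str]:
    # Server side: return every database item whose coarse key matches.
    # The bucket is a small subset of the database, cheap enough to feed
    # into an expensive privacy-preserving similarity test.
    return [item for item in database if bucket_key(embed(item)) == key]

# Client side: derive the coarse key, fetch the bucket, then run the
# similarity protocol only against bucket members (omitted here).
db = ["imgA", "imgB", "imgC", "query"]
key = bucket_key(embed("query"))
bucket = server_bucket(db, key)
assert "query" in bucket          # a matching item always lands in its own bucket
assert len(bucket) <= len(db)     # the bucket never exceeds the database
```

With a similarity-preserving fingerprint, near-duplicates of the query would share the same coarse key and therefore land in the same bucket, so the downstream protocol only pays for bucket-sized inputs rather than database-sized ones.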







VirtualIdentity: Privacy-Preserving User Profiling

User profiling from user generated content (UGC) is a common practice th...

Understanding the Tradeoffs in Client-Side Privacy for Speech Recognition

Existing approaches to ensuring privacy of user speech data primarily fo...

PrivateFetch: Scalable Catalog Delivery in Privacy-Preserving Advertising

In order to preserve the possibility of an Internet that is free at the ...

Privacy-Preserving Biometric Matching Using Homomorphic Encryption

Biometric matching involves storing and processing sensitive user inform...

Privacy-Preserving Identification via Layered Sparse Code Design: Distributed Servers and Multiple Access Authorization

We propose a new computationally efficient privacy-preserving identifica...

Secure and Utility-Aware Data Collection with Condensed Local Differential Privacy

Local Differential Privacy (LDP) is popularly used in practice for priva...

FAIRY: A Framework for Understanding Relationships between Users' Actions and their Social Feeds

Users increasingly rely on social media feeds for consuming daily inform...