Spoofing Generalization: When Can't You Trust Proprietary Models?

06/15/2021
by Ankur Moitra, et al.

In this work, we study the computational complexity of determining whether a machine learning model that perfectly fits the training data will generalize to unseen data. In particular, we study the power of a malicious agent whose goal is to construct a model g that fits its training data and nothing else, but is indistinguishable from an accurate model f. We say that g strongly spoofs f if no polynomial-time algorithm can tell them apart. If instead we restrict to algorithms that run in n^c time for some fixed c, we say that g c-weakly spoofs f. Our main results are:

1. Under cryptographic assumptions, strong spoofing is possible.
2. For any c > 0, c-weak spoofing is possible unconditionally.

While the assumption of a malicious agent is an extreme scenario (hopefully, companies training large models are not malicious), we believe it sheds light on the inherent difficulty of blindly trusting large proprietary models or data.
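To make the notion of a model that "fits its training data and nothing else" concrete, here is a minimal toy sketch in Python. It is not the paper's construction (which relies on cryptographic tools); the names f, g, and train_inputs, and the parity rule, are all illustrative assumptions.

```python
import random

def f(x: int) -> int:
    # Stand-in for an "accurate" model: here, the true rule is parity.
    return x % 2

# Hypothetical training set (illustrative values, not from the paper).
train_inputs = [2, 5, 8, 13, 21]
memorized = {x: f(x) for x in train_inputs}

def g(x: int) -> int:
    # "Spoofed" model: agrees with f on every training point, but its
    # off-sample answers are deterministic coin flips seeded by x,
    # carrying no information about the true rule.
    if x in memorized:
        return memorized[x]
    return random.Random(x).randint(0, 1)

# g fits the training data perfectly...
assert all(g(x) == f(x) for x in train_inputs)

# ...yet on fresh inputs it generalizes no better than chance.
for x in [3, 4, 100, 101]:
    print(x, "f:", f(x), "g:", g(x))
```

Of course, this lookup-table g is easy to distinguish from f: just query a few points outside the training set. The point of the results above is that a carefully built g can defeat every polynomial-time distinguisher under cryptographic assumptions (strong spoofing), and every n^c-time distinguisher unconditionally (c-weak spoofing).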


Related research

09/12/2023 · Can large-scale vocoded spoofed data improve speech spoofing countermeasure with a self-supervised front end?
A speech spoofing countermeasure (CM) that discriminates between unseen ...

03/06/2020 · Defense against adversarial attacks on spoofing countermeasures of ASV
Various forefront countermeasure methods for automatic speaker verificat...

11/25/2020 · Whac-A-Mole: Six Years of DNS Spoofing
DNS is important in nearly all interactions on the Internet. All large D...

09/18/2023 · Spoofing attack augmentation: can differently-trained attack models improve generalisation?
A reliable deepfake detector or spoofing countermeasure (CM) should be r...

10/06/2021 · MToFNet: Object Anti-Spoofing with Mobile Time-of-Flight Data
In online markets, sellers can maliciously recapture others' images on d...

05/31/2023 · How to Construct Perfect and Worse-than-Coin-Flip Spoofing Countermeasures: A Word of Warning on Shortcut Learning
Shortcut learning, or `Clever Hans effect` refers to situations where a ...
