Adversarial attacks on Copyright Detection Systems

06/17/2019
by Parsa Saadatpanah, et al.

It is well-known that many machine learning models are susceptible to so-called "adversarial attacks," in which an attacker evades a classifier by making small perturbations to inputs. This paper discusses how industrial copyright detection tools, which serve a central role on the web, are susceptible to adversarial attacks. We discuss a range of copyright detection systems, and why they are particularly vulnerable to attacks. These vulnerabilities are especially apparent for neural-network-based systems. As a proof of concept, we describe a well-known music identification method, and implement this system in the form of a neural net. We then attack this system using simple gradient methods. Adversarial music created this way successfully fools industrial systems, including the AudioTag copyright detector and YouTube's Content ID system. Our goal is to raise awareness of the threats posed by adversarial examples in this space, and to highlight the importance of hardening copyright detection systems to attacks.
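To make the described attack surface concrete, the sketch below shows how a Shazam-style spectrogram-peak fingerprinter can be written as a differentiable PyTorch computation and then attacked with simple gradient steps, in the spirit of the abstract. This is our illustrative sketch, not the authors' released code: the functions fingerprint() and attack(), the max-pooling peak surrogate, and every hyperparameter are assumptions.

# Minimal sketch, not the authors' implementation: fingerprint(), attack(),
# and all hyperparameters below are illustrative assumptions.
import torch
import torch.nn.functional as F

def fingerprint(audio, n_fft=1024, hop=512, pool=33):
    """Differentiable stand-in for spectrogram-peak fingerprinting.

    Classical fingerprinters keep local maxima of the log spectrogram;
    max pooling plays that role here so gradients reach the waveform.
    """
    window = torch.hann_window(n_fft, device=audio.device)
    spec = torch.stft(audio, n_fft=n_fft, hop_length=hop,
                      window=window, return_complex=True).abs()
    logspec = torch.log1p(spec).unsqueeze(0)            # (1, freq, time)
    peaks = F.max_pool2d(logspec, pool, stride=1,
                         padding=pool // 2)             # local-maximum map
    # Soft indicator of "this bin is a spectral peak" (close to 1 at peaks).
    return torch.sigmoid(50.0 * (logspec - peaks + 1e-3))

def attack(audio, steps=100, eps=0.05, lr=1e-3):
    """Gradient attack: find a small (L-infinity bounded) perturbation
    that pushes the audio's fingerprint away from the original's."""
    original = fingerprint(audio).detach()
    delta = torch.zeros_like(audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Negative distance: minimizing it pushes the fingerprints apart.
        loss = -F.mse_loss(fingerprint(audio + delta), original)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)    # project into the L-infinity ball
    return (audio + delta).detach()

# Usage: adversarial = attack(torch.randn(44100))  # 1 s of audio at 44.1 kHz

The key design point is the differentiable surrogate: once peak extraction is expressed through max pooling, any off-the-shelf gradient method can move the perturbed audio's fingerprint away from the original's while keeping the waveform perceptually close.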


Related research

06/01/2022 · On the reversibility of adversarial attacks
Adversarial attacks modify images with perturbations that change the pre...

11/19/2020 · Adversarial Threats to DeepFake Detection: A Practical Perspective
Facially manipulated images and videos or DeepFakes can be used maliciou...

05/24/2022 · Defending a Music Recommender Against Hubness-Based Adversarial Attacks
Adversarial attacks can drastically degrade performance of recommenders ...

11/18/2022 · Integrated Space Domain Awareness and Communication System
Space has been reforming and this evolution brings new threats that, tog...

03/05/2020 · Detection and Recovery of Adversarial Attacks with Injected Attractors
Many machine learning adversarial attacks find adversarial samples of a ...

11/08/2019 · Adversarial Attacks on GMM i-vector based Speaker Verification Systems
This work investigates the vulnerability of Gaussian Mixture Model (GMM...

01/05/2022 · ROOM: Adversarial Machine Learning Attacks Under Real-Time Constraints
Advances in deep learning have enabled a wide range of promising applica...
