A Surprising Density of Illusionable Natural Speech

06/03/2019
by Melody Y. Guan, et al.

Recent work on adversarial examples has demonstrated that most natural inputs can be perturbed to fool even state-of-the-art machine learning systems. But does this happen for humans as well? In this work, we investigate: what fraction of natural instances of speech can be turned into "illusions" which either alter humans' perception or result in different people having significantly different perceptions? We first consider the McGurk effect, the phenomenon by which adding a carefully chosen video clip to the audio channel affects the viewer's perception of what is said (McGurk and MacDonald, 1976). We obtain empirical estimates that a significant fraction of both words and sentences occurring in natural speech have some susceptibility to this effect. We also learn models for predicting McGurk illusionability. Finally we demonstrate that the Yanny or Laurel auditory illusion (Pressnitzer et al., 2018) is not an isolated occurrence by generating several very different new instances. We believe that the surprising density of illusionable natural speech warrants further investigation, from the perspectives of both security and cognitive science.
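The abstract mentions learned models for predicting McGurk illusionability but does not describe them here. As a purely illustrative sketch, the toy Python snippet below shows the general shape such a predictor could take: hand-made phoneme-level features (e.g., a bilabial onset, since classic McGurk dubbing pairs audio /b/, /p/, /m/ with mismatched video) fed to a simple classifier. The feature set, training data, and model choice are assumptions for illustration, not the features or model used by Guan et al.

```python
# Hypothetical sketch only: a toy predictor of per-word McGurk susceptibility.
# Features, labels, and model are illustrative and NOT from the paper.
from sklearn.linear_model import LogisticRegression

# Classic McGurk stimuli dub audio bilabials with a mismatched video track,
# so a simple toy feature is whether the word begins with a bilabial.
BILABIALS = {"b", "p", "m"}

def features(word: str):
    w = word.lower()
    return [
        1.0 if w[0] in BILABIALS else 0.0,          # bilabial onset
        sum(c in BILABIALS for c in w) / len(w),    # bilabial density
        float(len(w)),                              # word length
    ]

# Hypothetical labels: (word, whether an illusion was elicited in listeners)
train = [("ball", 1), ("mat", 1), ("pack", 1), ("sun", 0), ("tree", 0), ("light", 0)]
X = [features(w) for w, _ in train]
y = [label for _, label in train]

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([features("bake")])[0, 1])  # predicted susceptibility
```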

