Revisiting the Threat Space for Vision-based Keystroke Inference Attacks

09/12/2020
by John Lim, et al.

A vision-based keystroke inference attack is a side-channel attack in which an attacker uses an optical device to record users on their mobile devices and infer their keystrokes. The threat space for these attacks has been studied in the past, but we argue that its defining characteristics, namely the strength of the attacker, are outdated. Previous works do not consider adversaries whose vision systems are trained with deep neural networks, because such models require large amounts of training data and curating such a dataset is expensive. To address this, we create a large-scale synthetic dataset that simulates the attack scenario of a keystroke inference attack. We show that pre-training on synthetic data, followed by transfer learning on real-life data, improves the performance of our deep learning models. This indicates that the models learn rich, meaningful representations from our synthetic data, and that pre-training on synthetic data can help overcome the scarcity of real-life data for vision-based keystroke inference attacks. In this work, we focus on single-keypress classification, where the input is a frame of a keypress and the output is the predicted key. We achieve an accuracy of 95.6% by pre-training a CNN on our synthetic data and then training on a small set of real-life data within an adversarial domain adaptation framework. Source code for the simulator: https://github.com/jlim13/keystroke-inference-attack-synthetic-dataset-generator-
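The abstract describes the recipe (pre-train a CNN on synthetic keypress frames, then adapt it to a small real-life set with adversarial domain adaptation) but gives no implementation details. As a rough illustration only, the PyTorch sketch below shows one common way to realize such a pipeline, using DANN-style gradient reversal (Ganin et al., 2016); the names KeypressNet, GradReverse, and adapt, the 26-key output space, the architecture, and all hyperparameters are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch: a CNN keypress classifier adapted from synthetic
# to real frames with a DANN-style domain discriminator. Illustrative
# only; not the paper's actual model or training procedure.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates and scales gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None


class KeypressNet(nn.Module):
    """Maps a single keypress frame to a key label; 26 keys is an assumption."""

    def __init__(self, num_keys=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.key_head = nn.Linear(64 * 16, num_keys)  # predicted key
        self.domain_head = nn.Linear(64 * 16, 2)      # synthetic vs. real

    def forward(self, x, lambd=0.0):
        feats = self.features(x)
        # Reversed gradients push features toward domain invariance.
        return self.key_head(feats), self.domain_head(GradReverse.apply(feats, lambd))


def adapt(model, synth_loader, real_loader, epochs=10, lambd=0.1):
    """Supervised key loss on labeled synthetic frames plus a domain loss on both.

    The paper's small real-life set is labeled, so a supervised term on real
    key labels could be added; this sketch uses real frames only for the
    domain loss to keep the adversarial objective explicit.
    """
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for (xs, ys), (xr, _) in zip(synth_loader, real_loader):
            key_s, dom_s = model(xs, lambd)   # labeled synthetic batch
            _, dom_r = model(xr, lambd)       # real batch: domain loss only
            loss = (ce(key_s, ys)
                    + ce(dom_s, torch.zeros(xs.size(0), dtype=torch.long))
                    + ce(dom_r, torch.ones(xr.size(0), dtype=torch.long)))
            opt.zero_grad()
            loss.backward()
            opt.step()
```

The gradient-reversal layer lets a single backward pass train the key classifier normally while driving the feature extractor to fool the domain discriminator, which is one standard way to implement the adversarial domain adaptation the abstract refers to.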

