Your Room is not Private: Gradient Inversion Attack for Deep Q-Learning

06/15/2023
by Miao Li, et al.

The prominence of embodied Artificial Intelligence (AI), which empowers robots to navigate, perceive, and engage within virtual environments, has attracted significant attention, owing to remarkable advances in computer vision and large language models. Privacy emerges as a pivotal concern in embodied AI, as robots access substantial personal information. However, the issue of privacy leakage in embodied AI tasks, particularly in relation to decision-making algorithms, has not received adequate consideration in research. This paper addresses that gap by proposing an attack on the Deep Q-Learning algorithm that uses gradient inversion to reconstruct states, actions, and Q-values. Gradients are a natural attack vector because commonly employed federated learning techniques optimize models using only gradients computed from private user data, without storing or transmitting the data itself to public servers. Nevertheless, these gradients contain enough information to expose that private data. To validate our approach, we conduct experiments on the AI2THOR simulator and evaluate our algorithm on active perception, a prevalent task in embodied AI. The experimental results demonstrate that our method successfully recovers all information from the data across all 120 room layouts.
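For context, gradient inversion attacks of this kind typically recover private inputs by optimizing dummy data until its gradient matches the gradient shared during federated learning. The sketch below illustrates that general idea on a toy Q-network. It is a minimal illustration in the style of Deep Leakage from Gradients (Zhu et al., 2019), not the authors' released implementation; every name in it (QNet, state_dim, the squared-error loss toward Q-value targets) is an assumption.

```python
# Hypothetical sketch: inverting a DQN gradient to recover a private
# state and its Q-value targets. Names and dimensions are assumptions.
import torch
import torch.nn as nn

state_dim, n_actions = 64, 4  # assumed sizes for illustration

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions))

    def forward(self, s):
        return self.net(s)

def dqn_loss(model, state, target_q):
    # Standard squared-error regression of Q(s, .) toward target Q-values.
    return ((model(state) - target_q) ** 2).mean()

model = QNet()

# --- Victim side: the gradient shared with the server for one
# --- private transition (the only thing the attacker observes).
true_state = torch.rand(1, state_dim)    # private observation
true_target = torch.rand(1, n_actions)   # private Q-value targets
true_grads = torch.autograd.grad(
    dqn_loss(model, true_state, true_target), model.parameters())

# --- Attacker side: optimize dummy data so its gradient matches
# --- the observed one, thereby reconstructing the private data.
dummy_state = torch.rand(1, state_dim, requires_grad=True)
dummy_target = torch.rand(1, n_actions, requires_grad=True)
opt = torch.optim.LBFGS([dummy_state, dummy_target])

def closure():
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        dqn_loss(model, dummy_state, dummy_target),
        model.parameters(), create_graph=True)
    # Distance between the dummy gradient and the leaked gradient.
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(30):
    opt.step(closure)

print("state reconstruction error:",
      (dummy_state - true_state).norm().item())
```

On a small network like this, the gradient-matching objective usually drives the dummy state close to the true one; the paper's contribution is demonstrating that this style of attack extends to Deep Q-Learning on realistic embodied-AI observations.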


