De-anonymization Attacks on Neuroimaging Datasets

08/08/2019
by Vikram Ravindra, et al.

Advances in imaging technologies, combined with inexpensive storage, have led to an explosion in the volume of publicly available neuroimaging datasets. Effective analyses of these images hold the potential for uncovering mechanisms that govern the functioning of the human brain and for understanding various neurological diseases and disorders. The potential significance of these studies notwithstanding, a growing concern relates to protecting the privacy and confidentiality of subjects who participate in them. In this paper, we present a de-anonymization attack rooted in the innate uniqueness of the structure and function of the human brain. We show that the attack reveals not only the identity of an individual, but also the task they are performing and their efficacy in performing that task. Our attack relies on novel matrix analysis techniques that extract discriminating features from neuroimages. These features correspond to individual-specific signatures that can be matched across datasets to yield highly accurate identification. We present data preprocessing, signature extraction, and matching techniques that are computationally inexpensive and scale to large datasets. We discuss implications of the attack and the challenges associated with defending against such attacks.
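The abstract does not describe the specific matrix-analysis or matching procedures used in the attack. The sketch below is only an illustration of what a signature-extraction and cross-dataset matching pipeline of this kind could look like, assuming a correlation-based functional-connectome signature and cosine-similarity matching; the function names, shapes, and the matching criterion are assumptions for illustration and are not taken from the paper.

```python
# Hypothetical sketch of connectome-style fingerprint matching; not the authors' method.
import numpy as np

def extract_signature(timeseries: np.ndarray) -> np.ndarray:
    """Compute a connectivity signature from one subject's ROI time series.

    timeseries: array of shape (n_timepoints, n_rois), e.g. parcellated fMRI.
    Returns the vectorized upper triangle of the ROI-by-ROI correlation matrix.
    """
    corr = np.corrcoef(timeseries.T)          # (n_rois, n_rois) Pearson correlations
    iu = np.triu_indices_from(corr, k=1)      # off-diagonal upper-triangle indices
    return corr[iu]                           # flatten to a signature vector

def match_subjects(signatures_a: np.ndarray, signatures_b: np.ndarray) -> np.ndarray:
    """For each signature in dataset A, return the index of its best match in dataset B.

    signatures_a: (n_subjects_a, n_features); signatures_b: (n_subjects_b, n_features).
    Matching here uses cosine similarity between signature vectors.
    """
    a = signatures_a / np.linalg.norm(signatures_a, axis=1, keepdims=True)
    b = signatures_b / np.linalg.norm(signatures_b, axis=1, keepdims=True)
    similarity = a @ b.T                      # (n_a, n_b) cosine-similarity matrix
    return similarity.argmax(axis=1)          # best-matching subject in B for each row of A
```

In this illustrative setup, a de-anonymization attempt succeeds for a subject whenever the highest-similarity signature in the second dataset belongs to the same individual; any individual-specific, session-stable feature could play the role of the correlation-based signature shown here.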


