Adversarial Attacks on Node Embeddings

09/04/2018
by Aleksandar Bojcheski, et al.

The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks. However, despite the proliferation of such methods, there is currently no study of their robustness to adversarial attacks. We provide the first adversarial vulnerability analysis of the widely used family of methods based on random walks. We derive efficient adversarial perturbations that poison the network structure and degrade both the quality of the embeddings and the performance on downstream tasks. We further show that our attacks are transferable -- they generalize to many models -- and remain successful even when the attacker is restricted in the actions they can take.
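The abstract's central idea -- perturbing (poisoning) the graph structure so that the learned node embeddings degrade -- can be illustrated with a deliberately tiny sketch. The code below is not the paper's attack, which derives efficient perturbations tailored to random-walk embeddings; as a stand-in it uses a low-rank spectral embedding and a brute-force search over single edge flips, and all function names are hypothetical.

```python
import numpy as np


def rank_k_loss(adj, k=2):
    """Reconstruction error of the best rank-k spectral approximation of the
    adjacency matrix; a higher value means the k-dimensional embedding
    captures the graph structure less well."""
    vals, vecs = np.linalg.eigh(adj)
    order = np.argsort(-np.abs(vals))[:k]          # k largest-magnitude eigenvalues
    recon = (vecs[:, order] * vals[order]) @ vecs[:, order].T
    return np.linalg.norm(adj - recon)


def worst_single_edge_flip(adj, k=2):
    """Brute-force search for the single edge flip (addition or removal)
    that increases the embedding loss the most."""
    n = adj.shape[0]
    best_flip, best_loss = None, rank_k_loss(adj, k)
    for i in range(n):
        for j in range(i + 1, n):
            pert = adj.copy()
            pert[i, j] = pert[j, i] = 1.0 - pert[i, j]   # flip edge (i, j)
            loss = rank_k_loss(pert, k)
            if loss > best_loss:
                best_flip, best_loss = (i, j), loss
    return best_flip, best_loss


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    upper = np.triu((rng.random((8, 8)) < 0.3).astype(float), 1)
    adj = upper + upper.T                              # small random undirected graph
    print("loss on the clean graph:", rank_k_loss(adj))
    print("most damaging edge flip:", worst_single_edge_flip(adj))
```

The brute-force search scales quadratically in the number of node pairs and recomputes an eigendecomposition per candidate flip, which is exactly the cost that efficient attack derivations such as the one described in the abstract aim to avoid.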

Related research

02/14/2021 - Adversarial Attack on Network Embeddings via Supervised Network Poisoning
Learning low-level node embeddings using techniques from network represe...

05/21/2018 - Adversarial Attacks on Classification Models for Graphs
Deep learning models for graphs have achieved strong performance for the...

09/16/2022 - A Systematic Evaluation of Node Embedding Robustness
Node embedding methods map network nodes to low dimensional vectors that...

08/28/2023 - Adversarial Attacks on Foundational Vision Models
Rapid progress is being made in developing large, pretrained, task-agnos...

05/21/2018 - Adversarial Attacks on Neural Networks for Graph Data
Deep learning models for graphs have achieved strong performance for the...

03/11/2020 - How Powerful Are Randomly Initialized Pointcloud Set Functions?
We study random embeddings produced by untrained neural set functions, a...

10/13/2021 - Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based Vertical Federated Learning
Graph neural network (GNN) models have achieved great success on graph r...
