Adversarial Attack on Network Embeddings via Supervised Network Poisoning

02/14/2021
by Viresh Gupta, et al.

Learning low-level node embeddings using techniques from network representation learning is useful for solving downstream tasks such as node classification and link prediction. An important consideration in such applications is the robustness of the embedding algorithms against adversarial attacks, which can be examined by perturbing the original network. An efficient perturbation technique can degrade the performance of network embeddings on downstream tasks. In this paper, we study network embedding algorithms from an adversarial point of view and observe the effect of poisoning the network on downstream tasks. We propose VIKING, a supervised network poisoning strategy that outperforms the state-of-the-art poisoning methods by up to 18%. We also extend VIKING to a semi-supervised attack setting and show that it is comparable to its supervised counterpart.
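
To make the attack setting concrete, the sketch below walks through the kind of poisoning evaluation loop the abstract describes: embed a graph, train a downstream node classifier, flip a budget of edges, re-embed, and measure the accuracy drop. Everything in it is a stand-in assumption rather than the paper's method: the embedding is a plain spectral decomposition, the perturbation is a naive random edge-flip baseline (not the supervised VIKING strategy), and the dataset is the Zachary karate club graph.

```python
# Minimal sketch of a poisoning-style evaluation loop. NOT the VIKING
# algorithm: the embedding, the attack, and the dataset are all simple
# stand-ins used only to illustrate the perturb-then-re-embed workflow.
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def spectral_embedding(G, dim=8):
    """Stand-in embedding: eigenvectors of the adjacency matrix for the
    `dim` largest eigenvalues (eigh returns eigenvalues in ascending order)."""
    A = nx.to_numpy_array(G, nodelist=sorted(G.nodes()))
    _, vecs = np.linalg.eigh(A)
    return vecs[:, -dim:]

def downstream_accuracy(Z, y, seed=0):
    """Downstream task: node classification on the embeddings."""
    Xtr, Xte, ytr, yte = train_test_split(Z, y, test_size=0.5, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    return clf.score(Xte, yte)

def random_flip_poisoning(G, budget, seed=0):
    """Naive baseline attack: flip `budget` random node pairs (add the edge
    if absent, remove it if present). A supervised strategy would instead
    pick the flips that most degrade the downstream task."""
    rng = np.random.default_rng(seed)
    H = G.copy()
    nodes = list(H.nodes())
    for _ in range(budget):
        u, v = rng.choice(nodes, size=2, replace=False)
        if H.has_edge(u, v):
            H.remove_edge(u, v)
        else:
            H.add_edge(u, v)
    return H

G = nx.karate_club_graph()
y = np.array([G.nodes[n]["club"] == "Officer" for n in sorted(G.nodes())], dtype=int)

clean_acc = downstream_accuracy(spectral_embedding(G), y)
poisoned = random_flip_poisoning(G, budget=20)
poisoned_acc = downstream_accuracy(spectral_embedding(poisoned), y)
print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```

A supervised attack such as VIKING would replace the random flips with perturbations chosen to maximally hurt the downstream task; the surrounding embed-train-evaluate loop stays the same.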


