Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods

11/04/2021
by Peru Bhardwaj, et al.

Despite the widespread use of Knowledge Graph Embeddings (KGE), little is known about the security vulnerabilities that might disrupt their intended behaviour. We study data poisoning attacks against KGE models for link prediction. These attacks craft adversarial additions or deletions at training time to cause model failure at test time. To select adversarial deletions, we propose to use the model-agnostic instance attribution methods from Interpretable Machine Learning, which identify the training instances that are most influential to a neural model's predictions on test instances. We use these influential triples as adversarial deletions. We further propose a heuristic method to replace one of the two entities in each influential triple to generate adversarial additions. Our experiments show that the proposed strategies outperform the state-of-the-art data poisoning attacks on KGE models and improve the MRR degradation due to the attacks by up to 62%.
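As a rough illustration of the approach described in the abstract, the sketch below scores training triples against a target test triple using a simple gradient-dot-product attribution measure under a toy DistMult scorer, takes the top-scoring triples as adversarial deletions, and corrupts one entity of each to form adversarial additions. The model choice, dimensions, the specific attribution measure, and the random entity-replacement heuristic are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch (not the authors' implementation) of instance-attribution-based
# poisoning: rank training triples by gradient dot product with a target triple,
# delete the most influential ones, and corrupt one entity to create additions.
import torch

torch.manual_seed(0)
n_entities, n_relations, dim = 100, 10, 32
ent = torch.nn.Embedding(n_entities, dim)
rel = torch.nn.Embedding(n_relations, dim)
params = list(ent.parameters()) + list(rel.parameters())

def score(triple):
    # DistMult score: sum over dimensions of e_s * e_r * e_o
    s, r, o = triple
    return (ent.weight[s] * rel.weight[r] * ent.weight[o]).sum()

def grad_vector(triple):
    # Flattened gradient of the triple's score w.r.t. all model parameters.
    grads = torch.autograd.grad(score(triple), params)
    return torch.cat([g.reshape(-1) for g in grads])

# Toy training set and a target test triple whose prediction the attacker
# wants to degrade (all indices here are synthetic).
train_triples = [(i % n_entities, i % n_relations, (i * 7) % n_entities)
                 for i in range(50)]
target_triple = (3, 2, 21)
g_target = grad_vector(target_triple)

# Gradient dot product as a stand-in for the instance attribution methods:
# higher value = training triple is more influential for the target triple.
influence = [(torch.dot(grad_vector(t), g_target).item(), t) for t in train_triples]
influence.sort(reverse=True)

k = 5
adversarial_deletions = [t for _, t in influence[:k]]

# Heuristic additions: replace the object entity of each influential triple
# with a random entity (one possible stand-in for the paper's heuristic).
adversarial_additions = [(s, r, int(torch.randint(n_entities, (1,))))
                         for s, r, _ in adversarial_deletions]

print("deletions:", adversarial_deletions)
print("additions:", adversarial_additions)
```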


Related research

11/11/2021  Poisoning Knowledge Graph Embeddings via Relation Inference Patterns
We study the problem of generating data poisoning attacks against Knowle...

05/21/2018  Adversarial Attacks on Neural Networks for Graph Data
Deep learning models for graphs have achieved strong performance for the...

08/21/2023  Spear and Shield: Adversarial Attacks and Defense Methods for Model-Based Link Prediction on Continuous-Time Dynamic Graphs
Real-world graphs are dynamic, constantly evolving with new interactions...

03/19/2021  Attribution of Gradient Based Adversarial Attacks for Reverse Engineering of Deceptions
Machine Learning (ML) algorithms are susceptible to adversarial attacks ...

09/30/2022  Adversarial Robustness of Representation Learning for Knowledge Graphs
Knowledge graphs represent factual knowledge about the world as relation...

03/23/2022  An Empirical Study of Memorization in NLP
A recent study by Feldman (2020) proposed a long-tail theory to explain ...

08/21/2018  Are You Tampering With My Data?
We propose a novel approach towards adversarial attacks on neural networ...
