NPGLM: A Non-Parametric Method for Temporal Link Prediction

06/21/2017
by Sina Sajadmanesh, et al.

In this paper, we address the problem of temporal link prediction in information networks: predicting how long it will take for a link to appear in the future, given features extracted from the current network snapshot. To this end, we introduce a probabilistic non-parametric approach, called the "Non-Parametric Generalized Linear Model" (NP-GLM), which infers the hidden underlying probability distribution of a link's advent time given its features. We then present a learning algorithm for NP-GLM and an inference method to answer time-related queries. Extensive experiments conducted on both synthetic data and the real-world Sina Weibo social network demonstrate the effectiveness of NP-GLM in solving the temporal link prediction problem compared with competitive baselines.
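The abstract describes learning the conditional distribution of a link's advent time from its features and then answering time-related queries from that distribution. As a hedged illustration of this general idea (not the paper's actual NP-GLM learning algorithm, whose details are not given here), the sketch below uses an accelerated-failure-time-style toy model: log advent time depends linearly on features, the feature weights are fit by least squares, and the noise distribution is estimated empirically (non-parametrically) from the residuals. All names, features, and parameters are illustrative assumptions.

```python
import math
import random

random.seed(0)

def simulate(n=2000, w_true=(1.0, -0.5)):
    """Toy data: each candidate link has 2 features; its advent time is
    log-linear in the features plus small Gaussian noise (an assumption
    made for this sketch, not taken from the paper)."""
    data = []
    for _ in range(n):
        x = (random.random(), random.random())
        mu = w_true[0] * x[0] + w_true[1] * x[1]
        t = math.exp(mu + random.gauss(0.0, 0.1))  # link advent time
        data.append((x, t))
    return data

def fit(data):
    """Least-squares fit of log t ~ w.x (2 features, no intercept),
    solved via the 2x2 normal equations."""
    s00 = s01 = s11 = b0 = b1 = 0.0
    for (x0, x1), t in data:
        y = math.log(t)
        s00 += x0 * x0; s01 += x0 * x1; s11 += x1 * x1
        b0 += x0 * y;   b1 += x1 * y
    det = s00 * s11 - s01 * s01
    return ((s11 * b0 - s01 * b1) / det,
            (s00 * b1 - s01 * b0) / det)

data = simulate()
w = fit(data)

# Empirical (non-parametric) residual distribution of log-time noise:
res = sorted(math.log(t) - (w[0] * x[0] + w[1] * x[1]) for x, t in data)
median_noise = res[len(res) // 2]

def predict_median_time(x):
    """Answer a time-related query: the median predicted advent time
    for a link with features x."""
    return math.exp(w[0] * x[0] + w[1] * x[1] + median_noise)
```

With enough samples the recovered weights approach the simulated ones, and the empirical residuals stand in for the unspecified noise distribution; a query such as `predict_median_time((0.5, 0.5))` then reads the median advent time directly off the fitted model.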


