ToupleGDD: A Fine-Designed Solution of Influence Maximization by Deep Reinforcement Learning

by Tiantian Chen, et al.

Online social platforms have become increasingly popular, and the dissemination of information on social networks has attracted wide attention from both industry and academia. The Influence Maximization (IM) problem, which aims to select a small subset of nodes with maximum influence on a network, has been extensively studied. Since computing the influence spread of a given seed set is #P-hard, state-of-the-art methods, including heuristic and approximation algorithms, face great difficulties with theoretical guarantees, time efficiency, generalization, etc., which makes them unable to adapt to large-scale networks and more complex applications. Motivated by the latest achievements of Deep Reinforcement Learning (DRL) in artificial intelligence and other fields, many works have focused on exploiting DRL to solve combinatorial optimization problems. Inspired by this, in this paper we propose a novel end-to-end DRL framework, ToupleGDD, to address the IM problem; it incorporates three coupled graph neural networks for network embedding and double deep Q-networks for parameter learning. Previous efforts to solve the IM problem with DRL trained their models on subgraphs of the whole network and then tested on the whole graph, which makes their performance unstable across different networks. In contrast, our model is trained on several small randomly generated graphs and tested on completely different networks, and it obtains results very close to those of state-of-the-art methods. In addition, our model is trained with a small budget yet performs well under various large budgets at test time, showing strong generalization ability. Finally, we conduct extensive experiments on synthetic and realistic datasets, and the results demonstrate the effectiveness and superiority of our model.
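The #P-hardness mentioned above is why influence spread is typically estimated by Monte Carlo simulation rather than computed exactly. The following is a minimal illustrative sketch (not the paper's method; the function name, graph representation, and uniform propagation probability `p` are assumptions for the example) of estimating spread under the standard Independent Cascade model:

```python
import random

def ic_spread(graph, seeds, p=0.1, trials=1000, rng=None):
    """Estimate the expected influence spread of `seeds` under the
    Independent Cascade (IC) model via Monte Carlo simulation.
    `graph` maps each node to a list of its out-neighbours."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    # each edge out of a newly activated node fires
                    # independently with probability p
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

# Toy 4-node network: 0 -> 1 -> 2 and 0 -> 3, with p = 0.5.
# Expected spread of seed {0} is 1 + 0.5 + 0.25 + 0.5 = 2.25.
g = {0: [1, 3], 1: [2], 2: [], 3: []}
print(ic_spread(g, {0}, p=0.5, trials=5000))
```

Even this toy estimator needs thousands of simulations per seed set, which is exactly the cost that heuristic, sketch-based, and learning-based approaches try to avoid on large networks.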

