Automatic Ensemble Learning for Online Influence Maximization

11/25/2019
by Xiaojin Zhang, et al.

We consider the problem of selecting a seed set to maximize the expected number of influenced nodes in a social network, referred to as the influence maximization (IM) problem. We assume that the topology of the social network is given, while the influence probabilities on the edges are unknown. In order to learn the influence probabilities and simultaneously maximize the influence spread, we consider the trade-off between exploiting the current estimates of the influence probabilities to secure a certain influence spread and exploring more nodes to learn the influence probabilities better. This exploitation-exploration trade-off is the core issue in the multi-armed bandit (MAB) problem. If we regard the influence spread as the reward, the IM problem can be reduced to a combinatorial multi-armed bandit problem. At each round, the learner selects a limited number of seed nodes in the social network, and the influence then spreads over the network according to the true influence probabilities. The learner observes the activation status of an edge if and only if its start node is influenced, which is referred to as edge-level semi-bandit feedback. Two classical bandit algorithms, Thompson Sampling and Epsilon-Greedy, are used to solve this combinatorial problem. To make these two algorithms more robust, we use an automatic ensemble learning strategy that combines an exploration strategy with an exploitation strategy. The ensemble algorithm is self-adaptive in that the probability of selecting each base algorithm is adjusted according to its historical performance. Experimental evaluation illustrates the effectiveness of this automatically adjusted hybridization of exploration and exploitation algorithms.
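The sketch below is a minimal, illustrative rendering of the setup described in the abstract, not the paper's exact algorithm: an adaptive ensemble that, at each round, picks either a Thompson Sampling or an Epsilon-Greedy estimate of the unknown edge probabilities, selects seeds with a simple greedy heuristic, observes edge-level semi-bandit feedback under a toy independent-cascade model, and re-weights the two base algorithms by their historical reward. The graph, constants, and helper names (TRUE_P, select_seeds, simulate) are assumptions made for illustration.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy directed graph: edge -> true influence probability (unknown to the learner).
TRUE_P = {(0, 1): 0.6, (0, 2): 0.4, (1, 3): 0.5, (2, 3): 0.3, (3, 4): 0.7, (1, 4): 0.2}
NODES = {u for e in TRUE_P for u in e}
K = 1      # seed-set size
EPS = 0.1  # exploration rate for Epsilon-Greedy

# Per-edge activation statistics shared by both base algorithms (Beta(1, 1) prior).
alpha = defaultdict(lambda: 1)  # observed activations
beta = defaultdict(lambda: 1)   # observed non-activations

def estimate(edge, algo):
    """Edge-probability estimate under the chosen base algorithm."""
    if algo == "ts":                      # Thompson Sampling: sample from the posterior
        return random.betavariate(alpha[edge], beta[edge])
    if random.random() < EPS:             # Epsilon-Greedy: explore with a random guess
        return random.random()
    return alpha[edge] / (alpha[edge] + beta[edge])  # exploit the empirical mean

def select_seeds(algo):
    """Greedy heuristic: rank nodes by the sum of estimated outgoing probabilities."""
    score = defaultdict(float)
    for (u, v) in TRUE_P:
        score[u] += estimate((u, v), algo)
    return set(sorted(NODES, key=lambda n: score[n], reverse=True)[:K])

def simulate(seeds):
    """Independent-cascade spread; edges are observed once their start node is influenced."""
    active, frontier, feedback = set(seeds), list(seeds), []
    while frontier:
        u = frontier.pop()
        for (a, v), p in TRUE_P.items():
            if a != u:
                continue
            fired = random.random() < p
            feedback.append(((a, v), fired))       # edge-level semi-bandit feedback
            if fired and v not in active:
                active.add(v)
                frontier.append(v)
    return len(active), feedback

# Adaptive ensemble: pick a base algorithm with probability proportional
# to its historical average reward (influence spread).
total_reward = {"ts": 1.0, "eps": 1.0}
counts = {"ts": 1, "eps": 1}
for t in range(200):
    avg = {a: total_reward[a] / counts[a] for a in counts}
    algo = "ts" if random.random() < avg["ts"] / (avg["ts"] + avg["eps"]) else "eps"
    seeds = select_seeds(algo)
    reward, feedback = simulate(seeds)
    for edge, fired in feedback:                   # update edge statistics
        alpha[edge] += fired
        beta[edge] += 1 - fired
    total_reward[algo] += reward
    counts[algo] += 1

print({a: round(total_reward[a] / counts[a], 2) for a in counts})
```

Running the loop prints the average spread achieved under each base algorithm; the ensemble gradually favors whichever one has performed better on this toy instance.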


