Superposition-Assisted Stochastic Optimization for Hawkes Processes
We consider the learning of multi-agent Hawkes processes, a model comprising multiple Hawkes processes that share endogenous impact functions but have different exogenous intensities. Within the framework of stochastic maximum likelihood estimation, we analyze the associated risk bound. We further consider the superposition of Hawkes processes within the model and show that, under certain conditions, this operation tightens the risk bound. Accordingly, we propose a stochastic optimization algorithm assisted by a diversity-driven superposition strategy, which achieves better learning results with improved convergence properties. The effectiveness of the proposed method is verified on synthetic data, and its potential for solving the cold-start problem of sequential recommendation systems is demonstrated on real-world data.
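To make the superposition operation concrete, the sketch below is a minimal illustration, not the paper's algorithm: it merges two event sequences into one superposed sequence and evaluates a univariate Hawkes log-likelihood with an exponential kernel. The kernel choice, the parameter names (mu, alpha, beta), and the helper functions are assumptions made only for this illustration.

```python
import numpy as np

def hawkes_loglik(events, T, mu, alpha, beta):
    """Log-likelihood of a univariate Hawkes process on [0, T] with
    exponential kernel phi(t) = alpha * beta * exp(-beta * t)."""
    events = np.sort(np.asarray(events))
    loglik = 0.0
    r = 0.0      # recursive sum: r_i = sum_{t_j < t_i} exp(-beta (t_i - t_j))
    prev = None
    for t in events:
        if prev is not None:
            r = np.exp(-beta * (t - prev)) * (1.0 + r)
        loglik += np.log(mu + alpha * beta * r)   # log-intensity at each event
        prev = t
    # compensator: integral of the intensity over [0, T]
    loglik -= mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - events)))
    return loglik

def superpose(seq_a, seq_b):
    """Superposition of two point processes: merge and sort their timestamps."""
    return np.sort(np.concatenate([seq_a, seq_b]))

# Toy usage: two sequences observed on the same horizon T
T = 10.0
seq_a = np.array([0.5, 1.2, 3.4, 7.8])
seq_b = np.array([0.9, 2.1, 5.5])
merged = superpose(seq_a, seq_b)
print(hawkes_loglik(merged, T, mu=0.4, alpha=0.5, beta=1.0))
```

The superposed sequence is simply the union of the two event streams; how such merged sequences are selected and used to tighten the risk bound during stochastic optimization is the subject of the paper itself.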