Transformer Embeddings of Irregularly Spaced Events and Their Participants

by Chenghao Yang, et al.

The neural Hawkes process (Mei & Eisner, 2017) is a generative model of irregularly spaced sequences of discrete events. To handle complex domains with many event types, Mei et al. (2020a) further consider a setting in which each event in the sequence updates a deductive database of facts (via domain-specific pattern-matching rules); future events are then conditioned on the database contents. They show how to convert such a symbolic system into a neuro-symbolic continuous-time generative model, in which each database fact and possible event has a time-varying embedding that is derived from its symbolic provenance. In this paper, we modify both models, replacing their recurrent LSTM-based architectures with flatter attention-based architectures (Vaswani et al., 2017), which are simpler and more parallelizable. This does not appear to hurt our accuracy, which is comparable to or better than that of the original models as well as (where applicable) previous attention-based methods (Zuo et al., 2020; Zhang et al., 2020a).
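To make the abstract's idea concrete, here is a minimal sketch of how an attention-based continuous-time event model can compute per-type event intensities at a query time by attending over the embedded history. This is an illustrative toy, not the paper's actual architecture: the dimensions, the sinusoidal temporal encoding, the single attention head, and all parameter names (`E`, `W_q`, `W_k`, `W_v`, `w_out`) are assumptions made for the example.

```python
import numpy as np

def softplus(x):
    # smooth positive map, keeps intensities strictly > 0
    return np.log1p(np.exp(x))

rng = np.random.default_rng(0)

K = 3   # number of event types (hypothetical toy setting)
D = 8   # embedding dimension

# In a real model these would be learned; here they are random stand-ins.
E = rng.normal(size=(K, D))       # event-type embeddings
W_q = rng.normal(size=(D, D))     # query projection
W_k = rng.normal(size=(D, D))     # key projection
W_v = rng.normal(size=(D, D))     # value projection
w_out = rng.normal(size=(D, K))   # hidden state -> per-type scores

def temporal_encoding(t, d=D):
    # sinusoidal encoding of a continuous timestamp (Vaswani et al. style)
    i = np.arange(d // 2)
    angles = t / (10000 ** (2 * i / d))
    return np.concatenate([np.sin(angles), np.cos(angles)])

def intensities(history, t):
    """Intensity lambda_k(t) for each event type k at query time t.

    history: list of (event_type, timestamp) pairs with timestamps < t.
    """
    # embed each past event: type embedding + temporal encoding
    H = np.stack([E[k] + temporal_encoding(s) for k, s in history])
    q = temporal_encoding(t) @ W_q      # query built from the current time
    keys, vals = H @ W_k, H @ W_v
    scores = keys @ q / np.sqrt(D)      # scaled dot-product attention
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    hidden = attn @ vals                # attention-weighted history summary
    return softplus(hidden @ w_out)     # one positive intensity per type

history = [(0, 0.5), (2, 1.3), (1, 2.0)]
lam = intensities(history, t=2.7)      # array of K positive intensities
```

Because the query depends only on the timestamp and the keys/values depend only on past events, intensities at many candidate times can be computed in parallel over the same history, which is the parallelism advantage the abstract attributes to replacing recurrence with attention.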








Code Repositories


Codebase for Attentive Neural Hawkes Process (A-NHP) and Attentive Neural Datalog Through Time (A-NDTT)
