State space models (SSMs) have high performance on long sequence modelin...
State space models (SSMs) have demonstrated state-of-the-art sequence mo...
This survey presents a necessarily incomplete (and biased) overview of r...
Linear time-invariant state space models (SSM) are a classical model fro...
Transformers are slow and memory-hungry on long sequences, since the tim...
In this work, we study the problem of computing a tuple's expected multi...
Worst-case optimal join algorithms have so far been studied in two broad...
Overparameterized neural networks generalize well but are expensive to t...
Recent advances in efficient Transformers have exploited either the spar...
Recurrent neural networks (RNNs), temporal convolutions, and neural diff...
Modern neural network architectures use structured linear transformation...
A central problem in learning from sequential data is representing cumul...
In this paper, we initiate a theoretical study of what we call the join ...
In this paper, we prove topology dependent bounds on the number of round...
In this paper we consider the following sparse recovery problem. We have...
Fast linear transforms are ubiquitous in machine learning, including the...
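As a concrete instance of the fast linear transforms this entry refers to (an illustration, not the paper's method), the FFT applies the dense n x n DFT matrix in O(n log n) time; the sketch below checks that the fast routine and the explicit matrix-vector product agree.

```python
import numpy as np

# Illustration: the discrete Fourier transform is a linear map given by the
# dense matrix F with entries exp(-2*pi*i*j*k/n); the FFT computes the same
# map without ever materializing F.
n = 8
x = np.random.default_rng(0).standard_normal(n)

j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * j * k / n)   # dense DFT matrix

dense = F @ x          # O(n^2) matrix-vector product
fast = np.fft.fft(x)   # same linear map in O(n log n)
close = np.allclose(dense, fast)
```

The same pattern, a structured matrix admitting a fast matrix-vector product, covers the DCT, Hadamard, and convolution transforms mentioned in this line of work.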
The low displacement rank (LDR) framework for structured matrices repres...
We revisit the classical problem of exact inference on probabilistic gra...
Arıkan's exciting discovery of polar codes has provided an altogether n...