Metric Hypertransformers are Universal Adapted Maps

01/31/2022
by Beatrice Acciaio, et al.

We introduce a universal class of geometric deep learning models, called metric hypertransformers (MHTs), capable of approximating any adapted map F:𝒳^ℤ→𝒴^ℤ with approximable complexity, where 𝒳βŠ†ℝ^d and 𝒴 is any suitable metric space, and 𝒳^ℤ (resp. 𝒴^ℤ) denotes the set of all discrete-time paths in 𝒳 (resp. 𝒴). Suitable spaces 𝒴 include various (adapted) Wasserstein spaces, all Fréchet spaces admitting a Schauder basis, and a variety of Riemannian manifolds arising from information geometry. Even in the static case, where f:𝒳→𝒴 is a Hölder map, our results provide the first (quantitative) universal approximation theorem compatible with any such 𝒳 and 𝒴. Our universal approximation theorems are quantitative: the guarantees depend on the regularity of F, the choice of activation function, the metric entropy and diameter of 𝒳, and the regularity of the compact set of paths on which the approximation is performed. Our guiding examples originate from mathematical finance. Notably, the MHT models introduced here can approximate a broad range of stochastic processes' kernels, including solutions to SDEs, many processes with arbitrarily long memory, and functions mapping sequential data to sequences of forward rate curves.
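
Here, adaptedness is the usual causality requirement on maps between path spaces: F:𝒳^ℤ→𝒴^ℤ is adapted if each output coordinate depends only on the input path up to the same time, i.e. for every t∈ℤ, if x_s = x′_s for all s ≤ t then F(x)_t = F(x′)_t. In other words, F may not look into the future of its input, which is the constraint satisfied by solution maps of SDEs and the other sequential examples above.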
