Connecting and Comparing Language Model Interpolation Techniques

08/26/2019
by Ernest Pusateri et al.

In this work, we uncover a theoretical connection between two language model interpolation techniques, count merging and Bayesian interpolation. We compare these techniques, along with linear interpolation, in three scenarios with abundant training data per component model. Consistent with prior work, we show that both count merging and Bayesian interpolation outperform linear interpolation. We include the first (to our knowledge) published comparison of count merging and Bayesian interpolation, showing that the two techniques perform similarly. Finally, we argue that other considerations make Bayesian interpolation the preferred approach in most circumstances.
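
To make the distinction between the three weighting schemes concrete, here is a minimal Python sketch. The BigramLM class, the toy corpora, and the unigram estimate of the history likelihood p_i(h) are illustrative inventions, not the paper's setup; only the weighting formulas follow the standard definitions of the three techniques: linear interpolation uses fixed weights lambda_i, count merging weights each component by beta_i * c_i(h) (a scaled count of the history in that component's training data), and Bayesian interpolation weights each component by the posterior p(i | h), proportional to its prior times the likelihood it assigns the history.

```python
from collections import Counter


class BigramLM:
    """Toy add-one-smoothed bigram LM standing in for a component model.
    (Illustrative choice; real component models would be full n-gram LMs.)"""

    def __init__(self, tokens, vocab):
        self.vocab = vocab
        self.unigrams = Counter(tokens)
        self.bigrams = Counter(zip(tokens, tokens[1:]))
        self.total = len(tokens)

    def prob(self, word, history):
        # p_i(w | h), add-one smoothed over the shared vocabulary.
        return (self.bigrams[(history, word)] + 1) / (
            self.unigrams[history] + len(self.vocab))

    def history_count(self, history):
        # c_i(h): occurrences of the history in this model's training data.
        return self.unigrams[history]

    def history_prob(self, history):
        # p_i(h): crude unigram estimate of the history's likelihood,
        # used for Bayesian interpolation's history-dependent weights.
        return (self.unigrams[history] + 1) / (self.total + len(self.vocab))


def linear_interp(models, lambdas, word, history):
    # Fixed, history-independent mixture weights lambda_i.
    return sum(lam * m.prob(word, history) for lam, m in zip(lambdas, models))


def count_merging(models, betas, word, history):
    # Weights proportional to scaled history counts: beta_i * c_i(h).
    raw = [b * m.history_count(history) for b, m in zip(betas, models)]
    z = sum(raw)
    if z == 0:  # history unseen everywhere: fall back to uniform weights
        raw, z = [1.0] * len(models), float(len(models))
    return sum(r / z * m.prob(word, history) for r, m in zip(raw, models))


def bayesian_interp(models, priors, word, history):
    # Weights are posteriors p(i | h), proportional to prior_i * p_i(h).
    raw = [p * m.history_prob(history) for p, m in zip(priors, models)]
    z = sum(raw)
    return sum(r / z * m.prob(word, history) for r, m in zip(raw, models))


if __name__ == "__main__":
    vocab = {"the", "cat", "dog", "runs", "sleeps"}
    lm_a = BigramLM("the cat sleeps the cat runs".split(), vocab)
    lm_b = BigramLM("the dog runs the dog sleeps the dog runs".split(), vocab)
    models = [lm_a, lm_b]

    print("linear:       ", linear_interp(models, [0.5, 0.5], "runs", "cat"))
    print("count merging:", count_merging(models, [1.0, 1.0], "runs", "cat"))
    print("bayesian:     ", bayesian_interp(models, [0.5, 0.5], "runs", "cat"))
```

The contrast is visible in the weight computation alone: linear interpolation ignores the history entirely, while count merging and Bayesian interpolation both shift weight toward components whose training data make the current history frequent or likely.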
