Certified lattice reduction

05/28/2019
by Thomas Espitau et al.

Quadratic form reduction and lattice reduction are fundamental tools in computational number theory and in computer science, especially in cryptography. The celebrated Lenstra-Lenstra-Lovász reduction algorithm (so-called LLL) has been improved in many ways over the past decades and remains one of the central methods for reducing integral lattice bases. In particular, its floating-point variants, where the rational arithmetic required by Gram-Schmidt orthogonalization is replaced by floating-point arithmetic, are now the fastest known. However, the systematic study of the reduction theory of real quadratic forms or, more generally, of real lattices is not widely represented in the literature. When the problem arises, the lattice is usually replaced by an integral approximation of (a multiple of) the original lattice, which is then reduced. While practically useful and proven correct in some special cases, this method offers no guarantee of success in general. In this work, we present an adaptive-precision version of a generalized LLL algorithm that covers this case in full generality. In particular, we replace floating-point arithmetic by interval arithmetic to certify the behavior of the algorithm. We conclude with a typical application of the result in algebraic number theory: the reduction of ideal lattices in number fields.
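The key mechanism behind the certification idea, enclosing each quantity in an interval so that every comparison is either provably decided or flagged as requiring more precision, can be sketched as follows. This is a toy illustration of interval arithmetic only, not the paper's implementation; the `Interval` class and the `certified_lt` method are names we introduce here, and exact rational endpoints stand in for the outward-rounded floating-point endpoints a real adaptive-precision implementation would use.

```python
from fractions import Fraction

class Interval:
    """A value enclosed in [lo, hi] with exact rational endpoints.

    A production implementation (e.g. one built on MPFI or Arb) would
    instead use floating-point endpoints with outward rounding.
    """
    def __init__(self, lo, hi=None):
        self.lo = Fraction(lo)
        self.hi = Fraction(hi if hi is not None else lo)
        assert self.lo <= self.hi

    def __add__(self, other):
        # Endpoint-wise addition always encloses the true sum.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product interval is spanned by the four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def certified_lt(self, other):
        """Return True or False only when the comparison is certain;
        return None when the intervals overlap, signalling that the
        working precision must be increased before deciding."""
        if self.hi < other.lo:
            return True
        if other.hi < self.lo:
            return False
        return None
```

In an LLL-style reduction, a test such as the Lovász condition would be evaluated on intervals: a `True`/`False` answer certifies the branch taken, while `None` triggers a restart of the computation at higher precision, which is exactly what makes the adaptive-precision loop terminate with a certified result.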


