The Weak Call-By-Value λ-Calculus is Reasonable for Both Time and Space
We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the natural measures for time and space, namely the number of beta-reductions and the size of the largest term in a computation, as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas [STOC '84]. More precisely, we show that, using these measures, Turing machines and the weak call-by-value λ-calculus can simulate each other within a polynomial overhead in time and a constant-factor overhead in space for all computations that terminate in (encodings of) 'true' or 'false'. We consider this result a solution to the long-standing open problem, explicitly posed by Accattoli [ENTCS '18], of whether the natural measures for time and space of the λ-calculus are reasonable, at least in the case of weak call-by-value evaluation.

Our proof relies on a hybrid of two strategies for simulating reductions of the weak call-by-value λ-calculus on Turing machines, both of which are insufficient on their own. The first strategy is the most naive one: a reduction sequence is simulated exactly as given by the reduction rules; in particular, all substitutions are executed immediately. This simulation runs within a constant overhead in space, but its overhead in time can be exponential. The second strategy is heap-based and relies on structure sharing, similar to existing compilers for eager functional languages. This strategy has only a polynomial overhead in time, but its space consumption can require an additional factor of n, essentially due to the size of the pointers it needs. Our main contribution is the construction and verification of a space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and a polynomial overhead in time.
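To make the two cost measures concrete, here is a minimal sketch (not the paper's actual construction): a small-step weak call-by-value evaluator for closed λ-terms that records the number of beta-reductions (the time measure) and the size of the largest term arising in the computation (the space measure). The term representation and helper names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def size(t) -> int:
    """Number of nodes in the term (the space measure counts the largest such size)."""
    if isinstance(t, Var):
        return 1
    if isinstance(t, Lam):
        return 1 + size(t.body)
    return 1 + size(t.fn) + size(t.arg)

def subst(t, x, v):
    """Substitute v for x in t. No renaming is needed because under weak CBV
    on closed terms we only ever substitute closed values."""
    if isinstance(t, Var):
        return v if t.name == x else t
    if isinstance(t, Lam):
        return t if t.param == x else Lam(t.param, subst(t.body, x, v))
    return App(subst(t.fn, x, v), subst(t.arg, x, v))

def step(t):
    """One weak CBV step (no reduction under lambda); None if t is a value."""
    if not isinstance(t, App):
        return None  # a Lam is a value; free Vars don't occur in closed terms
    if not isinstance(t.fn, Lam):
        s = step(t.fn)
        return App(s, t.arg) if s is not None else None
    if not isinstance(t.arg, Lam):
        s = step(t.arg)
        return App(t.fn, s) if s is not None else None
    return subst(t.fn.body, t.fn.param, t.arg)  # the beta-reduction itself

def evaluate(t):
    """Run t to a value; return (value, beta_steps, max_term_size)."""
    steps, max_size = 0, size(t)
    while (s := step(t)) is not None:
        t, steps = s, steps + 1
        max_size = max(max_size, size(t))
    return t, steps, max_size
```

For instance, `evaluate(App(Lam("x", Var("x")), Lam("y", Var("y"))))` performs one beta-step and returns the identity. This sketch executes every substitution immediately, i.e. it corresponds to the paper's first (naive) strategy, whose time overhead can be exponential; the heap-based sharing strategy and the space-aware interleaving of the two are the substance of the paper itself.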