Abstract Transducers
Several abstract machines that operate on symbolic input alphabets have been proposed in the last decade, for example, symbolic automata or lattice automata. Applications of these types of automata include software security analysis and natural language processing. While these models provide means to describe words over infinite input alphabets, there is no considerable work on symbolic output alphabets (as present in transducers), or on abstraction (widening) thereof. Furthermore, established approaches for transforming, for example, minimizing or reducing, finite-state machines that produce output on states or transitions are not applicable. A notion of equivalence for such machines is needed to make statements about whether or not transformations preserve their semantics. We present abstract transducers as a new form of finite-state transducers. Both their input alphabet and their output alphabet are composed of abstract words, where one abstract word represents a set of concrete words. The mapping between these representations is described by abstract word domains. By using words instead of single letters, abstract transducers enable lookaheads for deciding which state transitions to take. Since both the input symbol and the output symbol on each transition are abstract entities, abstraction techniques can be applied naturally. We apply abstract transducers as the foundation for sharing task artifacts for reuse in the context of program analysis and verification, and describe task artifacts as abstract words. A task artifact is any entity that contributes to an analysis task and its solution, for example, candidate invariants or source code to weave.
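To make the model concrete, the following is a minimal, hypothetical sketch (not the paper's formalism): an abstract word is modeled as a finite set of the concrete words it represents (the simplest possible abstract word domain), and a transition carries an abstract input word, which provides the lookahead, and an abstract output word. All names (`Transition`, `step`) are illustrative assumptions.

```python
from dataclasses import dataclass

# Assumption: an abstract word is a finite set of concrete words it
# represents. The paper uses abstract word domains; a plain set is the
# simplest instance of such a domain.
AbstractWord = frozenset

@dataclass(frozen=True)
class Transition:
    src: str
    inp: AbstractWord  # abstract input word (a lookahead of word length)
    out: AbstractWord  # abstract output word
    dst: str

def step(transitions, state, text):
    """Fire the first transition from `state` whose abstract input word
    contains a prefix of `text`; return (next state, abstract output,
    remaining input), or None if no transition applies."""
    for t in transitions:
        if t.src != state:
            continue
        for w in t.inp:
            if text.startswith(w):
                return t.dst, t.out, text[len(w):]
    return None

# One transition that consumes 'ab' or 'ac' (a lookahead of length 2)
# and emits the abstract output word {'X'}.
ts = [Transition("q0", frozenset({"ab", "ac"}), frozenset({"X"}), "q1")]
result = step(ts, "q0", "abba")
```

This illustrates why abstraction applies naturally: replacing the set-based `AbstractWord` with any other abstract word domain (e.g., one supporting widening) changes only the membership/prefix test, not the transducer's structure.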