Improved SAT models for NFA learning

07/13/2021
by Frédéric Lardeux, et al.

Grammatical inference is the study of algorithms that learn automata and grammars from words. We focus on learning nondeterministic finite automata (NFA) of size k from samples of words. To this end, we formulate the problem as a SAT model. Since the generated SAT instances are enormous, we propose several model improvements that reduce the number of variables, the number of clauses, and the size of the clauses. These improvements shrink the instances significantly, but at the cost of longer generation times. We therefore try to balance instance size against generation and solving time. Finally, we conduct experimental comparisons and analyze our various model improvements.
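To make the idea of "NFA learning as SAT" concrete, here is a minimal sketch of one common reachability-style CNF encoding. It is illustrative only and not necessarily the exact model of the paper: it guesses transition variables d(a, q, q') and final-state variables f(q) for a k-state NFA with initial state 0, tracks which states are reachable after each prefix of each sample word via auxiliary variables, and forces positive words to reach some final state while negative words must not. All names (`nfa_sat_clauses`, the variable layout) are this sketch's own.

```python
def nfa_sat_clauses(k, alphabet, positive, negative):
    """Build CNF clauses (lists of signed ints, DIMACS-style) asking:
    is there a k-state NFA accepting every word in `positive` and
    rejecting every word in `negative`?  Illustrative encoding only.
    Variables:
      d(a, q, r) : the NFA has a transition q --a--> r
      f(q)       : state q is final
      x(i, q)    : state q is reachable after reading w[:i]  (per word)
    State 0 is the initial state."""
    counter = 0
    def new_var():
        nonlocal counter
        counter += 1
        return counter
    d = {(a, q, r): new_var() for a in alphabet
                              for q in range(k) for r in range(k)}
    f = {q: new_var() for q in range(k)}
    clauses = []
    for w in positive + negative:
        x = {(i, q): new_var() for i in range(len(w) + 1) for q in range(k)}
        # Before reading anything, exactly the initial state 0 is reachable.
        clauses.append([x[0, 0]])
        for q in range(1, k):
            clauses.append([-x[0, q]])
        for i, a in enumerate(w):
            for r in range(k):
                # x(i+1, r) <-> OR_q ( x(i, q) AND d(a, q, r) ),
                # with one auxiliary variable y per conjunct (Tseitin-style).
                ands = []
                for q in range(k):
                    y = new_var()  # y <-> x(i, q) AND d(a, q, r)
                    clauses += [[-y, x[i, q]], [-y, d[a, q, r]],
                                [y, -x[i, q], -d[a, q, r]]]
                    ands.append(y)
                clauses.append([-x[i + 1, r]] + ands)   # x -> OR of the y's
                for y in ands:                          # each y -> x
                    clauses.append([-y, x[i + 1, r]])
        n = len(w)
        if w in positive:
            # Some reachable state after the whole word must be final.
            zs = []
            for q in range(k):
                z = new_var()  # z -> x(n, q) AND f(q)
                clauses += [[-z, x[n, q]], [-z, f[q]]]
                zs.append(z)
            clauses.append(zs)
        else:
            # No reachable state after the whole word may be final.
            for q in range(k):
                clauses.append([-x[n, q], -f[q]])
    return counter, clauses

nvars, clauses = nfa_sat_clauses(2, "ab", ["a"], ["b"])
print(nvars, "variables,", len(clauses), "clauses")
```

Even this toy instance (k = 2, two one-letter words) already produces dozens of variables and clauses, and the counts grow with k, the alphabet size, and the total length of the sample, which is why reducing instance size matters.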

