On the Parameterized Complexity of Learning Monadic Second-Order Formulas

09/19/2023
by Steffen van Bergerem, et al.

Within the model-theoretic framework for supervised learning introduced by Grohe and Turán (TOCS 2004), we study the parameterized complexity of learning concepts definable in monadic second-order logic (MSO). We show that the problem of learning a consistent MSO-formula is fixed-parameter tractable on structures of bounded tree-width and on graphs of bounded clique-width in the 1-dimensional case, that is, if the instances are single vertices (and not tuples of vertices). This generalizes previous results on strings and on trees. Moreover, in the agnostic PAC-learning setting, we show that the result also holds in higher dimensions. Finally, via a reduction to the MSO-model-checking problem, we show that learning a consistent MSO-formula is para-NP-hard on general structures.
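
To make the 1-dimensional setting concrete, here is a small illustrative example (not taken from the paper, and assuming a vocabulary with a binary edge relation E and a unary "source" predicate R): an MSO formula \varphi(x) with one free first-order variable defines a set of vertices, and the learner's task is to find such a formula consistent with given positive and negative example vertices. The sketch below defines the set of vertices reachable from some R-labeled vertex, using the MSO ability to quantify over vertex sets X closed under the edge relation:

    \varphi(x) \;=\; \forall X \Big[ \big( \forall y\,(R(y) \rightarrow X(y)) \;\wedge\; \forall y\,\forall z\,\big((X(y) \wedge E(y,z)) \rightarrow X(z)\big) \big) \rightarrow X(x) \Big]

Since \varphi has a single free first-order variable x, it classifies single vertices, which is exactly the 1-dimensional case described above; a k-dimensional concept would instead be defined by a formula \varphi(x_1, \ldots, x_k) classifying k-tuples of vertices.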
