# Knowledge Extraction and Knowledge Integration governed by Łukasiewicz Logics

The development of machine learning in particular and artificial intelligence in general has been strongly conditioned by the lack of an appropriate interface layer between deduction, abduction and induction. In this work we extend traditional algebraic specification methods in this direction. Here we assume that such an interface for AI emerges from an adequate neural-symbolic integration. This integration is made for a universe of discourse described on a Topos governed by a many-valued Łukasiewicz logic. Sentences are integrated in a symbolic knowledge base describing the problem domain, codified using a graphic-based language, wherein every logic connective is defined by a neuron in an artificial network. This allows the integration of first-order formulas into a network architecture as background knowledge, and simplifies symbolic rule extraction from trained networks. To train such neural networks we modified the Levenberg-Marquardt algorithm, restricting the knowledge dissemination in the network structure using soft crystallization. This procedure reduces neural network plasticity without drastically damaging the learning performance, allowing the emergence of symbolic patterns. This makes the descriptive power of produced neural networks similar to the descriptive power of the Łukasiewicz logic language, reducing the information lost in translation between symbolic and connectionist structures. We tested this method on the extraction of knowledge from specified structures. For this, we present the notion of fuzzy state automata, and we use automata behaviour to infer automata structure. We use this type of automata in the generation of models for relations specified as symbolic background knowledge.


## 1 Introduction

Category Theory generalized the use of graphic languages to specify structures and properties through diagrams. These categorical techniques provide powerful tools for formal specification, structuring, model construction, and formal verification for a wide range of systems, presented in a great variety of papers. Data specification requires a finite, effective and comprehensive presentation of complete structures; this type of methodology was explored in Category Theory for algebraic specification by Ehresmann [Ehresm68]. He developed sketches as a specification methodology for mathematical structures and presented it as an alternative to the string-based specification employed in mathematical logic. The functorial semantics of sketches is sound in the informal sense that it preserves, by definition, the structure given in the sketch. Sketch specifications enjoy a unique combination of rigour, expressiveness and comprehensibility. They can be used for data modelling, process modelling and meta-data modelling as well, thus providing a unified specification framework for system modelling. For our goal we extend the syntax of sketches to multi-graphs and define their models on the Topos (see e.g. [Johnstone02] for a definition) defined by relations evaluated in a many-valued logic. We call our version of Ehresmann's sketches a specification system, and in its definition we developed a conservative extension of the notions of commutative diagram, limit and colimit to many-valued logic.

In this work, we use background knowledge about a problem to specify its domain structures. This type of information is assumed to be vague or uncertain, and is described using multi-diagrams. We simplify the exposition and presentation of these notions using a string-based codification for this type of multi-diagrams, named relational specification. We use this description for presenting structures extracted from data and for their integration.

There are essentially two paradigms for representing extracted information, usually taken to be very different. On one hand, symbolic-based descriptions are specified through a grammar that has fairly clear semantics. On the other hand, the usual way to present information in a connectionist description is its codification in a neural network (NN). Artificial NNs, in principle, combine the ability to learn with robustness or insensitivity to perturbations of the input data. NNs are usually taken as black boxes, thereby providing little insight into how the information is codified. It is natural to seek a synergy integrating the white-box character of symbolic-based representation and the learning power of artificial neural networks. Such neuro-symbolic models are currently a very active area of research. In the context of classic logic, see [Bornscheuer98] [Hitzler04] [Holldobler00] for the extraction of logic programs from trained networks. For the extraction of modal and temporal logic programs see [Avila07] and [Avila08]. In [Komendantskaya07] we can find processes to generate connectionist representations of multi-valued logic programs, and for Łukasiewicz logic programs (ŁL) see [Klawonn92].

Our approach to the generation of neuro-symbolic models uses Łukasiewicz logic. This type of many-valued logic has a very useful property motivated by the "linearity" of its logic connectives: every logic connective can be defined by a neuron in an artificial network having, as activation function, the identity truncated to zero and one [Castro98]. This allows the direct codification of formulas into the network architecture, and simplifies the extraction of rules. Multilayer feed-forward NNs, having this type of activation function, can be trained efficiently using the Levenberg-Marquardt (LM) algorithm [HaganMenhaj99], and the generated network can be simplified using the "Optimal Brain Surgeon" algorithm proposed by B. Hassibi, D. G. Stork and G. J. Wolff [Hassibi93].
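As an illustration of this codification (a minimal sketch of our own, not the paper's implementation), each Łukasiewicz connective can be realized as a single neuron whose activation is the identity truncated to the interval [0, 1]:

```python
def truncated_identity(x):
    # Activation function: identity truncated to zero and one.
    return min(1.0, max(0.0, x))

def neuron(weights, bias, inputs):
    # A linear neuron followed by the truncated-identity activation.
    return truncated_identity(sum(w * v for w, v in zip(weights, inputs)) + bias)

# Fusion x ⊗ y = max(0, x + y - 1): weights (1, 1), bias -1.
def fusion(x, y):
    return neuron((1.0, 1.0), -1.0, (x, y))

# Implication x ⇒ y = min(1, 1 - x + y): weights (-1, 1), bias 1.
def implication(x, y):
    return neuron((-1.0, 1.0), 1.0, (x, y))
```

Each connective is thus one neuron; compound formulas become layered networks of such neurons, which is what makes rule extraction from the trained network comparatively direct.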

We combine the specification system with the injection of extracted information, into the specification, in the context of structures generated using fuzzy automata. This type of automata is presented as a simple process to generate uncertain structures. They are used to describe an example where the generated data is stored in a specified structure and where we apply the extraction methodology, using different views of the data, to find new insights about the data. This symbolic knowledge is injected in the specification, improving the available description of the data. In this sense we see the specification system as a knowledge base about the problem domain.

## 2 Preliminaries

In this section, we present some concepts that will be used throughout the paper.

### 2.1 Łukasiewicz logics

Classical propositional logic is one of the earliest formal systems of logic. The algebraic semantics of this logic is given by Boolean algebra. Both the logic and the algebraic semantics have been generalized in many directions [Jipsen03]. The generalization of Boolean algebra can be based on the relationship between conjunction and implication given by x ∧ y ≤ z iff x ≤ y ⇒ z. These equivalences, called residuation equivalences, imply the properties of the logic operators in Boolean algebras.

In applications of fuzzy logic, the properties of Boolean conjunction are too rigid; hence it is replaced by a new binary connective, ⊗, which is usually called fusion, and the residuation equivalence x ⊗ y ≤ z iff x ≤ y ⇒ z defines implication.

These two operators induce a structure of residuated poset on a partially ordered set Ω of truth values [Jipsen03]. This structure has been used in the definition of many types of logics. If Ω has more than two values, the associated logics are called many-valued logics.

We focus our attention on many-valued logics having a subset of the interval [0, 1] as set of truth values. In this type of logics the fusion operator is known as a t-norm. In [Gerla00], it is described as a binary operator ⊗ defined on [0, 1], commutative and associative, non-decreasing in both arguments, and such that 1 ⊗ x = x and 0 ⊗ x = 0.

An example of a continuous t-norm is x ⊗ y = max(0, x + y − 1), named the Łukasiewicz t-norm, used in the definition of Łukasiewicz logic (ŁL) [Hajek95].

Sentences in ŁL are, as usual, built from a (countable) set of propositional variables, a conjunction ⊗ (the fusion operator), an implication ⇒, and the truth constant 0. Further connectives are defined as: ¬x := x ⇒ 0, x ∨ y := (x ⇒ y) ⇒ y, x ∧ y := ¬(¬x ∨ ¬y), and 1 := ¬0. The interpretation for a well-formed formula in ŁL is defined inductively, as usual, by assigning a truth value to each propositional variable.
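These derived connectives are easy to check computationally; the sketch below (our own illustration, not part of the paper) confirms that on [0, 1] the derived disjunction and conjunction reduce to the familiar max and min:

```python
def implication(x, y):
    # Łukasiewicz implication: x ⇒ y = min(1, 1 - x + y).
    return min(1.0, 1.0 - x + y)

def neg(x):
    # Negation: ¬x := x ⇒ 0, i.e. 1 - x.
    return implication(x, 0.0)

def disj(x, y):
    # Lattice disjunction: x ∨ y := (x ⇒ y) ⇒ y, i.e. max(x, y).
    return implication(implication(x, y), y)

def conj(x, y):
    # Lattice conjunction: x ∧ y := ¬(¬x ∨ ¬y), i.e. min(x, y).
    return neg(disj(neg(x), neg(y)))

# Check against max/min on a grid of truth values.
grid = [i / 10 for i in range(11)]
assert all(abs(disj(x, y) - max(x, y)) < 1e-9 for x in grid for y in grid)
assert all(abs(conj(x, y) - min(x, y)) < 1e-9 for x in grid for y in grid)
```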

The Łukasiewicz fusion operator ⊗, its residue ⇒, and the lattice operators ∧ and ∨ define a structure of residuated lattice on Ω [Jipsen03], since:

1. (Ω, ⊗, 1) is a commutative monoid,

2. (Ω, ∧, ∨, 0, 1) is a bounded lattice, and

3. the residuation property holds,

 for all x, y, z ∈ Ω, x ≤ y ⇒ z iff x ⊗ y ≤ z.

This structure is divisible, x ⊗ (x ⇒ y) = x ∧ y, and satisfies ¬¬x = x. Structures with these characteristics are usually called MV-algebras [Hajek98].
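The residuation and divisibility laws above can be verified numerically on a grid of truth values; the following is a small sanity-check script of our own (not from the paper):

```python
def fusion(x, y):
    # Łukasiewicz t-norm: x ⊗ y = max(0, x + y - 1).
    return max(0.0, x + y - 1.0)

def implication(x, y):
    # Its residue: x ⇒ y = min(1, 1 - x + y).
    return min(1.0, 1.0 - x + y)

EPS = 1e-9
grid = [i / 10 for i in range(11)]
for x in grid:
    for y in grid:
        # Divisibility: x ⊗ (x ⇒ y) = x ∧ y.
        assert abs(fusion(x, implication(x, y)) - min(x, y)) < EPS
        for z in grid:
            # Residuation: x ≤ (y ⇒ z) iff x ⊗ y ≤ z.
            assert (x <= implication(y, z) + EPS) == (fusion(x, y) <= z + EPS)
```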

However, the truth table of a formula is a continuous structure; for our computational goal, it must be discretized, ensuring sufficient information to describe the original formula. A truth table for a formula φ, in ŁL, is a map T_φ : [0,1]^m → [0,1], where m is the number of propositional variables used in φ. For each integer n > 0, let S_n be the set {0, 1/n, 2/n, …, 1}. Each n defines a sub-table for φ, T_φ^n : (S_n)^m → [0,1], given by the restriction of T_φ to (S_n)^m, and called the (n+1)-valued truth sub-table. Since S_n is closed for the logic connectives defined in ŁL, we define a (n+1)-valued Łukasiewicz logic ((n+1)-ŁL) as the fragment of ŁL having S_n as truth values. In the following we generically call each of these "a ŁL".
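The closure of S_n under the connectives, which makes this discretization well defined, can be checked directly; a small sketch of our own for n = 5:

```python
n = 5
S_n = [k / n for k in range(n + 1)]  # truth values {0, 1/5, ..., 1}

def fusion(x, y):
    # Łukasiewicz t-norm: x ⊗ y = max(0, x + y - 1).
    return max(0.0, x + y - 1.0)

def implication(x, y):
    # Łukasiewicz implication: x ⇒ y = min(1, 1 - x + y).
    return min(1.0, 1.0 - x + y)

def in_S_n(v):
    # Membership in S_n up to floating-point rounding.
    return any(abs(v - s) < 1e-9 for s in S_n)

# S_n is closed under both connectives, so (n+1)-valued ŁL is a fragment of ŁL.
assert all(in_S_n(fusion(x, y)) and in_S_n(implication(x, y))
           for x in S_n for y in S_n)
```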

Fuzzy logics, like ŁL, deal with degrees of truth and their logic connectives are truth-functional, whereas probability theory (or any probabilistic logic) deals with degrees of uncertainty and its connectives are not truth-functional. If we take two sentences φ and ψ from the language of ŁL, for any probability P we have P(φ ∨ ψ) = 1 if φ ∨ ψ is a boolean tautology, however for a valuation v in ŁL we may have v(φ ∨ ψ) < 1. The divisibility in Ω is usually taken as a fuzzy modus ponens of ŁL: from φ and φ ⇒ ψ infer ψ, where v(ψ) ≥ v(φ) ⊗ v(φ ⇒ ψ). This inference is known to preserve lower bounds of probability, and then P(ψ) ≥ P(φ) ⊗ P(φ ⇒ ψ). Petr Hájek, in [Hajek952], extends this principle by embedding probabilistic logic in ŁL; for this we associate to each boolean formula φ a fuzzy proposition Pφ, read "φ is probable". This is a new propositional variable in ŁL, whose degree of truth is now taken to be the probability of φ.
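For example (a numeric illustration of our own, with made-up bounds): if P(φ) ≥ 0.9 and P(φ ⇒ ψ) ≥ 0.8, fuzzy modus ponens yields a probability lower bound for ψ:

```python
def fusion(x, y):
    # Łukasiewicz t-norm, used to propagate probability lower bounds.
    return max(0.0, x + y - 1.0)

p_phi = 0.9          # assumed lower bound for P(φ)
p_phi_imp_psi = 0.8  # assumed lower bound for P(φ ⇒ ψ)

# Fuzzy modus ponens: P(ψ) ≥ P(φ) ⊗ P(φ ⇒ ψ).
p_psi_lower = fusion(p_phi, p_phi_imp_psi)
```

So P(ψ) ≥ 0.7; note how the bound degrades gracefully, reaching the trivial bound 0 once the premises are uncertain enough.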

We assume in our work that the involved entities or concepts in a UoD (universe of discourse) can be described, or characterized, through fuzzy relations, and that the information associated with them can be presented or approximated using sentences in ŁL. In the next section we describe this type of relations in the context of allegory theory [Freyd90].

### 2.2 Relations

A vague relation R, defined between a family of sets (A_i)_{i∈Att} and evaluated in Ω, is a map R : ∏_{i∈Att} A_i → Ω. Here we assume that Ω is the set of truth values for a ŁL. In this case we call Att the set of attributes, where each index i ∈ Att is called an attribute and the set indexed by i, A_i, represents the set of possible values for that attribute in the relation, or its domain. In relation R, every instance x̄ has an associated level of uncertainty, given by R(x̄), interpreted as the truth value of the proposition "x̄ ∈ R", in Ω.

Every partition Att = Att_i ∪ Att_a ∪ Att_o, where the sets of attributes Att_i, Att_a and Att_o are disjoint, defines a relation

 G : ∏_{i∈Att_i} A_i × ∏_{i∈Att_o} A_i → Ω,

by

 G(x̄, z̄) = ⊕_{ȳ ∈ ∏_{i∈Att_a} A_i} R(x̄, ȳ, z̄),

and denoted by G : ∏_{i∈Att_i} A_i ⇀ ∏_{i∈Att_o} A_i; this type of relation we call a view for R. Each such partition defines a view for R, where Att_i and Att_o are called, respectively, the set of inputs and the set of outputs of the view. Graphically, a view

 G : A_0 × A_1 × A_2 ⇀ A_3 × A_4 × A_5,

can be presented by a multi-arrow, as in figure LABEL:graph1.
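To make the construction concrete, here is a small sketch of our own (attribute names and truth values are made up, and we assume ⊕ denotes the Łukasiewicz strong disjunction, i.e. bounded sum) computing a view of a vague relation by aggregating out a hidden attribute:

```python
# A vague relation R over attributes A_0 (input), A_1 (hidden), A_2 (output),
# evaluated in [0, 1]; absent tuples have truth value 0.
A0, A1, A2 = ["a", "b"], [0, 1], ["p", "q"]
R = {("a", 0, "p"): 0.7,
     ("a", 1, "p"): 0.5,
     ("b", 1, "q"): 0.4}

def bounded_sum(values):
    # Łukasiewicz strong disjunction ⊕ folded over a sequence:
    # x ⊕ y = min(1, x + y).
    total = 0.0
    for v in values:
        total = min(1.0, total + v)
    return total

def view(x, z):
    # G(x̄, z̄) = ⊕ over ȳ in the hidden domain of R(x̄, ȳ, z̄).
    return bounded_sum(R.get((x, y, z), 0.0) for y in A1)
```

For instance, view("a", "p") aggregates 0.7 ⊕ 0.5 = 1.0, while view("b", "q") keeps the single contribution 0.4.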