On a plausible concept-wise multipreference semantics and its relations with self-organising maps

08/30/2020 ∙ by Laura Giordano, et al.

In this paper we describe a concept-wise multi-preference semantics for description logics which has its roots in the preferential approach for modeling defeasible reasoning in knowledge representation. We argue that this proposal, besides satisfying some desired properties, such as the KLM postulates, and avoiding the drowning problem, also defines a plausible notion of semantics. We motivate the plausibility of the concept-wise multi-preference semantics by developing a logical semantics of self-organising maps, which have been proposed as possible candidates to explain the psychological mechanisms underlying category generalisation, in terms of multi-preference interpretations.


1 Introduction

Conditional logics have their roots in philosophical logic. They have been studied first by Lewis [21, 23] to formalize hypothetical and counterfactual reasoning (if A were the case then B would be the case) that cannot be captured by classical logic with its material implication. From the 80’s they have been considered in computer science and artificial intelligence, where they have provided an axiomatic foundation of non-monotonic and common sense reasoning [8, 19]. In particular, preferential approaches [19, 20] to common sense reasoning have been more recently extended to description logics, to deal with inheritance with exceptions in ontologies, allowing for non-strict forms of inclusions, called typicality or defeasible inclusions (namely, conditionals), with different preferential semantics [10, 4] and closure constructions [6, 5, 13, 24].

In this paper we consider a “concept-aware” multipreference semantics [15] that has been recently introduced for a lightweight description logic of the EL family, which takes into account preferences with respect to different concepts and integrates them into a preferential semantics. To support the plausibility of this semantics, we show that it can be used to provide a logical semantics of self-organising maps [18]. Self-organising maps (SOMs) have been proposed as possible candidates to explain the psychological mechanisms underlying category generalisation. They are psychologically and biologically plausible neural network models that can learn after limited exposure to positive category examples, without any need of contrastive information.

We show that the process of category generalization in self-organising maps produces, as a result, a multipreference model in which a preference relation is associated to each concept (each learned category) and the combination of the preferences into a global one, following the approach in [15], defines a standard KLM preferential model. The model can be used to learn or validate conditional knowledge from the empirical data used in the category generalization process, and the evaluation of conditionals can be done by model checking, using the information recorded in the SOM.

Based on the assumption that the abstraction process in the SOM is able to identify the most typical exemplars of a given category, in the semantic representation of a category we will identify some specific exemplars (namely, the best-matching units of the category) as the typical exemplars of the category, thus defining a preference relation among the instances of the category.

The category generalization process can then be regarded as a model building process and, in a way, as a belief revision process. Indeed, initially we have no belief about which is the category of any exemplar. During training, the current state of the SOM corresponds to a model representing the beliefs about the input exemplars considered so far (concerning their category). Each time a new input exemplar is considered, this model is revised adding the exemplar into the proper category.

2 Preliminaries: the description logic EL⊥

We consider the description logic EL⊥ of the EL family [1]. Let N_C be a set of concept names, N_R a set of role names and N_I a set of individual names. The set of EL⊥ concepts can be defined as follows: C := A | ⊤ | ⊥ | C ⊓ C | ∃r.C, where A ∈ N_C and r ∈ N_R. Observe that union, complement and universal restriction are not EL⊥ constructs. A knowledge base (KB) K is a pair (T, A), where T is a TBox and A is an ABox. The TBox T is a set of concept inclusions (or subsumptions) of the form C ⊑ D, where C and D are concepts. The ABox A is a set of assertions of the form C(a) and r(a, b), where C is a concept, r ∈ N_R, and a, b ∈ N_I.

An interpretation for EL⊥ is a pair I = ⟨Δ, ·^I⟩ where: Δ is a non-empty domain, i.e., a set whose elements are denoted by x, y, z, ...; and ·^I is an extension function that maps each concept name C to a set C^I ⊆ Δ, each role name r to a binary relation r^I ⊆ Δ × Δ, and each individual name a to an element a^I ∈ Δ. It is extended to complex concepts as follows: ⊤^I = Δ, ⊥^I = ∅, (C ⊓ D)^I = C^I ∩ D^I, and (∃r.C)^I = {x ∈ Δ : (x, y) ∈ r^I and y ∈ C^I, for some y ∈ Δ}.

The notions of satisfiability of a KB in an interpretation and of entailment are defined as usual:

Definition 1 (Satisfiability and entailment)

Given an interpretation I = ⟨Δ, ·^I⟩:

- I satisfies an inclusion C ⊑ D if C^I ⊆ D^I;

- I satisfies an assertion C(a) if a^I ∈ C^I, and an assertion r(a, b) if (a^I, b^I) ∈ r^I.

Given a KB K = (T, A), an interpretation I satisfies T (resp. A) if I satisfies all inclusions in T (resp. all assertions in A); I is a model of K if I satisfies T and A.

A subsumption C ⊑ D (resp., an assertion C(a), r(a, b)) is entailed by K, written K ⊨ C ⊑ D, if for all models I = ⟨Δ, ·^I⟩ of K, I satisfies C ⊑ D (resp. C(a), r(a, b)).
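
To make the model-checking flavour of these definitions concrete, the following minimal sketch (not from the paper; concept names and domain elements are hypothetical) checks satisfaction of an inclusion and of an assertion in a finite interpretation whose concept extensions are plain Python sets.

```python
# Minimal sketch: satisfaction of inclusions and assertions in a finite
# interpretation, with concept extensions represented as Python sets.

def satisfies_inclusion(ext, c, d):
    """I satisfies C ⊑ D iff the extension of C is a subset of that of D."""
    return ext[c] <= ext[d]

def satisfies_assertion(ext, c, a):
    """I satisfies C(a) iff the denotation of a belongs to the extension of C."""
    return a in ext[c]

# Hypothetical toy interpretation over the domain {e1, e2, e3}.
ext = {"Student": {"e1", "e2"}, "Young": {"e1", "e2", "e3"}}
print(satisfies_inclusion(ext, "Student", "Young"))  # True
print(satisfies_assertion(ext, "Young", "e3"))       # True
```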

3 A concept-wise multi-preference semantics

In this section we describe an extension of EL⊥ with typicality inclusions, defined along the lines of the extension of description logics with typicality [10, 12], but we exploit a different multi-preference semantics [15]. In addition to standard inclusions C ⊑ D (called strict inclusions in the following), the TBox will also contain typicality inclusions of the form T(C) ⊑ D, where C and D are EL⊥ concepts. A typicality inclusion T(C) ⊑ D means that “typical C’s are D’s” or “normally C’s are D’s” and corresponds to a conditional implication C |~ D in Kraus, Lehmann and Magidor’s (KLM) preferential approach [19, 20]. Such inclusions are defeasible, i.e., they admit exceptions, while strict inclusions must be satisfied by all domain elements.

Let C = {C1, ..., Ck} be a set of distinguished EL⊥ concepts. For each concept Ci ∈ C, we introduce a modular preference relation <_Ci which describes the preference among domain elements with respect to Ci. Each preference relation <_Ci has the same properties as the preference relations in KLM-style ranked interpretations [20]: it is a modular and well-founded strict partial order, i.e., an irreflexive and transitive relation, where <_Ci is well-founded if, for all S ⊆ Δ, if S ≠ ∅, then min_<Ci(S) ≠ ∅; and <_Ci is modular if, for all x, y, z ∈ Δ, if x <_Ci y then x <_Ci z or z <_Ci y.

Definition 2 (Multipreference interpretation)

A multipreference interpretation is a tuple M = ⟨Δ, <_C1, ..., <_Ck, ·^I⟩, where:

  • Δ is a non-empty domain;

  • for each Ci ∈ C, <_Ci is an irreflexive, transitive, well-founded and modular relation over Δ;

  • ·^I is an interpretation function, as in an EL⊥ interpretation (see Section 2).

Observe that, given a multipreference interpretation, an interpretation ⟨Δ, <_Ci, ·^I⟩ can be associated to each concept Ci, which is a ranked interpretation like those considered for description logics with typicality in [14]. The preference relation <_Ci allows the set of prototypical Ci-elements to be defined as the Ci-elements which are minimal with respect to <_Ci, i.e., the elements of min_<Ci(Ci^I). As a consequence, the multipreference interpretation above is able to single out the typical Ci-elements, for all distinguished concepts Ci ∈ C.

The multipreference structures above are at the basis of the semantics for ranked EL⊥ knowledge bases [15], which have been inspired by Brewka’s framework of basic preference descriptions [3]. A ranked TBox T_Ci is allowed for each concept Ci ∈ C, and contains all the defeasible inclusions T(Ci) ⊑ D specifying the typical properties of Ci-elements. Ranks (non-negative integers) are assigned to such inclusions; the ones with higher ranks are considered to be more important than the ones with lower ranks.
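
As a purely illustrative picture of this structure, a ranked knowledge base can be encoded as one list of (typicality inclusion, rank) pairs per distinguished concept; the concepts, properties and ranks below are hypothetical and only loosely echo the student/employee example discussed next.

```python
# Illustrative encoding of ranked TBoxes: for each distinguished concept,
# a list of (typicality inclusion, rank) pairs; higher rank = more important.
# Inclusions are kept as opaque strings; the ranks shown are placeholders.

ranked_tboxes = {
    "Employee":   [("T(Employee) ⊑ ∃has_boss.⊤", 0)],        # normally have a boss
    "Student":    [("T(Student) ⊑ Young", 0)],                # normally young
    "PhDStudent": [("T(PhDStudent) ⊑ ∃has_scholarship.⊤", 0)] # normally have a scholarship
}

def inclusions_by_importance(concept):
    """Return the defeasible inclusions of a concept, most important first."""
    return sorted(ranked_tboxes[concept], key=lambda pair: pair[1], reverse=True)

print(inclusions_by_importance("Student"))
```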

Consider, for instance, a ranked knowledge base K = ⟨T_strict, T_Employee, T_Student, T_PhDStudent, A⟩ over the set of distinguished concepts C = {Employee, Student, PhDStudent}, with empty ABox and with T_strict containing, among its strict inclusions, PhDStudent ⊑ Student. The ranked TBox T_Employee contains the defeasible inclusions describing the typical properties of employees (e.g., that they normally have a boss); the ranked TBox T_Student contains the defeasible inclusions describing the typical properties of students (e.g., that they are normally young and normally do not have a scholarship); and the ranked TBox T_PhDStudent contains the defeasible inclusions describing the typical properties of PhD students (e.g., that they normally do have a scholarship), each inclusion with its associated rank.

Exploiting the fact that for an EL⊥ knowledge base we can restrict our consideration to finite domains [1], and considering canonical models which are large enough to contain a domain element for each possible consistent concept occurring in K (and its complement), the ranked knowledge base above gives rise to canonical models in which the three preference relations <_Employee, <_Student and <_PhDStudent represent the preference among the elements of the domain according to the concepts Employee, Student and PhDStudent, respectively.

While we refer to [15] for the construction of the preference relations <_Ci from a ranked knowledge base K, in the following we recall the notion of concept-wise multi-preference interpretation, which can be obtained by combining the preference relations <_C1, ..., <_Ck into a global preference relation <. This is needed for reasoning about the typicality of arbitrary concepts (such as Employee ⊓ Student), which do not belong to the set of distinguished concepts C. For instance, we may want to verify whether typical employed students are young, or whether they have a boss. To answer these questions both preference relations <_Employee and <_Student are relevant, and they might be conflicting for some pairs of domain elements as, for instance, when tom is more typical than bob as a student (tom <_Student bob), but more exceptional as an employee (bob <_Employee tom).

To define a global preference relation, we take into account the specificity relation among concepts, such as, for instance, the fact that a concept like PhDStudent is more specific than the concept Student. The idea is that, in case of conflicts, the properties of a more specific class (such as that PhD students normally have a scholarship) should override the properties of a less specific class (such as that students normally do not have a scholarship).

Definition 3 (Specificity)

A specificity relation among the concepts in C is a binary relation ≺ ⊆ C × C which is irreflexive and transitive.

For Ch, Cj ∈ C, Ch ≺ Cj means that Ch is more specific than Cj. The simplest notion of specificity among concepts with respect to a knowledge base K is based on the subsumption hierarchy: Ch ≺ Cj if K ⊨ Ch ⊑ Cj and K ⊭ Cj ⊑ Ch. This is one of the notions of specificity considered in [2]. Another one is based on the ranking of concepts in the rational closure of K.

Let us recall the notion of concept-wise multipreference interpretation [15].

Definition 4 (concept-wise multipreference interpretation)

A concept-wise multipreference interpretation (or cw-interpretation) is a tuple M = ⟨Δ, <_C1, ..., <_Ck, <, ·^I⟩ such that:

  • (a) Δ is a non-empty domain;

  • (b) for each Ci ∈ C, <_Ci is an irreflexive, transitive, well-founded and modular relation over Δ;

  • (c) < is a (global) preference relation over Δ defined from <_C1, ..., <_Ck as follows: x < y iff (i) x <_Ci y for some Ci ∈ C, and (ii) for all Cj ∈ C, either x ≤_Cj y (i.e., y <_Cj x does not hold) or there is a Ch ∈ C such that Ch ≺ Cj and x <_Ch y;

  • (d) ·^I is an interpretation function, as defined for EL⊥ interpretations (see Section 2), with the addition that, for typicality concepts, we let:

    (T(C))^I = min_<(C^I), where min_<(S) = {x ∈ S : there is no y ∈ S s.t. y < x}.

Relation < is defined from <_C1, ..., <_Ck based on a modified Pareto condition: x < y holds if there is at least one Ci such that x <_Ci y and, for all Cj, either x ≤_Cj y holds or, in case it does not, there is some Ch more specific than Cj such that x <_Ch y (preference <_Ch in this case overrides <_Cj). The idea is that, for two PhD students (who are also students) Bob and Mary, if bob <_PhDStudent mary and mary <_Student bob, we will have bob < mary, that is, Bob is regarded as being globally more typical than Mary, as he satisfies more properties of typical PhD students than Mary, although Mary may satisfy additional properties of typical students with respect to Bob.
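
To make the combination concrete, here is a small sketch (our reading of the condition above, not code from [15]) that computes the global preference from the concept-wise preferences and a specificity relation; all names and the toy usage data are hypothetical.

```python
# Sketch of the modified Pareto combination of concept-wise preferences into
# the global preference <. Each concept-wise preference is a function
# pref(x, y) returning True when x <_Ci y; more_specific(ch, cj) encodes Ch ≺ Cj.

def globally_preferred(x, y, prefs, more_specific):
    """x < y iff x <_Ci y for some Ci and, for every Cj with y <_Cj x,
    some more specific Ch has x <_Ch y (overriding Cj)."""
    if not any(pref(x, y) for pref in prefs.values()):
        return False
    for cj, pref_j in prefs.items():
        if pref_j(y, x):  # x is not weakly preferred to y w.r.t. Cj
            overridden = any(more_specific(ch, cj) and pref_h(x, y)
                             for ch, pref_h in prefs.items())
            if not overridden:
                return False
    return True

# Hypothetical usage: the more specific concept (PhDStudent) wins the conflict.
prefs = {"Student": lambda a, b: (a, b) == ("mary", "bob"),
         "PhDStudent": lambda a, b: (a, b) == ("bob", "mary")}
more_specific = lambda ch, cj: (ch, cj) == ("PhDStudent", "Student")
print(globally_preferred("bob", "mary", prefs, more_specific))  # True
```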

It has been proven [15] that, given a cw-interpretation M, the global relation < is an irreflexive, transitive and well-founded relation. Hence, the triple ⟨Δ, <, ·^I⟩ is a KLM-style preferential interpretation, as those introduced for EL⊥ with typicality [11] (although not necessarily a modular one). A cw-model of a ranked knowledge base K is then defined as a specific preferential interpretation which builds over the preference relations <_C1, ..., <_Ck, constructed from the ranked TBoxes T_C1, ..., T_Ck, and satisfies all strict inclusions and assertions in K. The notion of cw-entailment, defined in the obvious way, satisfies the KLM postulates of a preferential consequence relation and does not suffer from the drowning problem. In the next section we motivate the plausibility of this concept-wise multipreference semantics by showing that it is well suited to provide a semantic characterization of self-organising maps [18].

4 Self-organising maps

Self-organising maps (SOMs, introduced by Kohonen [18]) are particularly plausible neural network models that learn in a human-like manner. In particular: SOMs learn to organize stimuli into categories in an unsupervised way, without the need of a teacher providing feedback; they can learn with just a few positive stimuli, without the need for negative examples or contrastive information; they reflect basic constraints of a plausible brain implementation in different areas of the cortex [22], and are therefore biologically plausible models of category formation; and they have proven to be capable of explaining experimental results.

In this section we briefly describe the architecture of SOMs and report Gliozzi and Plunkett’s similarity-based account of category generalization based on SOMs [16]. Roughly speaking, in [16] the authors judge a new stimulus as belonging to a category by comparing the distance of the stimulus from the category representation with the precision of the category representation.

SOMs consist of a set of neurons, or units, spatially organized in a grid [18].

Figure 1: An example of SOM. The set of rectangles stands for the input presented to the SOM (in the example the input is three-dimensional). The input is presented to all neurons of the SOM (the dots in the upper grid) in order to find the best-matching unit (BMU).

Each map unit is associated with a weight vector of the same dimensionality as the input vectors. At the beginning of training, all weight vectors are initialized to random values, outside the range of values of the input stimuli. During training, the input elements are sequentially presented to all neurons of the map. After each presentation of an input x, the best-matching unit BMU(x) is selected: this is the unit whose weight vector is closest to the stimulus x (i.e., the unit minimizing the Euclidean distance from x).

The weights of the best-matching unit and of its surrounding units are then updated in order to maximize the chances that the same unit (or its surrounding units) will be selected as the best-matching unit for the same stimulus, or for similar stimuli, on subsequent presentations. In particular, the update reduces the distance between the best-matching unit’s weights (and its surrounding neurons’ weights) and the incoming input. Furthermore, it organizes the map topologically, so that the weights of close-by neurons are updated in a similar direction and come to react to similar inputs. We refer to [18] for the details.

The learning process is incremental: after the presentation of each input, the map’s representation of the input (and in particular the representation of its best-matching unit) is updated in order to take into account the new incoming stimulus. At the end of the whole process, the SOM has learned to organize the stimuli in a topologically significant way: similar inputs (with respect to Euclidean distance) are mapped to close-by areas of the map, whereas inputs which are far apart from each other are mapped to distant areas of the map.
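
As a concrete illustration of the training loop just described, here is a generic, minimal SOM sketch (not the specific architecture or parameter settings used in [16]); the learning rate and neighbourhood width are kept constant for simplicity, and the weights are initialized at random rather than outside the input range.

```python
import numpy as np

# Minimal generic SOM training sketch: weights is a (rows, cols, dim) array of
# unit weight vectors; each input pulls the best-matching unit and its grid
# neighbours towards itself.

def train_som(inputs, rows=10, cols=10, epochs=20, lr=0.5, sigma=2.0, seed=0):
    rng = np.random.default_rng(seed)
    dim = inputs.shape[1]
    weights = rng.uniform(-1.0, 1.0, size=(rows, cols, dim))  # random init
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for _ in range(epochs):
        for x in inputs:
            # best-matching unit: the unit whose weights are closest to x
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # neighbourhood function: units near the BMU are updated more
            grid_dist = np.linalg.norm(grid - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            # move the BMU and its neighbours towards the input
            weights += lr * h[..., None] * (x - weights)
    return weights

# Hypothetical usage with 50 three-dimensional stimuli:
# weights = train_som(np.random.default_rng(1).uniform(size=(50, 3)))
```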

Once the SOM has learned to categorize, to assess category generalization, Gliozzi and Plunkett [16] define the map’s disposition to consider a new stimulus x as a member of a known category C as a function of the distance of x from the map’s representation of C. They take a minimalist notion of the map’s category representation: it is the ensemble of best-matching units corresponding to the known instances of the category. They use BMU_C to refer to the map’s representation of category C and define category generalization as depending on two elements:

  • the distance of the new stimulus with respect to the category representation

  • compared to the maximal distance from that representation of all known instances of the category

This is captured by the following notion of relative distance (rd for short) [16]:

    rd(x, C) = d(x, BMU_C) / maxd_C     (1)

where d(x, BMU_C) is the (minimal) Euclidean distance between x and C’s category representation BMU_C, and maxd_C, which expresses the precision of the category representation, is the (maximal) Euclidean distance between any known member of the category and the category representation.

With this definition, a given Euclidean distance from a stimulus x to the representation of category C will give rise to a higher relative distance rd if the maximal distance between C’s representation and its known examples is low (and the category representation is precise) than if it is high (and the category representation is coarse). As a function of the relative distance above, Gliozzi and Plunkett then define the map’s Generalization Degree of membership of a new stimulus to category C.
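
A small sketch of how the relative distance of Equation (1) might be computed, with a category represented by the weight vectors of its best-matching units (the function names are ours, not from [16]):

```python
import numpy as np

# Sketch of the relative distance rd of Equation (1): the Euclidean distance
# of a stimulus from the category representation (its best-matching units),
# divided by the maximal such distance of a known category member.

def distance_to_representation(x, bmu_vectors):
    """Minimal Euclidean distance between stimulus x and the category
    representation (the weight vectors of its best-matching units)."""
    return min(np.linalg.norm(np.asarray(x) - np.asarray(b)) for b in bmu_vectors)

def relative_distance(x, bmu_vectors, known_members):
    """rd(x, C) = d(x, BMU_C) / maxd_C (Equation 1).
    Assumes some known member does not coincide with its BMU (maxd_C > 0)."""
    maxd = max(distance_to_representation(y, bmu_vectors) for y in known_members)
    return distance_to_representation(x, bmu_vectors) / maxd
```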

It was observed that the above notion of relative distance (Equation 1) requires a memory of some of the known instances of the category being used (this is needed to calculate the denominator in the equation). This gives rise to a sort of hybrid model in which category representation and some exemplars coexist. An alternative way of formulating the same notion of relative distance would be to calculate online the distance between the known category instance currently examined and the representation of the category being formed.

By judging a new stimulus as belonging to a category by comparing the distance of the stimulus from the category representation with the precision of the category representation, Gliozzi and Plunkett demonstrate [16] that the Numerosity and Variability effects of category generalization, described by Tenenbaum and Griffiths [25] and usually explained with Bayesian tools, can be accommodated within a simple and psychologically plausible similarity-based account, in contrast with what was previously maintained. In the next section, we show that their notion of relative distance can also be used as a basis for a logical semantics of SOMs.

5 Relating self-organising maps and multi-preference models

We aim at showing that, once the SOM has learned to categorize, we can regard the result of the categorization as a multipreference interpretation. Let X be the set of input stimuli from the different categories C1, ..., Ck which have been considered during the learning process.

For each category Ci, we let BMU_Ci be the ensemble of best-matching units corresponding to the input stimuli of category Ci, i.e., BMU_Ci = {BMU(x) : x ∈ X and x is an input stimulus of category Ci}. We regard the learned categories C1, ..., Ck as the concept names (atomic concepts) of the description logic and we let them constitute our set of distinguished concepts C = {C1, ..., Ck}.

To construct a multi-preference interpretation we proceed as follows: first, we fix the domain Δ_s to be the space of all possible stimuli; then, for each category (concept) Ci, we define a preference relation <_Ci over Δ_s, exploiting the notion of relative distance of a stimulus from the map’s representation of Ci. Finally, we define the interpretation of concepts.

Let Δ_s be the set of all the possible stimuli, including all input stimuli as well as the best-matching units of the input stimuli (i.e., X ∪ {BMU(x) : x ∈ X} ⊆ Δ_s). For simplicity, we will assume the space Δ_s of possible stimuli to be finite.

Once the SOM has learned to categorize, the notion of relative distance of a stimulus from a category introduced above can be used to build a binary preference relation <_Ci among the stimuli in Δ_s w.r.t. category Ci as follows: for all x, x' ∈ Δ_s,

    x <_Ci x'  iff  rd(x, Ci) < rd(x', Ci)     (2)

Each preference relation <_Ci is a strict partial order relation on Δ_s. The relation <_Ci is also well-founded, as we have assumed Δ_s to be finite.
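
Equation (2) translates directly into a comparison of relative distances; a tiny sketch, reusing the hypothetical relative_distance helper introduced above:

```python
# Sketch of the preference of Equation (2): x is preferred to x' with respect
# to category C iff its relative distance from C is strictly smaller.

def preferred_wrt_category(x, x_prime, bmu_vectors, known_members):
    return (relative_distance(x, bmu_vectors, known_members)
            < relative_distance(x_prime, bmu_vectors, known_members))
```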

We exploit this notion of preference to define a concept-wise multipreference interpretation associated with the SOM, which we call a cw-model of the SOM. We restrict the DL language to the fragment of EL⊥ (plus typicality) not admitting roles, as in the self-organising map we do not have a representation of role names.

Definition 5 (multipreference-model of a SOM)

The multipreference-model of the SOM is a multipreference interpretation M_s = ⟨Δ_s, <_C1, ..., <_Ck, ·^I⟩ such that:

  • Δ_s is the set of all the possible stimuli, as introduced above;

  • for each Ci ∈ C, <_Ci is the preference relation defined by equivalence (2);

  • the interpretation function ·^I is defined for concept names (i.e., categories) Ci as follows:

    Ci^I = {x ∈ Δ_s : rd(x, Ci) ≤ max_rd_Ci},

    where max_rd_Ci is the maximal relative distance of an input stimulus of category Ci from Ci, that is, max_rd_Ci = max{rd(x, Ci) : x ∈ X and x is an input stimulus of category Ci}. The interpretation function is extended to complex concepts as in Definition 2.

Informally, we interpret as Ci-elements those stimuli whose relative distance from category Ci is not larger than the relative distance of any input exemplar belonging to category Ci. Given <_Ci, we can identify the most typical Ci-elements wrt <_Ci as the Ci-elements whose relative distance from category Ci is minimal, i.e., the elements of min_<Ci(Ci^I). Observe that the best-matching unit BMU(x) of an input stimulus x of category Ci is an element of BMU_Ci. Hence, for y ∈ BMU_Ci, the relative distance of y from category Ci, rd(y, Ci), is 0, as the Euclidean distance of y from the category representation BMU_Ci is 0. Therefore, BMU_Ci ⊆ Ci^I and BMU_Ci ⊆ min_<Ci(Ci^I).
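
Computationally, Definition 5 says that a stimulus counts as a C-element when its relative distance from C does not exceed that of any input exemplar of C, and that the typical C-elements are those at minimal relative distance; a sketch, again with the hypothetical helpers introduced above:

```python
# Sketch of the concept interpretation of Definition 5 and of its typical
# elements; stimuli, bmu_vectors and known_members are as in the sketches above.

def concept_extension(stimuli, bmu_vectors, known_members):
    """C^I: stimuli whose relative distance from C is within the maximal
    relative distance of C's own input exemplars."""
    max_rd = max(relative_distance(y, bmu_vectors, known_members)
                 for y in known_members)
    return [x for x in stimuli
            if relative_distance(x, bmu_vectors, known_members) <= max_rd]

def typical_elements(stimuli, bmu_vectors, known_members):
    """Most typical C-elements: C-elements at minimal relative distance from C
    (in particular, the best-matching units themselves, at distance 0)."""
    ext = concept_extension(stimuli, bmu_vectors, known_members)
    rds = [relative_distance(x, bmu_vectors, known_members) for x in ext]
    best = min(rds)
    return [x for x, r in zip(ext, rds) if r == best]
```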

5.1 Evaluation of concept inclusions by model checking

We have defined a multipreference interpretation M_s where, in the domain Δ_s of the possible stimuli, we are able to identify, for each category Ci, the Ci-elements as well as the most typical Ci-elements wrt <_Ci. We can exploit M_s to verify which inclusions are satisfied by the SOM by model checking, i.e., by checking the satisfiability of inclusions over the model M_s. This can be done both for strict concept inclusions of the form Ci ⊑ Cj and for defeasible inclusions of the form T(Ci) ⊑ Cj, where Ci and Cj are concept names (i.e., categories).

For the verification that a typicality inclusion T(Ci) ⊑ Cj is satisfied in M_s we have to check that the most typical Ci-elements wrt <_Ci are Cj-elements, that is, min_<Ci(Ci^I) ⊆ Cj^I. Note that, besides the elements of BMU_Ci, min_<Ci(Ci^I) may contain other elements of Δ_s having relative distance 0 from Ci. As we do not know, for all the possible stimuli in Δ_s, whether they belong to min_<Ci(Ci^I) or to Cj^I, as an approximation we only check that all elements of BMU_Ci are Cj-elements, that is:

    BMU_Ci ⊆ Cj^I     (3)

Let the relative distance of BMU_Ci from Cj be defined as

    rd(BMU_Ci, Cj) = max{rd(y, Cj) : y ∈ BMU_Ci},

the maximal relative distance of an element of BMU_Ci from Cj. Then we can rewrite condition (3) simply as rd(BMU_Ci, Cj) ≤ max_rd_Cj.

Observe that the relative distance rd(BMU_Ci, Cj) also gives a measure of plausibility of the defeasible inclusion T(Ci) ⊑ Cj: the lower the relative distance of BMU_Ci from Cj, the more plausible the defeasible inclusion T(Ci) ⊑ Cj.
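
Using the hypothetical helpers above, the approximate check (3) and the associated plausibility measure could look as follows (a sketch of our reading of the conditions, not the authors' code):

```python
# Sketch of check (3) for T(C) ⊑ D in the rewritten form rd(BMU_C, D) <= max_rd_D:
# every best-matching unit of C must fall within the extension of D.
# rd(BMU_C, D), the maximal relative distance of a best-matching unit of C
# from D, measures plausibility: the lower, the more plausible the inclusion.

def rd_of_bmus_from(bmus_c, bmu_vectors_d, known_members_d):
    return max(relative_distance(b, bmu_vectors_d, known_members_d)
               for b in bmus_c)

def satisfies_typicality_inclusion(bmus_c, bmu_vectors_d, known_members_d, max_rd_d):
    return rd_of_bmus_from(bmus_c, bmu_vectors_d, known_members_d) <= max_rd_d
```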

Verifying that a strict inclusion Ci ⊑ Cj is satisfied requires checking that Ci^I is included in Cj^I. Exploiting the fact that the map is organized topologically, and using the relative distance rd(BMU_Ci, Cj) of BMU_Ci from Cj, we verify that the relative distance of BMU_Ci from Cj plus the maximal relative distance of a Ci-element from Ci is not greater than the maximal relative distance of a Cj-element from Cj:

    rd(BMU_Ci, Cj) + max_rd_Ci ≤ max_rd_Cj     (4)

where max_rd_Ci and max_rd_Cj are as in Definition 5. That is, the Ci-element most distant from Cj is nearer to Cj than the most distant Cj-element.
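
Under the same reading, the check (4) for strict inclusions reduces to a single comparison (a sketch based on our reconstruction of the formula above):

```python
# Sketch of check (4) for a strict inclusion C ⊑ D: the relative distance of
# BMU_C from D, plus the maximal relative distance of a C-element from C, must
# not exceed the maximal relative distance of a D-element from D.

def satisfies_strict_inclusion(rd_bmu_c_from_d, max_rd_c, max_rd_d):
    return rd_bmu_c_from_d + max_rd_c <= max_rd_d
```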

Computing conditions (3) and (4) on the SOM may be non-trivial, depending on the number of input stimuli that have been considered in the learning phase (the size of the set X of input exemplars). However, from a logical point of view, this is just model checking. Gliozzi and Plunkett have considered self-organising maps that are able to learn from a limited number of input stimuli, although this is not generally true for all self-organising maps [16].

5.2 Combining preferences into a preferential interpretation

The multipreference interpretation M_s introduced in Definition 5 allows us to determine the set of Ci-elements for all learned categories Ci and to define the most typical Ci-elements, exploiting the preference relation <_Ci. However, we are not able to define the most typical (Ci ⊓ Cj)-elements just using a single preference. Starting from M_s, we construct a concept-wise multipreference interpretation M_cw that combines the preference relations <_C1, ..., <_Ck in M_s into a global preference relation <, and provides an interpretation of all typicality concepts, such as, for instance, T(Ci ⊓ Cj). The interpretation M_cw is constructed from M_s according to Definition 4.

The construction exploits a notion of specificity. Observe that the specificity relation between two concepts Ci and Cj can be determined based on the single model M_s of the SOM: Ci ≺ Cj if Ci ⊑ Cj is satisfied in M_s and Cj ⊑ Ci is not satisfied in M_s.

Definition 6 (cw-model of a SOM)

The cw-model of a SOM is a cw-interpretation M_cw = ⟨Δ_s, <_C1, ..., <_Ck, <, ·^I⟩ such that the tuple ⟨Δ_s, <_C1, ..., <_Ck, ·^I⟩ is a multipreference model of the SOM according to Definition 5, and < is the global preference relation defined from <_C1, ..., <_Ck as in Definition 4, point (c).

In particular, in M_cw, as in all cw-interpretations (see Definition 4), the interpretation of typicality concepts is defined based on the global preference relation as (T(C))^I = min_<(C^I), for all concepts C. Here, we are considering concepts in the fragment of the language without roles, which are built from the concept names C1, ..., Ck (the learned categories). The model M_cw can be considered a sort of (unique) canonical model for the SOM, representing what holds in that state of the SOM (e.g., after the learning phase). The logical inclusions that “follow from the SOM” are therefore the inclusions that hold in the single model M_cw (the situation is similar to the case of Horn clauses, where there is a unique minimal canonical model describing all the atomic logical consequences of the knowledge base).

As M_cw is a cw-interpretation, the result that the triple ⟨Δ_s, <, ·^I⟩ is a preferential interpretation as in the KLM approach [19, 20] holds for M_cw, and tells us that the model M_cw provides a logical semantics for the SOM which is well-defined, as the induced consequence relation is a preferential consequence relation and therefore satisfies all the KLM properties of preferential consequence relations.

The verification of arbitrary defeasible inclusions on M_cw can, in principle, be done by model checking, but it might require considering all the (possibly many) input stimuli, i.e., all the domain elements in Δ_s, which may be unfeasible in practice. As an alternative, the identification of the set of strict and defeasible inclusions satisfied by the SOM over the learned categories (as done in Section 5.1) allows a knowledge base to be defined and reasoned on symbolically, for instance using an approach similar to the one described in Section 3 for ranked knowledge bases. In particular, Answer Set Programming (specifically, asprin) has been used to achieve defeasible reasoning under the multipreference approach for a lightweight description logic of the EL family [15]. Ranked knowledge bases have been considered, where defeasible inclusions are given a rank that provides a measure of plausibility of the defeasible inclusion, and multipreference entailment is reformulated as a problem of computing preferred answer sets. As we have seen, a measure of plausibility can as well be assigned to the defeasible inclusions satisfied by the SOM.

5.3 Category generalization process as iterated belief revision

We have seen that one can give an interpretation of a self-organising map after the learning phase, as a preferential model. However, the state of the SOM during the learning phase can as well be represented as a multipreference model (precisely in the same way). During training, the current state of the SOM corresponds to a model representing the beliefs about the input stimuli considered so far (beliefs concerning the category of the stimuli).

The category generalization process can then be regarded as a model building process and, in a way, as a belief revision process. Initially we do not know the category of the stimuli in the domain Δ_s. In the initial model, call it M_s^0 (over the domain Δ_s), the interpretation of each concept Ci is empty. M_s^0 is the model of a knowledge base containing a strict inclusion Ci ⊑ ⊥, for all Ci ∈ C.

Each time a new input stimulus x (of category Ci) is considered, the model is revised by adding the stimulus x (and its best-matching unit BMU(x)) into the proper category Ci. Not only is the interpretation of the category revised by the addition of x and BMU(x) to Ci^I (so that Ci ⊑ ⊥ does not hold any more), but the associated preference relation <_Ci is revised as well, since the addition of x modifies the set BMU_Ci of best-matching units for category Ci, as well as the relative distance of a stimulus from Ci. That is, a revision step may change the set of conditionals which are satisfied by the model.
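
A single revision step can be sketched as follows, with a toy representation of the evolving model and stimuli as plain tuples (all names are ours):

```python
# Sketch of one revision step: a new input stimulus of category C is added,
# together with its best-matching unit, to the category; this changes BMU_C,
# the relative distances and hence the preference relation <_C.

def revise(model, category, stimulus, bmu):
    entry = model.setdefault(category, {"members": [], "bmus": []})
    entry["members"].append(stimulus)   # the stimulus now belongs to C
    if bmu not in entry["bmus"]:
        entry["bmus"].append(bmu)       # BMU_C may gain a new unit
    return model

# Hypothetical usage: the initially empty model is revised with one stimulus.
model = {}
model = revise(model, "C1", (0.1, 0.2, 0.3), (0.1, 0.25, 0.3))
```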

If the learning phase converges to a solution, the final state of the SOM is captured by the model M_s obtained through a sequence of revision steps which, starting from M_s^0, gives rise to a sequence of models M_s^0, M_s^1, ..., M_s^n (with M_s^n = M_s). At each step the knowledge base is not represented explicitly, but the model of the knowledge base at step i is used to determine the model at step i+1 as a result of revision. The knowledge base K_n (the set of all the strict and defeasible inclusions satisfied in M_s^n) can then be regarded as the knowledge base obtained from the initial knowledge base K_0 through a sequence of revision steps, i.e., by revising, at each step, the knowledge base of the previous model with the new input stimulus. In fact, from any state of the SOM we can construct a corresponding model, which determines a knowledge base, the set of (strict and defeasible) inclusions satisfied in that model. It would be interesting to study the properties of this notion of revision and compare it with the notions of iterated belief revision studied in the literature [7, 9, 17].

6 Conclusions

We have explored the relationships between a concept-wise multipreference semantics and self-organising maps. On the one hand, we have seen that self-organising maps can be given a logical semantics in terms of KLM-style preferential interpretations; the model can be used to learn or to validate conditional knowledge from the empirical data used in the category generalization process, by model checking; and the learning process in the self-organising map can be regarded as an iterated belief revision process. On the other hand, the plausibility of the concept-wise multipreference semantics is supported by the fact that self-organising maps are considered psychologically and biologically plausible neural network models.

References

  • [1] F. Baader, S. Brandt, and C. Lutz. Pushing the EL envelope. In L.P. Kaelbling and A. Saffiotti, editors, Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI 2005), pages 364–369, Edinburgh, Scotland, UK, August 2005. Professional Book Center.
  • [2] P. A. Bonatti, M. Faella, I. Petrova, and L. Sauro. A new semantics for overriding in description logics. Artif. Intell., 222:1–48, 2015.
  • [3] Gerhard Brewka. A rank based description language for qualitative preferences. In Proceedings of the 16th European Conference on Artificial Intelligence, ECAI’2004, Valencia, Spain, August 22-27, 2004, pages 303–307, 2004.
  • [4] Katarina Britz, Johannes Heidema, and Thomas Meyer. Semantic preferential subsumption. In G. Brewka and J. Lang, editors, Principles of Knowledge Representation and Reasoning: Proceedings of the 11th International Conference (KR 2008), pages 476–484, Sidney, Australia, September 2008. AAAI Press.
  • [5] G. Casini, T. Meyer, I. J. Varzinczak, and K. Moodley. Nonmonotonic Reasoning in Description Logics: Rational Closure for the ABox. In 26th International Workshop on Description Logics (DL 2013), volume 1014 of CEUR Workshop Proceedings, pages 600–615, 2013.
  • [6] G. Casini and U. Straccia. Rational Closure for Defeasible Description Logics. In T. Janhunen and I. Niemelä, editors, Proc. 12th European Conf. on Logics in Artificial Intelligence (JELIA 2010), volume 6341 of LNCS, pages 77–90, Helsinki, Finland, September 2010. Springer.
  • [7] A. Darwiche and J. Pearl. On the logic of iterated belief revision. Artificial Intelligence, 89:1–29, 1997.
  • [8] J. P. Delgrande. A first-order conditional logic for prototypical properties. Artificial Intelligence, 33(1):105–130, 1987.
  • [9] L. Giordano, V. Gliozzi, and N. Olivetti. Iterated Belief Revision and Conditional Logic. Studia Logica, 70:23–47, 2002.
  • [10] L. Giordano, V. Gliozzi, N. Olivetti, and G. L. Pozzato. Preferential Description Logics. In Nachum Dershowitz and Andrei Voronkov, editors, Proceedings of LPAR 2007 (14th Conference on Logic for Programming, Artificial Intelligence, and Reasoning), volume 4790 of LNAI, pages 257–272, Yerevan, Armenia, October 2007. Springer-Verlag.
  • [11] L. Giordano, V. Gliozzi, N. Olivetti, and G. L. Pozzato. Reasoning about typicality in low complexity DLs: the logics EL⊥Tmin and DL-LitecTmin. In Proc. 22nd Int. Joint Conf. on Artificial Intelligence (IJCAI 2011), pages 894–899, Barcelona, July 2011. Morgan Kaufmann.
  • [12] L. Giordano, V. Gliozzi, N. Olivetti, and G. L. Pozzato. Semantic characterization of rational closure: From propositional logic to description logics. Artificial Intelligence, 226:1–33, 2015.
  • [13] L. Giordano, V. Gliozzi, N. Olivetti, and G.L. Pozzato. Minimal Model Semantics and Rational Closure in Description Logics. In 26th International Workshop on Description Logics (DL 2013), volume 1014, pages 168–180, 2013.
  • [14] L. Giordano and D. Theseider Dupré. ASP for minimal entailment in a rational extension of SROEL. TPLP, 16(5-6):738–754, 2016. DOI: 10.1017/S1471068416000399.
  • [15] L. Giordano and D. Theseider Dupré. An ASP approach for reasoning in a concept-aware multipreferential lightweight DL. CoRR, abs/2006.04387, 2020.
  • [16] V. Gliozzi and K. Plunkett. Grounding bayesian accounts of numerosity and variability effects in a similarity-based framework: the case of self-organising maps. Journal of Cognitive Psychology, 31(5–6), 2019.
  • [17] Gabriele Kern-Isberner. A thorough axiomatization of a principle of conditional preservation in belief revision. Ann. Math. Artif. Intell., 40(1-2):127–164, 2004.
  • [18] T. Kohonen, M.R. Schroeder, and T.S. Huang, editors. Self-Organizing Maps, Third Edition. Springer Series in Information Sciences. Springer, 2001.
  • [19] S. Kraus, D. Lehmann, and M. Magidor. Nonmonotonic reasoning, preferential models and cumulative logics. Artificial Intelligence, 44(1-2):167–207, 1990.
  • [20] D. Lehmann and M. Magidor. What does a conditional knowledge base entail? Artificial Intelligence, 55(1):1–60, 1992.
  • [21] D. Lewis. Counterfactuals. Basil Blackwell Ltd, 1973.
  • [22] R. Miikkulainen, J. Bednar, Y. Choe, and J. Sirosh. Computational maps in the visual cortex. Springer, 2002.
  • [23] D. Nute. Topics in conditional logic. Reidel, Dordrecht, 1980.
  • [24] M. Pensel and A. Turhan. Reasoning in the defeasible description logic - computing standard inferences under rational and relevant semantics. Int. J. Approx. Reasoning, 103:28–70, 2018.
  • [25] J. B. Tenenbaum and T. L. Griffiths. Generalization, similarity, and Bayesian inference. Behavioral and Brain Sciences, 24:629–641, 2001.