Formal Concept Analysis (FCA) is a formalism for knowledge representation based on the formalization of “concepts” and “concept hierarchies”. In traditional philosophy, a concept is considered to be determined by its extent and its intent. The extent contains all entities (e.g., objects, individuals) belonging to the concept, while the intent includes all properties common to all entities in the extent. The concept hierarchy states that “a concept is more general if it contains more entities”, and is also called a specialization-relation on concepts. FCA rests on the mathematical notions of binary relations, Galois connections and ordered structures, and has its roots in philosophy. It provides methods to extract and display knowledge from databases, and has many applications in knowledge representation and management, data mining, and machine learning.
In philosophy, ontology is the study of the categories of things that exist or may exist in a specific domain. In computer science, an ontology is an explicit conceptualization of a given domain in the form of concepts and their relations (roles), as well as concept instances that are linked by relations instantiating generic roles. Roles are usually directed, so that a given role maps the instances of a source concept to those of a target one. Ontology design and utilization are presently gaining increasing interest with the emergence of the Semantic Web, and standardization efforts are progressing in the field of ontological languages such as OWL. Many studies have addressed ontology construction, mapping and integration [19, 21].
In an ontology, a concept can be understood through its FCA-intent (attributes), and the FCA-entities (objects) as instantiations of concepts. One particular relation between concepts represents the is-a hierarchy. This corresponds to the specialization-relation in FCA, and provides a taxonomy on the attributes of the domain of interest. The primary goal of an ontology is to model the concepts and their relations in a domain of interest, whereas FCA aims to discover concepts from a given data set. Within FCA, an interactive method for knowledge acquisition called “attribute exploration” has been developed to discover and express knowledge about a domain of interest with the help of a domain expert [11, 12, 13]. This method has been widely used for ontology engineering and refinement (see Section 7).
FCA and Ontology both use ordered structures to model or manage knowledge. To the best of our knowledge, the work by Cimiano et al. is the first study investigating the possible use of ontologies in FCA, by first clustering text documents using an ontology and then applying FCA. One recurrent problem in FCA is the huge number of concepts that can be derived from a data set, since this number may be exponential in the size of the context. How can we handle this problem? Many techniques have been proposed that use or produce a taxonomy on attributes or objects to control the size of the context and the corresponding concept lattice. Another trend is to query pattern bases (e.g., rules and concepts) in a similar way as querying databases, in order to display the patterns that are the most relevant to the user.
Patterns are a concise and semantically rich representation of data. They can be clusters, concepts, association rules, decision trees, etc. In this work we analyze some possible ways to abstract (group) objects/attributes together to get generalized patterns such as generalized itemsets and association rules. The problem we address in this paper is the use of taxonomies on attributes or objects to produce and manipulate generalized patterns.
The rest of this contribution is organized as follows. In Section 2 we introduce the basic notions of FCA. Section 3 presents different ways to group attributes/objects to produce generalized patterns. In Section 4 we discuss line diagrams of generalized patterns while in Section 5 the size of the generalized concept set is compared to the size of the initial (before generalization) concept set. Some experimental results are shown in Section 6. Finally, existing work about combining FCA with Ontology is briefly described in Section 7.
2 Formal Concept Analysis and Data Mining
2.1 Elementary information systems, contexts and concepts
The elementary way to encode information is to describe, by means of a relation, that some objects have some properties. Figure 1 (left) describes items that appear in eight transactions of a market basket analysis application. Such a setting defines a binary relation $I$ between the set $G$ of objects/transactions and the set $M$ of properties/items. The triple $(G, M, I)$ is called a formal context. In Subsection 2.4, we will see how to convert data from different formats (many-valued contexts) to binary contexts. When an object $g$ is in relation with an attribute $m$, we write $gIm$ or $(g, m) \in I$.
Some interesting patterns are formed by objects sharing the same properties. In data mining applications, many techniques are based on the formalization of such patterns, namely that of concepts. A concept is defined by its extent (all entities belonging to this concept) and its intent (all attributes common to all objects of this concept).
In a formal context $(G, M, I)$, a formal concept is a pair $(A, B)$ such that $B$ is exactly the set of all properties shared by the objects in $A$, and $A$ is the set of all objects that have all the properties in $B$. We set $A' := \{m \in M \mid gIm \text{ for all } g \in A\}$ and $B' := \{g \in G \mid gIm \text{ for all } m \in B\}$. Then $(A, B)$ is a concept of $(G, M, I)$ iff $A' = B$ and $B' = A$. The extent of the concept $(A, B)$ is $A$ and its intent is $B$. We denote by $\mathfrak{B}(G, M, I)$, $\mathrm{Int}(G, M, I)$ and $\mathrm{Ext}(G, M, I)$ the set of concepts, intents and extents of the formal context $(G, M, I)$, respectively. A subset $X$ of $G$ (or of $M$) is closed if $X'' = X$, where $X''$ denotes $(X')'$. Closed subsets of $G$ are exactly the extents, and closed subsets of $M$ the intents, of $(G, M, I)$.
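As a minimal sketch (not from the paper, with an illustrative toy context), the two derivation operators and the concept condition can be written as:

```python
# Toy formal context (G, M, I); I is stored as a set of (object, attribute) pairs.
G = {1, 2, 3, 4}
M = {"a", "b", "c"}
I = {(1, "a"), (1, "b"), (2, "b"), (3, "b"), (3, "c"), (4, "a"), (4, "b")}

def intent(objects):
    # A' : the attributes shared by all objects in A
    return {m for m in M if all((g, m) in I for g in objects)}

def extent(attributes):
    # B' : the objects having all attributes in B
    return {g for g in G if all((g, m) in I for m in attributes)}

def is_concept(A, B):
    # (A, B) is a formal concept iff A' = B and B' = A
    return intent(A) == B and extent(B) == A
```

For instance, `is_concept({1, 4}, {"a", "b"})` holds in this context, since objects 1 and 4 are exactly those having both attributes a and b.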
In the basket market analysis and association rule mining framework, the set $G$ of objects is usually the set of transactions (or customers), the set $M$ of attributes is the set of bought items (or products), and itemsets are subsets of $M$. The support of an itemset $B \subseteq M$ is defined by $\mathrm{supp}(B) := \frac{|B'|}{|G|}$. Itemsets can be classified with respect to a threshold $minsupp$, so that an itemset $B$ is frequent if $\mathrm{supp}(B) \geq minsupp$. One advantage of using FCA in data mining is that it reduces the computation of frequent itemsets to the frequent closed itemsets (i.e., frequent intents) only (see [22, 23, 31, 33, 36]). Note that $\mathrm{supp}(B) = \mathrm{supp}(B'')$, and subsets of frequent itemsets are frequent. Then all frequent itemsets can be deduced from the closed ones.
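A small sketch (hypothetical transaction data, not from the paper) of support and closure, illustrating that an itemset always has the same support as its closure:

```python
from itertools import combinations

# Hypothetical transactions: objects are transactions, attributes are items.
transactions = {
    "t1": {"bread", "milk"},
    "t2": {"bread", "butter"},
    "t3": {"bread", "milk", "butter"},
    "t4": {"milk"},
}
items = set().union(*transactions.values())

def support(itemset):
    # supp(B) = |B'| / |G| : fraction of transactions containing B
    return sum(1 for t in transactions.values() if itemset <= t) / len(transactions)

def closure(itemset):
    # B'' : intersection of all transactions containing B (all items if none does)
    rows = [t for t in transactions.values() if itemset <= t]
    return set.intersection(*rows) if rows else set(items)

# Frequent itemsets can be recovered from the frequent *closed* itemsets alone.
closed_itemsets = {frozenset(closure(set(c)))
                   for r in range(1, len(items) + 1)
                   for c in combinations(items, r)}
```

Here, for example, the closure of {butter} is {bread, butter}, and both have the same support.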
There is a hierarchy between concepts, stating that a concept $(A_1, B_1)$ is more general than a concept $(A_2, B_2)$ if its extent is larger than the extent of $(A_2, B_2)$, or equivalently if its intent is smaller than the intent of $(A_2, B_2)$. The concept hierarchy is formalized with a relation $\leq$ defined on $\mathfrak{B}(G, M, I)$ by $(A_1, B_1) \leq (A_2, B_2) :\iff A_1 \subseteq A_2 \ (\iff B_2 \subseteq B_1)$. This is an order relation, and is also called a specialization/generalization-relation on concepts. In fact, the concept $(A_1, B_1)$ is called a specialization of the concept $(A_2, B_2)$, and the concept $(A_2, B_2)$ a generalization of the concept $(A_1, B_1)$, whenever $(A_1, B_1) \leq (A_2, B_2)$ holds.
For any list $\mathcal{C}$ of concepts of $(G, M, I)$, there is a concept of $(G, M, I)$ that is more general than every concept in $\mathcal{C}$ and more specific than every concept more general than every concept in $\mathcal{C}$ (i.e., the supremum of $\mathcal{C}$, usually denoted by $\bigvee \mathcal{C}$), and there is a concept of $(G, M, I)$ that is a specialization of every concept in $\mathcal{C}$ and a generalization of every specialization of all concepts in $\mathcal{C}$ (i.e., the infimum of $\mathcal{C}$, also denoted by $\bigwedge \mathcal{C}$).[1] Then every subset of $\mathfrak{B}(G, M, I)$ has an infimum and a supremum. Hence, $\mathfrak{B}(G, M, I)$ is a complete lattice, called the concept lattice of the context $(G, M, I)$. Recall that a lattice is an algebra $(L, \vee, \wedge)$ of type $(2, 2)$ such that $\vee$ and $\wedge$ are idempotent, commutative, associative and satisfy the absorption laws: $a \vee (a \wedge b) = a$ and $a \wedge (a \vee b) = a$. It is complete if every subset has an infimum and a supremum.

[1] If $\mathcal{C}$ is a two-element set $\{c_1, c_2\}$, we write $c_1 \vee c_2$ and $c_1 \wedge c_2$ for its supremum and its infimum.
For $g \in G$ and $m \in M$ we set $\gamma g := (\{g\}'', \{g\}')$ and $\mu m := (\{m\}', \{m\}'')$. The object concepts $\gamma g$ and the attribute concepts $\mu m$ form the “building blocks” of $\mathfrak{B}(G, M, I)$. In fact, every concept of $(G, M, I)$ is a supremum of some $\gamma g$'s and an infimum of some $\mu m$'s.[2] Thus, the set $\gamma G$ is $\bigvee$-dense and the set $\mu M$ is $\bigwedge$-dense in $\mathfrak{B}(G, M, I)$.

[2] For $(A, B) \in \mathfrak{B}(G, M, I)$ we have $(A, B) = \bigvee_{g \in A} \gamma g = \bigwedge_{m \in B} \mu m$.
The basic theorem on formal concept analysis is given below.
The set $\mathfrak{B}(G, M, I)$ of all concepts of a formal context $(G, M, I)$, ordered by the specialization/generalization-relation, forms a complete lattice in which infimum and supremum are given by
$$\bigwedge_{t \in T}(A_t, B_t) = \Big(\bigcap_{t \in T} A_t,\ \big(\bigcup_{t \in T} B_t\big)''\Big) \quad\text{and}\quad \bigvee_{t \in T}(A_t, B_t) = \Big(\big(\bigcup_{t \in T} A_t\big)'',\ \bigcap_{t \in T} B_t\Big).$$
Conversely, a complete lattice $L$ is isomorphic to the concept lattice of a context $(G, M, I)$ iff there are maps $\tilde{\gamma}: G \to L$ and $\tilde{\mu}: M \to L$ such that $\tilde{\gamma}(G)$ is $\bigvee$-dense in $L$, $\tilde{\mu}(M)$ is $\bigwedge$-dense in $L$, and $gIm \iff \tilde{\gamma}g \leq \tilde{\mu}m$.
Many research studies in FCA have focused on the design and implementation of efficient algorithms for computing the set of concepts. The number of concepts can be extremely large, even exponential in the size of the context.[3] So how are such large sets of concepts handled? Many techniques have been proposed, based on context decomposition or lattice pruning/reduction (atlas decomposition, direct or subdirect decomposition, iceberg concept lattices, nested line diagrams, …).

[3] A context of size $n \times n$ can have up to $2^n$ concepts.
2.2 Labeled line diagrams of concept lattices
One of the strengths of FCA is its ability to display knowledge pictorially, at least for contexts of reasonable size. Finite concept lattices can be represented by labeled Hasse diagrams (see Figure 1). Each node represents a concept. Each object label $g$ is written underneath the node representing $\gamma g$, and each attribute label $m$ above the node representing $\mu m$. The extent of a concept represented by a node is then given by all object labels reachable from the node downwards, and the intent by all attribute labels reachable upwards. For example, each node carrying an object label on the right side of Figure 1 represents the corresponding object concept, while a node with no label represents a concept whose extent and intent are recovered from the labels below and above it. Diagrams are valuable tools for visualizing data. However, drawing a good diagram is a big challenge: the concept lattice can be very large and have a complex structure. Therefore, we need tools to “approximate” the output by reducing the size of the input, making the structure nicer, or exploring the diagram layer by layer. For the last case, FCA offers nested line diagrams as a means to visualize the concepts level-wise.
Assume that we want to examine a context $(G, M, I)$ where $M$ is a large set. We can split $M$ into two sets $M_1$ and $M_2$ and consider the subcontexts $\mathbb{K}_1 := (G, M_1, I_1)$ and $\mathbb{K}_2 := (G, M_2, I_2)$, where $I_j = I \cap (G \times M_j)$, $j = 1, 2$. The subsets $M_1$ and $M_2$ need not be disjoint. The only requirement is that $M = M_1 \cup M_2$. The idea is to have a view of the structure restricted to the attributes in $M_1$, and then refine it with the attributes in $M_2$ to get the whole view. Therefore, we construct the lattices $\mathfrak{B}(\mathbb{K}_1)$ and $\mathfrak{B}(\mathbb{K}_2)$, which are of smaller size than $\mathfrak{B}(G, M, I)$, and combine them to get $\mathfrak{B}(G, M, I)$. The extents of $(G, M, I)$ are exactly the intersections of extents of $\mathbb{K}_1$ and $\mathbb{K}_2$. We first draw a line diagram for $\mathfrak{B}(\mathbb{K}_1)$ (which corresponds to a rough view), with each node large enough to contain a copy of the line diagram of $\mathfrak{B}(\mathbb{K}_2)$. Afterwards, we insert a copy of the line diagram of $\mathfrak{B}(\mathbb{K}_2)$ in each node of the line diagram of $\mathfrak{B}(\mathbb{K}_1)$ and mark on these copies only the nodes that are effectively concepts of $(G, M, I)$. The constructed diagram is called a nested line diagram, and its illustration shown in Figure 5 was produced with ToscanaJ (http://toscanaj.sourceforge.net).
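The key property behind nested line diagrams, namely that the extents of the full context are exactly the intersections of extents of the two subcontexts, can be checked on a toy example (illustrative data, not from the paper):

```python
from itertools import combinations

# Toy context split into two attribute sets M1 and M2.
G = [1, 2, 3]
M1, M2 = ["a", "b"], ["c"]
I = {(1, "a"), (2, "a"), (2, "b"), (2, "c"), (3, "c")}

def extents(attrs):
    # All extents of the subcontext on `attrs`: the sets B' for every B ⊆ attrs.
    exts = set()
    for r in range(len(attrs) + 1):
        for B in combinations(attrs, r):
            exts.add(frozenset(g for g in G
                               if all((g, m) in I for m in B)))
    return exts

full_extents = extents(M1 + M2)
# Every extent of (G, M, I) is an intersection of an extent of (G, M1, I1)
# with an extent of (G, M2, I2), and vice versa.
combined = {a & b for a in extents(M1) for b in extents(M2)}
```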
2.3 Implications and association rules from contexts
The knowledge extracted from a formal context and its corresponding concept lattice can also be displayed in the form of association rules (including implications). Let $M$ be a set of properties or attributes. An association rule between attributes in $M$ is a pair $(B_1, B_2)$ of subsets of $M$, denoted by $B_1 \to B_2$, where $B_1$ is its premise and $B_2$ its conclusion. The support of a rule $B_1 \to B_2$ is defined by $\mathrm{supp}(B_1 \to B_2) := \mathrm{supp}(B_1 \cup B_2)$, and its confidence by $\mathrm{conf}(B_1 \to B_2) := \frac{\mathrm{supp}(B_1 \cup B_2)}{\mathrm{supp}(B_1)}$. A rule $B_1 \to B_2$ is a valid implication in a context if every object having all the attributes in $B_1$ also has all the attributes in $B_2$. A rule is strong in $(G, M, I)$ with respect to the thresholds $minsupp$ and $minconf$ if $B_1 \cup B_2$ is a frequent itemset and $\mathrm{conf}(B_1 \to B_2) \geq minconf$. In Apriori-like algorithms, rule extraction is done in two steps: detection of all frequent itemsets, and utilization of the frequent itemsets to generate association rules whose confidence is at least $minconf$. While the second step is relatively easy and cost-effective, the first one presents a great challenge because the set of frequent itemsets may grow exponentially with the whole set of items. One substantial contribution of FCA to association rule mining is that it speeds up the computation of frequent itemsets and association rules by concentrating only on closed itemsets [22, 23, 31, 33, 36] and by computing minimal rule sets such as the Guigues-Duquenne basis. Another solution to the problem of the overwhelming number of rules is to extract generalized association rules using a taxonomy on items. Before we move to generalized patterns, let us see how data are transformed into binary contexts, the suitable format for our data.
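The support, confidence and strength conditions above can be sketched as follows (toy transactions, illustrative only):

```python
# Hypothetical transaction list; each transaction is a set of items.
transactions = [
    {"a", "b"},
    {"a", "b", "c"},
    {"b", "c"},
    {"a"},
]

def supp(itemset):
    # supp(B) = fraction of transactions containing all items of B
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def conf(premise, conclusion):
    # conf(B1 -> B2) = supp(B1 ∪ B2) / supp(B1)
    return supp(premise | conclusion) / supp(premise)

def is_strong(premise, conclusion, minsupp, minconf):
    # strong rule: frequent premise ∪ conclusion, and confidence >= minconf
    return (supp(premise | conclusion) >= minsupp
            and conf(premise, conclusion) >= minconf)
```

For instance, the rule {a} → {b} holds in 2 of the 3 transactions containing a, so its confidence is 2/3.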
2.4 Information Systems
Frequently, data are not directly encoded in a “binary” form, but rather as a many-valued context, in the form of a tuple $(G, M, W, I)$ of sets such that $I \subseteq G \times M \times W$, where $(g, m, w) \in I$ and $(g, m, v) \in I$ imply $w = v$. $G$ is called the set of objects, $M$ the set of attributes (or attribute names) and $W$ the set of attribute values. If $(g, m, w) \in I$, then $w$ is the value of the attribute $m$ for the object $g$. Another notation is $m(g) = w$, where $m$ is a partial map from $G$ to $W$. Many-valued contexts can be transformed into binary contexts via conceptual scaling. A conceptual scale for an attribute $m$ of $(G, M, W, I)$ is a binary context $\mathbb{S}_m := (G_m, M_m, I_m)$ such that $m(G) \subseteq G_m$. Intuitively, $\mathbb{S}_m$ discretizes or groups the attribute values into $M_m$, and $I_m$ describes how each attribute value is related to the elements in $M_m$. For an attribute $m$ of $(G, M, W, I)$ and a conceptual scale $\mathbb{S}_m$, we derive a binary context $(G, M_m, I^m)$ with $g I^m n :\iff m(g)\, I_m\, n$. This means that an object $g$ is in relation with a scaled attribute $n$ iff the value of $m$ on $g$ is in relation with $n$ in $\mathbb{S}_m$. With a conceptual scale for each attribute we get the derived context $(G, N, J)$, where $N = \bigcup_{m \in M} M_m$ and $J = \bigcup_{m \in M} I^m$. In practice, the set of objects remains unchanged; each attribute name $m$ is replaced by the scaled attributes in $M_m$. An information system is a many-valued context with a set of scales. The choice of a suitable set of scales depends on the interpretation, and is usually done with the help of a domain expert. A Conceptual Information System is a many-valued context together with a set of conceptual scales (called a conceptual schema) [26, 29]. Other scaling methods have also been proposed (see, e.g., [24, 25]). The methods presented in Section 3 are actually a form of scaling.
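A minimal sketch of conceptual scaling (hypothetical attribute "age" and scale names; the predicates play the role of the scale relation $I_m$):

```python
# Many-valued attribute "age": object -> value.
people = {"ann": 8, "bob": 25, "eve": 67}

# A conceptual scale: each scale attribute n is a predicate on attribute values.
age_scale = {
    "age<18":  lambda w: w < 18,
    "age>=18": lambda w: w >= 18,
    "age>=65": lambda w: w >= 65,
}

# Derived binary context: g I^m n  iff  the value m(g) is related to n in the scale.
derived = {(g, n)
           for g, w in people.items()
           for n, holds in age_scale.items() if holds(w)}
```

The objects stay unchanged; only the single attribute "age" is replaced by the three scaled attributes.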
3 Generalized Patterns
In the field of data mining, generalized patterns represent pieces of knowledge extracted from data when an ontology is used. In this paper, we focus on exploiting generalization hierarchies attached to properties (and even objects) to get a lattice with more abstract concepts. Producing generalized patterns from concept lattices when a taxonomy on attributes is provided can be done in different ways with distinct performance costs that depend on the peculiarities of the input (e.g., size, density) and the operations used.
In the following we formalize the way generalized patterns are produced. Let $\mathbb{K} := (G, M, I)$ be a context. The attributes of $\mathbb{K}$ can be grouped together to form another set $S$ of attributes, yielding a context whose attributes are more general than those in $M$. For the basket market analysis example, items/products can be generalized into product lines and then product categories. The context $(G, M, I)$ is then replaced with a context $(G, S, J)$, as in the scaling process, where $S$ can be seen as an index set such that the family $\{M_s \subseteq M \mid s \in S\}$ covers $M$. We will usually identify the group $M_s$ with the index $s$.
There are mainly three ways to express the binary relation $J$ between the objects of $G$ and the (generalized) attributes of $S$:
($\exists$-case). Consider an information table describing companies and their branches in North America. We first set up a context whose objects are companies and whose attributes are the cities where these companies have or may have branches. If there are too many cities, we can decide to group them into provinces (in Canada) or states (in USA) to reduce the number of attributes. Then the (new) set of attributes is a set whose elements are states and provinces. It is quite natural to state that a company $g$ has a branch in a province/state $s$ if $g$ has a branch in a city which belongs to the province/state $s$. Formally, $g$ has attribute $s$ iff there is $m \in s$ such that $g$ has attribute $m$, i.e., $g J s :\iff \exists m \in s,\ gIm$.
($\forall$-case). Consider an information system about Ph.D. students and the components of the comprehensive exam (CE). Assume that the components are the written part, the oral part, and the thesis proposal, and that a student succeeds in the exam if he succeeds in the three components of that exam. The objects of the context are Ph.D. students and the attributes are the different exams taken by students. If we group together the different components, for example $s_{CE} := \{\text{written}, \text{oral}, \text{proposal}\}$, then it becomes natural to state that a student succeeds in his comprehensive exam if he succeeds in all the exam parts of $s_{CE}$; i.e., $g$ has attribute $s$ if for all $m$ in $s$, $g$ has attribute $m$. Formally, $g J s :\iff \forall m \in s,\ gIm$.
($\alpha$-case). Here $g J s :\iff \frac{|\{m \in s \mid gIm\}|}{|s|} \geq \alpha(s)$, where $\alpha(s)$ is a threshold set by the user for the generalized attribute $s$. This case generalizes the $\exists$-case ($\alpha(s) = \frac{1}{|s|}$) and the $\forall$-case ($\alpha(s) = 1$). To illustrate this case, let us consider a context describing different specializations in a given Master degree program. For each program there is a set of mandatory courses and a set of optional ones. Moreover, there is a predefined number of courses that a student should pass to get a degree in a given specialization. Assume that to get a Master in Computer Science with a specialization in “computational logic”, a student must pass seven courses from a set of mandatory courses and three courses from a set of optional ones. Then, we can introduce two generalized attributes $s_1$ (the mandatory courses) and $s_2$ (the optional ones), so that a student succeeds in the group $s_1$ if he passes at least seven courses from $s_1$, and succeeds in $s_2$ if he passes at least three courses from $s_2$. So, $\alpha(s_1) = \frac{7}{|s_1|}$ and $\alpha(s_2) = \frac{3}{|s_2|}$.
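The three cases can be sketched on a toy context (group names and data are illustrative, not from the paper):

```python
# Toy context; `groups` plays the role of the generalized attribute set S.
I = {("g1", "a"), ("g1", "b"), ("g2", "a"), ("g2", "c"), ("g3", "c")}
groups = {"s1": {"a", "b"}, "s2": {"c"}}

def exists_rel(g, s):
    # exists-case: g J s iff g has at least one attribute of the group s
    return any((g, m) in I for m in groups[s])

def forall_rel(g, s):
    # forall-case: g J s iff g has every attribute of the group s
    return all((g, m) in I for m in groups[s])

def alpha_rel(g, s, alpha):
    # alpha-case: g J s iff g has at least a fraction alpha of the attributes of s
    hits = sum((g, m) in I for m in groups[s])
    return hits / len(groups[s]) >= alpha
```

With `alpha = 1/len(groups[s])` the α-case coincides with the ∃-case, and with `alpha = 1.0` with the ∀-case.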
Attribute generalization reduces the number of attributes. One may therefore expect a reduction of the number of concepts (i.e., $|\mathfrak{B}(G, S, J)| \leq |\mathfrak{B}(G, M, I)|$). Unfortunately, this is not always the case, as we can see from the example in Figure 9. It is therefore interesting to investigate under which conditions generalizing patterns leads to a “generalized” lattice of smaller size than the initial one (see Section 5). Moreover, finding the connections between the implications, and more generally the association rules, of the generalized context and those of the initial one is also an important problem to be considered.
As an illustration, the contexts $(G, S, J_\exists)$ (see Figure 2) and $(G, S, J_\forall)$ (see Figure 3) are obtained from the context shown in Figure 1 with the same grouping of the attributes of $M$ into product lines. However, we need different names for the same groups, depending on whether they are in $(G, S, J_\exists)$ or in $(G, S, J_\forall)$, since $g J_\exists s$ (which means that $g$ has some attribute of $s$, i.e., an $\exists$-generalization) has a meaning different from $g J_\forall s$ (which means that $g$ has all attributes of $s$, i.e., a $\forall$-generalization).
If data represent customers (transactions) and items (products), the usage of a taxonomy on attributes leads to new useful patterns that could not be seen before generalizing attributes. For example, the $\exists$-case (see Figure 2) helps the user acquire the following knowledge:

The customer at the bottom of the lattice buys at least one item from each product line.

Whenever a customer buys at least one item from one of the product lines, then he/she also buys at least one item from another specific product line.

From the $\forall$-case in Figure 3, one may learn, for example, that two of the customers have distinct behaviors, in the sense that the former buys at least all the items of two product lines while the latter purchases at least all the items of two other product lines.
To illustrate the $\alpha$-case, we put the attributes of $M$ in three groups and set the same threshold $\alpha$ for all groups. This $\alpha$-generalization on the attributes of $M$ is presented in Figure 4. Note that if all groups have two elements, then any $\alpha$-generalization is either an $\exists$-generalization ($\alpha \leq \frac{1}{2}$) or a $\forall$-generalization ($\alpha > \frac{1}{2}$). From the lattice in Figure 4, one can see that any transaction involving at least the required fraction of the items of one group necessarily includes at least the required fraction of the items of another group. Moreover, one product line appears to be the most popular among the groups, since five (out of eight) customers bought at least the required fraction of its items.
Generalization can also be conducted on objects to replace some (or all) of them with generalized objects. A typical situation would be that of two or more customers forming a group (e.g., the same residence location, the same profile). We can then assign to each group all items bought by its members (an $\exists$-generalization), only their common items (a $\forall$-generalization), or just some of the frequent items among the members (similar to an $\alpha$-generalization).
In order to reduce the size of the data to be analyzed, both techniques can be applied: generalizing attributes and then objects, or vice-versa, or simultaneously. This can be seen as pre-processing the data in order to reduce them and then have a more abstract perspective on them. Done simultaneously, i.e., combining generalizations on attributes and on objects, this gives a kind of hypercontext (similar to hypergraphs), since the objects are subsets of $G$ and the attributes are subsets of $M$. Let $A \subseteq G$ be a group of objects and $B \subseteq M$ be a group of attributes related to a context $(G, M, I)$. Then, the relation $A \mathrel{\tilde{I}} B$ can be defined using one or a combination of the following cases:

1. $A \mathrel{\tilde{I}} B$ iff $\exists g \in A$, $\exists m \in B$ such that $gIm$, i.e., some objects from the group $A$ are in relation with some attributes in the group $B$;

2. $A \mathrel{\tilde{I}} B$ iff $\forall g \in A$, $\forall m \in B$, $gIm$, i.e., every object in the group $A$ is in relation with every attribute in the group $B$;

3. $A \mathrel{\tilde{I}} B$ iff $\forall g \in A$, $\exists m \in B$ such that $gIm$, i.e., every object in the group $A$ has at least one attribute from the group $B$;

4. $A \mathrel{\tilde{I}} B$ iff $\exists m \in B$ such that $\forall g \in A$, $gIm$, i.e., there is an attribute in the group $B$ that belongs to all objects of the group $A$;

5. $A \mathrel{\tilde{I}} B$ iff $\forall m \in B$, $\exists g \in A$ such that $gIm$, i.e., every property in the group $B$ is satisfied by at least one object of the group $A$;

6. $A \mathrel{\tilde{I}} B$ iff $\exists g \in A$ such that $\forall m \in B$, $gIm$, i.e., there is an object in the group $A$ that has all the attributes in the group $B$;

7. $A \mathrel{\tilde{I}} B$ iff $|\{g \in A \mid |\{m \in B \mid gIm\}| \geq \beta |B|\}| \geq \alpha |A|$, i.e., at least $\alpha\%$ of the objects in the group $A$ have each at least $\beta\%$ of the attributes in the group $B$;

8. $A \mathrel{\tilde{I}} B$ iff $|\{m \in B \mid |\{g \in A \mid gIm\}| \geq \alpha |A|\}| \geq \beta |B|$, i.e., at least $\beta\%$ of the attributes in the group $B$ belong each to at least $\alpha\%$ of the objects in the group $A$;

9. $A \mathrel{\tilde{I}} B$ iff $\frac{|I \cap (A \times B)|}{|A| \times |B|} \geq \delta$, i.e., the density of the rectangle $A \times B$ is at least equal to $\delta$.

Cases 7 and 8 generalize Case 1 (take $\alpha = \frac{1}{|A|}$ and $\beta = \frac{1}{|B|}$) and Case 2 (take $\alpha = 1$ and $\beta = 1$). Moreover, Case 7 also generalizes Case 3 (take $\alpha = 1$ and $\beta = \frac{1}{|B|}$) and Case 6 (take $\alpha = \frac{1}{|A|}$ and $\beta = 1$). However, Cases 4 and 5 cannot be captured by Case 7, but are captured by Case 8 (take $\alpha = 1$ and $\beta = \frac{1}{|B|}$ to get Case 4, and take $\alpha = \frac{1}{|A|}$ and $\beta = 1$ to get Case 5).
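The density criterion (the last of the cases above) can be sketched as follows (toy incidence relation, illustrative only):

```python
def density(A, B, I):
    # |I ∩ (A × B)| / (|A| · |B|): fraction of filled cells in the rectangle A × B
    filled = sum((g, m) in I for g in A for m in B)
    return filled / (len(A) * len(B))

# Toy incidence relation: 3 of the 4 cells of {1, 2} × {"x", "y"} are filled.
I = {(1, "x"), (1, "y"), (2, "x")}
```

With a threshold δ = 0.7, the group pair ({1, 2}, {x, y}) would be related, since its density is 0.75.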
In most cases, a taxonomy is provided either implicitly or explicitly. Let $\mathcal{O}$ be an ontology on a domain. We denote by $C(\mathcal{O})$ the concepts of $\mathcal{O}$ and by $\mathcal{T}$ a taxonomy induced by the is-a hierarchy of $\mathcal{O}$. Then $\mathcal{T}$ is a quasi-order, since two concepts can be equivalent (but not identical) in the domain. We can assume that $\mathcal{T}$ is a complete lattice by taking the Dedekind-MacNeille completion of its quotient with respect to the quasi-order. Let $(G, M, I)$ be a context such that the attributes in $M$ are represented by some concepts in $\mathcal{T}$. If only some attributes of $M$ are represented in $\mathcal{T}$, then we replace $M$ by the subset of represented attributes. The attributes in $M$ then appear in $\mathcal{T}$ at some level. An $\exists$-generalization is simulated by going one or more levels upward in the taxonomy, and a $\forall$-generalization is obtained by going one or more levels downward in $\mathcal{T}$. How many levels should the user follow to get the knowledge he is expecting?
We consider for example a data mining context $(G, M, I)$, where $G$ is the set of transactions and $M$ the set of items. With an $\exists$-generalization, some items that were not frequent can become frequent. One possibility is to keep the items (attributes in $M$) that are frequent and put the non-frequent ones in groups (according to a certain semantics), so that at least a certain percentage of transactions contains at least one item from each group. This can be done through an interactive program which suggests some groupings to the user for validation and feedback. If no taxonomy is provided, one may be interested in, or forced into, deriving a taxonomy from the data, which will be used afterwards to get generalized patterns. How can this be achieved?
4 Visualizing generalized patterns on line diagrams
Let $(G, M, I)$ be a formal context and $(G, S, J)$ a context obtained from $(G, M, I)$ via a generalization on attributes. The usual action is to directly construct a line diagram of $\mathfrak{B}(G, S, J)$, which contains concepts with generalized attributes (see Figures 2, 3 and 4). However, one may be interested, after getting $(G, S, J)$ and constructing a line diagram for $\mathfrak{B}(G, S, J)$, in refining further on the attributes in $M$ or recovering the lattice $\mathfrak{B}(G, M, I)$ constructed from $(G, M, I)$.
When storage space is not a constraint, the attributes in $M$ and the generalized attributes in $S$ can be kept altogether. This is done using an apposition of $(G, M, I)$ and $(G, S, J)$ to get $(G, M \cup S, I \cup J)$. A nested line diagram can be used to display the resulting lattice, with $S$ at the first level and $M$ at the second level; i.e., we construct a line diagram for $\mathfrak{B}(G, S, J)$ with nodes large enough to contain copies of the line diagram of $\mathfrak{B}(G, M, I)$. Figure 5 displays the nested line diagram of the context in Figure 3 with the generalized attributes at the first level and the attributes in $M$ at the inner one.
The generalized patterns can also be visualized by conducting a projection (i.e., a restricted view) on the generalized attributes and keeping track of the effects of the projection; i.e., we display the projection of the concept lattice on $S$ by marking the equivalence classes it induces. Note that two concepts $(A_1, B_1)$ and $(A_2, B_2)$ are equivalent with respect to the projection on $S$ iff $B_1 \cap S = B_2 \cap S$ (i.e., their intents have the same restriction on $S$). This is illustrated by Figure 6.
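The equivalence underlying this projection can be sketched as follows (illustrative intents and generalized attribute names, not from the paper):

```python
S = {"s1", "s2"}          # the generalized attributes (illustrative names)

def project(intent):
    # restriction of a concept intent to the generalized attributes
    return frozenset(intent & S)

# Two intents are equivalent iff they agree on S; grouping them yields
# the equivalence classes marked on the projected line diagram.
intents = [{"a", "s1"}, {"b", "s1"}, {"s1", "s2"}]
classes = {}
for B in intents:
    classes.setdefault(project(B), []).append(B)
```

Here the first two intents fall into the same class, since both restrict to {s1} on the generalized attributes.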
4.2 Are generalized attributes really generalizations?
Let us have a close look at the concept lattice $\mathfrak{B}(G, S, J)$. Recall that a concept $(A_1, B_1)$ is more general than a concept $(A_2, B_2)$ if it contains more objects; that is, $A_2 \subseteq A_1$, or $B_1 \subseteq B_2$, or $(A_2, B_2) \leq (A_1, B_1)$. We also say that $(A_1, B_1)$ is a generalization of $(A_2, B_2)$, and $(A_2, B_2)$ a specialization of $(A_1, B_1)$. For two attributes $m$ and $n$ in $M$, we should normally assert that $m$ is a generalization of $n$, or $n$ a specialization of $m$, whenever $\mu m$ is a generalization of $\mu n$. Now, let us have a close look at the three cases of attribute generalization.
In the $\exists$-case (see the left hand-side of Figure 7), an object $g$ is in relation with a generalized attribute $s$ iff there is $m \in s$ such that $gIm$. Thus $s^{J} = \bigcup_{m \in s} m^{I}$ and $s^{J} \supseteq m^{I}$ for each $m \in s$. Therefore, every $\exists$-generalized attribute $s$ satisfies $\mu s \geq \mu m$ for all $m \in s$, and deserves the name of a generalization of the attributes $m$, $m \in s$.
In the $\forall$-case (see the right hand-side of Figure 7), an object $g$ is in relation with a generalized attribute $s$ iff $gIm$ for all $m \in s$. Thus $s^{J} = \bigcap_{m \in s} m^{I}$ and $s^{J} \subseteq m^{I}$ for each $m \in s$. Therefore, every $\forall$-generalized attribute $s$ satisfies $\mu s \leq \mu m$ for all $m \in s$, and should normally be called a specialization of the attributes $m$, $m \in s$.
In the $\alpha$-case, an object $g$ is in relation with a generalized attribute $s$ iff $\frac{|\{m \in s \mid gIm\}|}{|s|} \geq \alpha(s)$. The following situations can happen:

There is an $\alpha$-generalized attribute $s$ with at least one attribute $m \in s$ such that $m^{I} \not\subseteq s^{J}$; hence $\mu s \not\geq \mu m$; i.e., $s$ is not a generalization of $m$, and by then not a generalization of the attributes of $s$.

There is an $\alpha$-generalized attribute $s$ with at least one attribute $m \in s$ such that $s^{J} \not\subseteq m^{I}$; hence $\mu s \not\leq \mu m$; i.e., $s$ is not a specialization of $m$, and by then not a specialization of the attributes of $s$.

Therefore, there are $\alpha$-generalized attributes that are neither a generalization nor a specialization of their member attributes. In Figure 8, there is an element $m$ that belongs to a group $s$, but $s$ is neither a specialization nor a generalization of $m$, since $m^{I} \not\subseteq s^{J}$ and $s^{J} \not\subseteq m^{I}$. Thus, we had better call the $\alpha$-case an attribute approximation, the $\forall$-case a specialization, and only the $\exists$-case a generalization.
5 Controlling the size of generalized concepts
A generalized concept is a concept whose intent (resp. extent) contains generalized attributes (resp. objects). Let us first consider the example in Figure 9, in which a $\forall$-generalization leads to a generalized concept set larger than the initial set of concepts. Two attribute concepts are put together into one group. Although the two grouped attributes are discarded, their nodes remain, since each of them is obtained as an infimum of remaining concepts. Then we get the configuration in Figure 9 (right), which has one concept more than the initial concept lattice shown on the left of the same figure.
In the following, we analyze the impact of $\exists$- and $\forall$-generalizations of attributes on the size of the resulting set of generalized concepts.
5.1 An $\exists$-generalization on attributes
Let $(G, M, I)$ be a context and $(G, S, J)$ a context obtained from an $\exists$-generalization on attributes, i.e., the elements of $S$ are groups of attributes from $M$. Then, an object $g$ is in relation with a generalized attribute $s$ if there is an attribute $m$ in $s$ such that $gIm$. To compare the size of the corresponding concept lattices, we can define some mappings. We assume that $S$ forms a partition of $M$. Then for each $m \in M$ there is a unique generalized attribute $s(m) \in S$ such that $m \in s(m)$, and $gIm$ implies $gJs(m)$, for every $g \in G$. To distinguish between derivations in $(G, M, I)$ and in $(G, S, J)$, we will replace $'$ by the name of the corresponding relation; for example, $m^{I} = \{g \in G \mid gIm\}$ and $s^{J} = \{g \in G \mid gJs\}$. Two canonical maps can then be defined: one sending each attribute $m$ to its group $s(m)$, and one sending each extent of $(G, M, I)$ to the closure, in $(G, S, J)$, of the set of groups it is related to. These maps induce two order-preserving maps between $\mathfrak{B}(G, M, I)$ and $\mathfrak{B}(G, S, J)$. If one of the induced maps is surjective, then the generalized context has at most as many concepts as the initial one. As we have seen in Figure 9, these maps can both fail to be surjective. When do the two lattices have the same size? Does equality of sizes imply surjectivity?
Now we present some special cases where the number of concepts does not increase after a generalization.
- Case 1
Every group $s \in S$ has a greatest element $m_s$. Then the context $(G, S, J)$ is a projection of $(G, M, I)$ on the set $\{m_s \mid s \in S\}$ of greatest elements, since $s^{J} = m_s^{I}$. Thus $\mathrm{Ext}(G, S, J) \subseteq \mathrm{Ext}(G, M, I)$, and $\mathfrak{B}(G, S, J)$ is a sub-order of $\mathfrak{B}(G, M, I)$. Hence $|\mathfrak{B}(G, S, J)| \leq |\mathfrak{B}(G, M, I)|$.
- Case 2
The union $\bigcup_{m \in s} m^{I}$ is an extent of $(G, M, I)$, for any $s \in S$. Then $s^{J} = \bigcup_{m \in s} m^{I}$ is already an extent, and no grouping produces a new concept. Hence the number of concepts cannot increase.
The following result (Theorem 5.1) gives an important class of lattices for which the $\exists$-generalization does not increase the size of the lattice. We recall that a lattice is distributive if for $x$, $y$ and $z$ in $L$, we have $x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$. A context is object reduced if no row can be obtained as the intersection of some other rows.
The $\exists$-generalizations on distributive concept lattices whose contexts are object reduced decrease the size of the concept lattice.
Let $(G, M, I)$ be an object reduced context such that $\mathfrak{B}(G, M, I)$ is a distributive lattice, and let $(G, S, J)$ be a context obtained by an $\exists$-generalization on the attributes in $M$. Let $s$ be a generalized attribute, i.e., a group of attributes of $M$. It is enough to prove that $s^{J}$ is an extent of $(G, M, I)$. By definition, we have $s^{J} = \bigcup_{m \in s} m^{I}$.

Let $g \in (s^{J})''$. We have $\gamma g \leq \bigvee_{m \in s} \mu m$. Since the context is object reduced, $\gamma g$ is $\vee$-irreducible, and in a distributive lattice every $\vee$-irreducible element is $\vee$-prime. Thus $\gamma g \leq \mu m$ for some $m \in s$, and $g \in m^{I} \subseteq s^{J}$.

Therefore $(s^{J})'' \subseteq s^{J}$, and $(s^{J})'' = s^{J}$. This proves that $s^{J}$ is an extent of $(G, M, I)$, and $|\mathfrak{B}(G, S, J)| \leq |\mathfrak{B}(G, M, I)|$.
The cases discussed above are not the only ones where the size does not increase. For example, if we conduct the groupings of attributes one after another and no intermediate step increases the size of the lattice, or if the overall number of new concepts is smaller than the number of deleted concepts in the whole process, then the lattice of generalized concepts is of smaller size (see the empirical study in Section 6).
5.2 A $\forall$-generalization on attributes
Let $(G, S, J)$ be a context obtained from $(G, M, I)$ by a $\forall$-generalization. In the apposition context $(G, M \cup S, I \cup J)$, each generalized attribute concept is reducible. This means that $s^{J} = \bigcap_{m \in s} m^{I}$, which is an intersection of extents and hence an extent of $(G, M, I)$. Therefore, $\mathrm{Ext}(G, S, J) \subseteq \mathrm{Ext}(G, M, I)$.
The $\forall$-generalizations on attributes reduce the size of the concept lattice.
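The argument above can be checked on a small example (illustrative context, not from the paper): the extent of a ∀-generalized attribute is the intersection of its members' extents, hence already a closed set (an extent) of the original context.

```python
# Toy context (G, M, I).
G = [1, 2, 3, 4]
M = ["a", "b", "c"]
I = {(1, "a"), (1, "b"), (2, "a"), (2, "b"), (2, "c"), (3, "b"), (4, "c")}

def ext(B):
    # B' in (G, M, I): objects having all attributes of B
    return {g for g in G if all((g, m) in I for m in B)}

def intn(A):
    # A' in (G, M, I): attributes shared by all objects of A
    return {m for m in M if all((g, m) in I for g in A)}

s = {"a", "b"}                                   # a ∀-generalized attribute
# s^J = intersection of the member extents; it equals ext(s) and is closed.
s_extent = set.intersection(*(ext({m}) for m in s))
```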
6 Experimental results

We conducted our experiments on 100 synthetic contexts of various sizes. The number of objects ranges from 50 to 10,000 instances, and the number of attributes from 25 to 150 elements. The number of concepts of the generated contexts ranges from 70 thousand to 850 million. Obviously, producing and displaying such a huge set of concepts is extremely time-consuming, if not impossible.
In our experiments, the fanout, i.e., the number of simple attributes per generalized attribute, varies from 2 to 20 and was simulated by randomly grouping the attributes two by two, three by three, and so on. For each fanout value and for each context, the new generalized context is computed and the number of generalized concepts is calculated using Concept Explorer (http://conexp.sourceforge.net). We summarize the results of the experimentation in the figures below. In Figure 11, we can see that the generalization process not only reduces the context size but can also considerably reduce the size of the corresponding lattice. Moreover, the number of generalized concepts is almost inversely proportional to the fanout. However, one can see from Figure 11-(b) and (d) that when the fanout is equal to 2, the number of generalized concepts can be greater than the number of original concepts. Figure 11 summarizes the lattice reduction as a ratio between the number of original concepts and the number of generalized ones. We can notice in Figure 11-(b) that the reduction is neither linear nor proportional to the fanout, but can be very significant. Indeed, with an attribute grouping of size 10, a ratio of 37,722 is obtained. This means that the size of the original concept set is almost forty thousand times the number of generalized concepts; hence there is a significant reduction in the size of the generalized lattice.
7 Related work
A number of studies [3, 7, 8, 9, 10, 15, 17, 30, 32] investigate possible collaborations between formal concept analysis and ontology engineering (e.g., ontology merging and mapping) to let the two formalisms benefit from each other's strengths. Starting from the fact that both domain ontologies and FCA aim at modeling concepts, some of these studies show how FCA can be exploited to support ontology engineering (e.g., ontology construction and exploration), and conversely how ontologies can be fruitfully used in FCA applications (e.g., extracting new knowledge). In [30], the authors propose a bottom-up approach called FCA-Merge
for merging ontologies using a set of documents as input. The method relies on techniques from natural language processing and FCA to produce a lattice of concepts. The approach has three steps: (i) the linguistic analysis of the input, which returns two formal contexts; (ii) the merging of the two contexts and the computation of the pruned concept lattice; and (iii) the semi-automatic ontology creation phase, which relies partially on the user's interaction. The two formal contexts produced at Step (i) are of the form K_i := (D_i, M_i, I_i) for i = 1, 2, where D_i is a set of documents, M_i is the set of concepts of ontology O_i found in D_i, and I_i is a binary relation between D_i and M_i. Starting from a set of domain-specific texts, Haav [15] proposes a semi-automatic method for ontology extraction and design based on FCA and a Horn clause model. Formica [10] studies the role of FCA in reusing independently developed domain ontologies. To that end, an ontology-based method for evaluating similarity between FCA concepts is defined to perform Semantic Web activities such as ontology merging and ontology mapping. In [32], an approach to the construction of a domain ontology using FCA is proposed. The resulting ontology is represented as a concept lattice and expressed via the Semantic Web Rule Language (SWRL) to facilitate ontology sharing and reasoning.
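The shape of the document-based formal contexts used in this merging approach is easy to picture with a toy example. The sketch below is purely illustrative: the documents, the ontologies, and the cue-word matching rule (a crude stand-in for the real linguistic analysis) are all hypothetical:

```python
docs = {
    "d1": "the bank offers loans and savings accounts",
    "d2": "a loan application requires a credit check",
}

# Each ontology maps its concepts to lexical cues (hypothetical).
onto1 = {"Bank": ["bank"], "Loan": ["loan"]}
onto2 = {"Credit": ["credit"], "Account": ["account"]}

def build_context(docs, onto):
    """I_i = {(d, c) : concept c of ontology i occurs in document d}."""
    return {(d, c)
            for d, text in docs.items()
            for c, cues in onto.items()
            if any(cue in text for cue in cues)}

k1 = build_context(docs, onto1)
k2 = build_context(docs, onto2)

# The merge step starts from the apposition of the two contexts:
# the same documents, with attributes drawn from both ontologies.
merged = k1 | k2
```

The concept lattice of `merged` is then what the semi-automatic creation phase would prune and present to the user.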
Ontology mapping [19] is seen as one of the key techniques for data integration (and mediation) between databases with different ontologies. In [9], a method for ontology mapping, called FCA-Mapping, is defined based on FCA; it allows the identification of equality and subclass mapping relations. In [8], FCA is also used to propose an ontology mediation method for ontology merging. The resulting ontology includes new concepts not originally found in the input ontologies but excludes some redundant or irrelevant ones.
Since ontologies describe concepts and relations between them, Huchard et al. [16] have addressed the problem of mining relational data sets in the framework of FCA and proposed an extension of FCA called relational concept analysis. Relational data sets are collections in which objects are described both by their own attributes/properties and by their links with other objects.
8 Conclusion
In this paper we have studied the problem of using a taxonomy on objects and/or attributes in the framework of formal concept analysis under three main cases of generalization (∃, ∀, and α) and have shown that (i) the set of generalized concepts is generally smaller than the set of patterns extracted from the original set of attributes (before generalization), and (ii) the generalized concept lattice not only embeds new patterns on generalized attributes but also reveals particular features of objects and may unveil a new taxonomy on objects. A careful analysis of the three cases of attribute generalization led to the following conclusion: the α-case is an attribute approximation, the ∀-case is an attribute specialization, while only the ∃-case is actually an attribute generalization. Different scenarios of a simultaneous generalization on objects and attributes are also discussed based on the three cases of generalization.
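The distinction between the three cases can be stated operationally: given the simple attributes an object holds and a group of simple attributes forming a generalized one, each case is a different membership test (exists: at least one member; forall: all members; alpha: at least a fraction α of the members). A minimal sketch, with a hypothetical function name and threshold:

```python
def holds_generalized(obj_attrs, group, mode, alpha=0.5):
    """Does an object hold the generalized attribute covering `group`?"""
    hits = len(set(obj_attrs) & set(group))
    if mode == "exists":   # at least one member attribute
        return hits >= 1
    if mode == "forall":   # every member attribute
        return hits == len(group)
    if mode == "alpha":    # at least a fraction alpha of the members
        return hits >= alpha * len(group)
    raise ValueError(mode)

group = {"juice", "soda", "water"}
obj = {"juice", "soda"}    # holds 2 of the 3 member attributes

print(holds_generalized(obj, group, "exists"))      # True
print(holds_generalized(obj, group, "forall"))      # False
print(holds_generalized(obj, group, "alpha", 0.5))  # True: 2 >= 0.5 * 3
```

The same object can thus hold a generalized attribute under one semantics and fail it under another, which is what drives the different lattice behaviors of the three cases.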
Since we focused our analysis on the integration of taxonomies into FCA to produce generalized concepts, our further research concerns the theoretical study of the mapping between a rule set on original attributes and a rule set on generalized attributes, as well as the exploitation of other components of a domain ontology, such as general links (other than is-a hierarchies) between generic concepts or their instances.
References
-  Mehdi Adda, Petko Valtchev, Rokia Missaoui, and Chabane Djeraba. Toward recommendation based on ontology-powered web-usage mining. IEEE Internet Computing, 11(4):45–52, 2007.
-  Rakesh Agrawal and Ramakrishnan Srikant. Fast algorithms for mining association rules in large databases. In VLDB, pages 487–499, 1994.
-  Rokia Bendaoud, Amedeo Napoli, and Yannick Toussaint. Formal concept analysis: A unified framework for building and refining ontologies. In EKAW, pages 156–171, 2008.
-  C. Berge. Graphs and Hypergraphs. Elsevier, Amsterdam, The Netherlands, 1976.
-  T. Berners-Lee, J. Hendler, and O. Lassila. The semantic web. Scientific American, May 2001.
-  Elisa Bertino, Barbara Catania, and Anna Maddalena. Towards a language for pattern manipulation and querying. In PaRMa, 2004.
-  Philipp Cimiano, Andreas Hotho, Gerd Stumme, and Julien Tane. Conceptual knowledge processing with formal concept analysis and ontologies. In ICFCA, pages 189–207, 2004.
-  Olivier Curé and Robert Jeansoulin. An FCA-based solution for ontology mediation. In ONISW '08: Proceedings of the 2nd International Workshop on Ontologies and Information Systems for the Semantic Web, pages 39–46, New York, NY, USA, 2008. ACM.
-  Liya Fan and Tianyuan Xiao. An automatic method for ontology mapping. In Knowledge-Based Intelligent Information and Engineering Systems, pages 661–669, 2007.
-  Anna Formica. Ontology-based concept similarity in formal concept analysis. Inf. Sci., 176(18):2624–2641, 2006.
-  Bernhard Ganter. Algorithmen zur formalen Begriffsanalyse. In Bernhard Ganter, Rudolf Wille, and Karl Erich Wolff, editors, Beiträge zur Begriffsanalyse, pages 196–212. Wissenschaftsverlag, Mannheim, 1987.
-  Bernhard Ganter. Attribute exploration with background knowledge. Theoretical Computer Science, 217:215–233, 1999.
-  Bernhard Ganter and Rudolf Wille. Implikationen und Abhängigkeiten zwischen Merkmalen. Technical Report 1017, TH Darmstadt, 1986.
-  Bernhard Ganter and Rudolf Wille. Formal Concept Analysis: Mathematical Foundations. Springer-Verlag New York, Inc., 1999. Translator-C. Franzke.
-  Hele-Mai Haav. A semi-automatic method to ontology design by using FCA. In CLA, 2004.
-  Marianne Huchard, Mohamed Rouane Hacene, Cyril Roume, and Petko Valtchev. Relational concept discovery in structured datasets. Ann. Math. Artif. Intell., 49(1-4):39–76, 2007.
-  Suk-Hyung Hwang, Hong-Gee Kim, and Hae-Sool Yang. An FCA-based ontology construction for the design of class hierarchy. In ICCSA (3), pages 827–835, 2005.
-  J. L. Guigues and V. Duquenne. Familles minimales d'implications informatives résultant d'un tableau de données binaires. Mathématiques et Sciences Humaines, (95), 1986.
-  Yannis Kalfoglou and Marco Schorlemmer. Ontology mapping: The state of the art. In Y. Kalfoglou, M. Schorlemmer, A. Sheth, S. Staab, and M. Uschold, editors, Semantic Interoperability and Integration, number 04391 in Dagstuhl Seminar Proceedings. Internationales Begegnungs- und Forschungszentrum fuer Informatik (IBFI), Schloss Dagstuhl, Germany, 2005. http://drops.dagstuhl.de/opus/volltexte/2005/40 [date of citation: 2005-01-01].
-  Rokia Missaoui, Léonard Kwuida, Mohamed Quafafou, and Jean Vaillancourt. Algebraic operators for querying pattern bases. CoRR, abs/0902.4042, 2009.
-  Natalya Fridman Noy. Semantic integration: A survey of ontology-based approaches. SIGMOD Record, 33(4):65–70, 2004.
-  Nicolas Pasquier, Yves Bastide, Rafik Taouil, and Lotfi Lakhal. Discovering frequent closed itemsets for association rules. In Catriel Beeri and Peter Buneman, editors, ICDT, volume 1540 of Lecture Notes in Computer Science, pages 398–416. Springer, 1999.
-  Nicolas Pasquier, Yves Bastide, Rafik Taouil, and Lotfi Lakhal. Efficient mining of association rules using closed itemset lattices. Inf. Syst., 24(1):25–46, 1999.
-  Susanne Prediger. Logical scaling in formal concept analysis. In Dickson Lukose, Harry S. Delugach, Mary Keeler, Leroy Searle, and John F. Sowa, editors, ICCS, volume 1257 of Lecture Notes in Computer Science, pages 332–341. Springer, 1997.
-  Susanne Prediger and Gerd Stumme. Theory-driven logical scaling. conceptual information systems meet description logics. In Proc. 6th Intl. Workshop Knowledge Representation Meets Databases, Heidelberg. CEUR Workshop Proc, pages 46–49, 1999.
-  P. Scheich, M. Skorsky, F. Vogt, C. Wachter, and R. Wille. Conceptual data systems. In O. Opitz, B. Lausen, and R. Klar, editors, Information and Classification, pages 72–84. Springer, Berlin-Heidelberg, 1993.
-  R. Srikant and R. Agrawal. Mining generalized association rules. In Proc. Of the 21st VLDB Conference, Zurich, Switzerland, pages 407–419, 1995.
-  R. Srikant and R. Agrawal. Mining sequential patterns: Generalizations and performance improvements. Proc. 5th Int. Conf. Extending Database Technology, EDBT, Avignon, France, 1057:3–17, 1996.
-  Gerd Stumme. Conceptual on-line analytical processing. In Information Organization and Databases, pages 191–203. Kluwer, 2002.
-  Gerd Stumme and Alexander Maedche. FCA-MERGE: Bottom-up merging of ontologies. In IJCAI, pages 225–234, 2001.
-  Gerd Stumme, Rafik Taouil, Yves Bastide, Nicolas Pasquier, and Lotfi Lakhal. Computing iceberg concept lattices with TITANIC. Data Knowl. Eng., 42(2):189–222, 2002.
-  Jian Wang and Keqing He. Towards representing fca-based ontologies in semantic web rule language. In CIT ’06: Proceedings of the Sixth IEEE International Conference on Computer and Information Technology, page 41, Washington, DC, USA, 2006. IEEE Computer Society.
-  Jianyong Wang, Jiawei Han, and Jian Pei. CLOSET+: Searching for the best strategies for mining frequent closed itemsets. In KDD '03: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 236–245, New York, NY, USA, 2003. ACM.
-  R. Wille. Restructuring lattice theory: An approach based on hierarchies of concepts. In I. Rival, editor, Ordered Sets, pages 445–470. Reidel, Dordrecht, 1982.
-  Rudolf Wille. Why can concept lattices support knowledge discovery in databases? J. Exp. Theor. Artif. Intell., 14(2-3):81–92, 2002.
-  Mohammed J. Zaki and Ching-Jui Hsiao. Efficient algorithms for mining closed itemsets and their lattice structure. IEEE Transactions on Knowledge and Data Engineering, 17(4):462–478, 2005.