# Preliminary results on Ontology-based Open Data Publishing

Despite the current interest in Open Data publishing, a formal and comprehensive methodology supporting an organization in deciding which data to publish and carrying out precise procedures for publishing high-quality data, is still missing. In this paper we argue that the Ontology-based Data Management paradigm can provide a formal basis for a principled approach to publish high quality, semantically annotated Open Data. We describe two main approaches to using an ontology for this endeavor, and then we present some technical results on one of the approaches, called bottom-up, where the specification of the data to be published is given in terms of the sources, and specific techniques allow deriving suitable annotations for interpreting the published data under the light of the ontology.


## 1 Introduction

In many aspects of our society there is growing awareness of, and consensus on, the need for data-driven approaches that are resilient, transparent and fully accountable. But to achieve a data-driven society, the data needed for public goods must be readily available. Thus, it is not surprising that in recent years both public and private organizations have been faced with the issue of publishing Open Data, in particular with the goal of providing data consumers with suitable information to capture the semantics of the data they publish. Significant efforts have been devoted to defining guidelines concerning the management and publication of Open Data. Notably, the W3C (World Wide Web Consortium: https://www.w3.org/) has formed a working group whose objective is the release of a first draft of Open Data standards (Data on the Web Best Practices: https://www.w3.org/TR/dwbp/). The document focuses on areas such as metadata, data formats, data licenses, and data quality, which are treated in very general terms, with no reference to any specific technical methodology. More generally, although there are several works on platforms and architectures for publishing Open Data, there is still no formal and comprehensive methodology supporting an organization in (i) deciding which data to publish, and (ii) carrying out precise procedures for publishing and documenting high-quality data. One reason for this lack of formal methods is that the problem of Open Data publishing is strictly related to the problem of managing the data within an organization. Indeed, a necessary prerequisite for an organization to publish relevant and meaningful data is the ability to manage, maintain and document its own information system. The recent paradigm of Ontology-based Data Management (OBDM) [16] (used and tested in practice in recent years, see, e.g., [3]) is an attempt to provide the principles and the techniques for addressing this challenge.
An OBDM system is constituted by an ontology, the data sources forming the information system, and the mapping between the ontology and the sources. The ontology is a formal representation of the domain underlying the information system, and the mapping is a precise specification of the relationship between the data at the sources and the concepts in the ontology.

In this paper we argue that the OBDM paradigm can provide a formal basis for a principled approach to publishing high-quality, semantically annotated Open Data. The most basic task in Open Data publishing is the extraction of the correct content for the dataset(s) to be published, where by “content” we mean both the extensional information (i.e., facts about the domain of interest) conveyed by the dataset and the intensional knowledge relevant to document such facts (e.g., concepts that intensionally describe the facts), and where “correct” means that the aspect of the domain captured by the dataset is coherent with a requirement formally expressed by the organization.

Current practices for publishing Open Data focus essentially on providing extensional information (often in very simple forms, such as CSV files), and they carry out the task of documenting data mostly by using metadata expressed in natural language, or in terms of record structures. As a consequence, the semantics of datasets is not formally expressed in a machine-readable form. Conversely, OBDM opens up the possibility of a new way of publishing data, based on the idea of annotating data items with the ontology elements that describe them in terms of the concepts of the domain of the organization. When an OBDM system is available in an organization, an obvious way to proceed with Open Data publication is as follows: (i) express the dataset to be published in terms of a SPARQL query over the ontology, (ii) compute the certain answers to the query, and (iii) publish the result of the certain answer computation, using the query expression and the ontology as a basis for annotating the dataset with suitable metadata expressing its semantics. We call this method top-down. Using this method, the ontology is at the heart of the task: it is used for expressing the content of the dataset to be published (in terms of a query), and it is used, together with the query, for annotating the published data.

Unfortunately, in many organizations (for example, in Public Administrations) it may be the case that people are not yet ready to manage their information systems through the OBDM paradigm. In these cases, the bottom-up approach could be more appropriate. For example, in the Italian Public Administration system, it is very unlikely that local administration staff are able to express their queries over the ontology using SPARQL. Typically, the ontology and the mapping have been designed by third parties, with little or no involvement of the IT people responsible for the local administration information system. In other words, these people probably cannot follow the top-down approach, and they are more comfortable expressing the specification of the dataset to be published directly in terms of the source structures (i.e., the relational tables in their databases), or, more generally, in terms of a view over the sources. But how can we automatically publish both the content and the semantics of the dataset if its specification is given in terms of the data sources? We argue that we can achieve this goal by following what we call the bottom-up approach: the organization expresses its publishing requirement as a query over the sources, and, by using the ontology and the mapping, a suitable algorithm computes the corresponding query over the ontology. With such a query at hand, we have reduced the problem in such a way that the top-down approach can now be followed, and the required data can be published according to the method described above. So, at the heart of the bottom-up approach there is a conceptual issue to address:

“Given a query over the sources, which is the query over the ontology that best characterizes it (independently of the current source database)?”

Note that the answer to this question is relevant also for other tasks related to the management of the information system, e.g., the task of explaining the semantics of the various data sources within the organization. The question implicitly refers to a sort of reverse engineering problem, which is a novel aspect in the investigation of both OBDM and data integration. Indeed, most (if not all) of the literature about managing data sources through an ontology (see, e.g., [18, 5]), or, more generally, about data integration [15], assumes that the user query is expressed over the global schema, and the goal is to find a rewriting (i.e., a query over the source schema) that captures the original query in the best way, independently of the current source database. Here, the problem is reversed: we start with a source query and aim at deriving a corresponding query over the ontology, called a source-to-target rewriting.

In this paper we study the above described bottom-up approach, and provide the following contributions.

• We introduce the concept of source-to-target rewriting (see Section 3), the main technical notion underlying the bottom-up approach, and we describe two computation problems related to it, namely the recognition problem, and the finding problem. The former aims at checking whether a query over the ontology is a source-to-target rewriting of a given query over the sources, taking into account the mapping between the sources and the ontology. The latter aims at computing a suitable source-to-target rewriting of a given source query, with respect to the mapping.

• We discuss two different semantics for source-to-target rewritings, one based on the logical models of the OBDM specification, and one based on certain answers. The former is somehow the natural choice, given the first-order semantics behind OBDM. The latter is a significant alternative, that may better capture the intuition of a user who is accustomed to think of query semantics in terms of certain answers.

• We show that, although the ideal notion is the one of “exact” source-to-target rewriting, it is important to resort to approximations to exact rewriting when exactness cannot be achieved. For this reason, we introduce the notion of sound and complete source-to-target rewritings.

• For the case of complete source-to-target rewritings, we present algorithms both for the recognition (Section 4) and for the finding (Section 5) problem, in particular for the setting where the ontology is expressed in DL-Lite_A, and the queries involved in the specification are conjunctive queries.

## 2 Preliminaries

We assume familiarity with classical databases [1], Description Logics [4], and the OBDM paradigm. In this section, we (i) review the most basic notions on non-ground instances and their correspondence with conjunctive queries; (ii) briefly discuss the chase of a possibly non-ground instance; (iii) fix the notation we use in the following for the OBDM paradigm.

For a possibly non-ground instance D, we assume that each value in dom(D), i.e., the set of values occurring in D, comes from the union of two fixed disjoint infinite sets: the set Const of all constants, and the set Null of all labeled nulls. We also let Dom = Const ∪ Null. In particular, each labeled null in a non-ground instance is treated as an unknown value (and hence as incomplete information), rather than as a non-existent value [20]. Thus, a non-ground instance represents a number of ground instances, obtained by assigning constants to the labeled nulls. More precisely, let D be a non-ground instance, and let v be a mapping v : Null → Const. Then v is called a valuation of D, and we indicate with v(D) the ground instance obtained from D by replacing everywhere each labeled null n with v(n). We also extend this to tuples: given a tuple t = (t_1, …, t_n) of constants and labeled nulls, with v(t) we indicate the tuple (t'_1, …, t'_n), where t'_i = t_i if t_i is a constant, and t'_i = v(t_i) otherwise (i.e., t_i is a labeled null). Given an instance D, it is possible to construct in linear time a boolean CQ q_D that fully captures it, and vice versa. Given a boolean CQ q and a tuple x of its variables, we let q[x] denote the transformation of q obtained by removing the existential quantification of the variables in x. Moreover, given a non-boolean CQ q (with x as distinguished variables), we associate to it the instance D_q obtained by considering the variables in x as if they were existentially quantified. For ease of presentation, we extend CQs to allow also the queries q_⊤ (always true) and q_⊥ (always false), with their usual meaning. We also denote with head(q) the tuple composed by the terms in the head of q.
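To make the correspondence between (possibly non-ground) instances and boolean CQs concrete, here is a small self-contained Python sketch. The encoding of instances as sets of atoms, the `_`-prefix convention for labeled nulls, and all function names are our own illustrative choices, not notation from the paper:

```python
from itertools import product

# An instance is a set of atoms (predicate, tuple-of-terms).
# Terms starting with "_" play the role of labeled nulls; all others are constants.
def is_null(t):
    return isinstance(t, str) and t.startswith("_")

def apply_valuation(instance, v):
    """v(D): replace each labeled null n occurring in D with the constant v[n]."""
    return {(p, tuple(v[t] if is_null(t) else t for t in args))
            for (p, args) in instance}

def boolean_cq_holds(q_instance, data):
    """Evaluate the boolean CQ associated with q_instance (nulls read as
    existentially quantified variables) over the ground instance data."""
    nulls = sorted({t for (_, args) in q_instance for t in args if is_null(t)})
    consts = sorted({t for (_, args) in data for t in args})
    for choice in product(consts, repeat=len(nulls)):
        v = dict(zip(nulls, choice))
        if apply_valuation(q_instance, v) <= data:
            return True
    return False
```

For instance, the non-ground instance {s1(a, n)} under the valuation n ↦ b yields the ground instance {s1(a, b)}, and the boolean CQ ∃y, z. r(y, z) ∧ r(z, z) holds in {r(a, b), r(b, b)}.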

Given a source schema S, a target schema T, a set M of st-tgds (i.e., assertions of the form φ_S(x) → ∃y. ψ_T(x, y), where φ_S(x) is a CQ over S and ψ_T(x, y) is a CQ over T), and a set E of egds (i.e., assertions of the form φ_T(x) → x_i = x_j, where φ_T(x) is a CQ over T and x_i, x_j are among the variables in x), the chase procedure of a possibly non-ground source instance D consists in: (i) the chase of D w.r.t. M, where, for every st-tgd φ_S(x) → ∃y. ψ_T(x, y) in M and for every tuple t such that φ_S(t) holds in D, new facts are introduced in the instance of the target schema so that ψ_T(t, t') holds, where t' consists in a fresh tuple of distinct labeled nulls coming from an infinite set disjoint from Null; (ii) the chase of the resulting instance w.r.t. E, where, for every egd φ_T(x) → x_i = x_j and for every tuple t such that φ_T(t) holds and t_i ≠ t_j, we equate the two terms. Equating t_i with t_j means choosing one of the two, so that the other is replaced everywhere by the one chosen. In particular, if one is a labeled null and the other is a constant, then the chase chooses the constant; if one is a labeled null coming from Null and the other is a fresh labeled null, it always chooses the one coming from Null; if both are (distinct) constants, then the chase fails. Moreover, with E(t) we denote the tuple obtained from t by applying the equalities enforced by the chase w.r.t. the set E of egds on variables coming from D. This can be done by keeping track of the substitutions applied by the chase. For example, if the chase equates the variable x_1 with the variable x_2, then equates x_2 with the variable x_3, and then x_3 with the constant c, given the tuple t = (x_1, x_4), E(t) indicates the tuple (c, x_4). Note that we can compute the certain answers of a boolean union of CQs (UCQ) Q with at most one inequality per disjunct by splitting Q into a boolean UCQ Q_≠ with exactly one inequality per disjunct and a boolean UCQ Q_= with no inequalities. The key idea is that the negation of Q_≠ corresponds to a set of egds; hence, the certain answers of Q can be computed by applying the chase w.r.t. these egds to the instance produced by the chase of D w.r.t. M: if the chase fails, then the answer is true; otherwise, if the instance produced satisfies one of the conjunctive queries in Q_=, then the answer is true, else the answer is false. We refer to [10] for more details.
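Step (i) of the chase described above (applying the st-tgds, with fresh labeled nulls for the existentially quantified variables) can be sketched in Python as follows; for brevity the sketch omits step (ii), the egd-driven equating of terms. The `?`-prefix convention for variables and all function names are our own:

```python
from itertools import count

_fresh = count()

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def homomorphisms(body, data):
    """All substitutions for the variables of body that map every atom of body
    into the instance data (a naive backtracking search)."""
    results = []
    def extend(atoms, sub):
        if not atoms:
            results.append(dict(sub))
            return
        p, args = atoms[0]
        for (p2, args2) in data:
            if p2 != p or len(args2) != len(args):
                continue
            s, ok = dict(sub), True
            for a, b in zip(args, args2):
                if is_var(a):
                    if s.setdefault(a, b) != b:
                        ok = False
                        break
                elif a != b:
                    ok = False
                    break
            if ok:
                extend(atoms[1:], s)
    extend(list(body), {})
    return results

def chase_st_tgds(source, tgds):
    """For every st-tgd (body, head) and every match of body in source, add the
    head atoms, instantiating each existential variable with a fresh labeled null."""
    target = set()
    for body, head in tgds:
        for sub in homomorphisms(body, source):
            fresh_nulls = {}  # one fresh null per existential variable, per match
            for p, args in head:
                inst = tuple(
                    (sub[a] if a in sub else fresh_nulls.setdefault(a, "_n%d" % next(_fresh)))
                    if is_var(a) else a
                    for a in args)
                target.add((p, inst))
    return target
```

For example, chasing {s1(a, b)} with the st-tgd s1(x, y) → ∃z. r(x, z) ∧ r(z, y) produces the two facts r(a, n) and r(n, b) for a single fresh labeled null n.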
Given an OBDM specification J = ⟨O, S, M⟩, where O is a TBox and M is a set of st-tgds, a non-ground source instance D for S, and a set E of egds, we denote with A_{D,E} the ABox computed as follows: (i) chase the non-ground source instance D w.r.t. M and E; (ii) freeze the instance (or, equivalently, the ABox with variables) obtained, i.e., the variables in this instance are now considered as constants. Note that such an ABox may not exist, due to the failure of the chase; in this case, we denote it with the symbol ⊥.

For an OBDM specification J = ⟨O, S, M⟩, and for a source database D for J (i.e., a ground instance over the schema S), we denote by Mod(J, D) the set of models for J relative to D, i.e., the interpretations I such that: (i) I satisfies O; (ii) ⟨D, I⟩ satisfies M. Given a query q over O, we denote by cert(q, J, D) the set of certain answers to q in J relative to D. It is defined as ∩_{I ∈ Mod(J,D)} q^I if Mod(J, D) ≠ ∅; otherwise, it is the set of all possible tuples of constants in D whose arity is the one of the query q. Furthermore, given a DL-Lite_A [5] TBox O and an ABox A, we are able to: (i) check whether ⟨O, A⟩ is satisfiable by computing the answer of a suitable boolean query Q_unsat(O) (a UCQ with at most one inequality per disjunct) over the ABox A considered as a relational database; we see Q_unsat(O) as the union of Q_=(O) (the UCQ containing every disjunct not comprising inequalities) and Q_≠(O) (the UCQ containing every disjunct comprising inequalities); (ii) compute the certain answers to a UCQ q over a satisfiable ⟨O, A⟩, denoted cert(q, ⟨O, A⟩), by producing a perfect reformulation PerfRef(q, O) of the query, and then computing the answers of PerfRef(q, O) over the ABox A considered as a relational database. See [6] for more details.

## 3 The notion of source-to-target rewriting

In what follows, we implicitly refer to (i) an OBDM specification J = ⟨O, S, M⟩; (ii) a query q_S over the source schema S; (iii) a query q_O over the ontology O.

As we said in the introduction, there are at least two different ways to formally define a source-to-target rewriting (s-to-t rewriting in the following) for each of the three variants, namely “exact”, “complete”, and “sound”. The first one is captured by the following definition.

###### Definition 1

q_O is a complete (resp., sound, exact) s-to-t rewriting of q_S with respect to J under the model-based semantics, if for each source database D and for each model I ∈ Mod(J, D), we have that q_S^D ⊆ q_O^I (resp., q_O^I ⊆ q_S^D, q_O^I = q_S^D).

Intuitively, a complete s-to-t rewriting q_O of q_S w.r.t. J under the model-based semantics is a query over O that, when evaluated over any model I ∈ Mod(J, D) for a source database D, returns all the answers of the evaluation of q_S over D. In other words, for every source database D, the query q_O over O captures all the semantics that q_S expresses over D. Similar arguments hold for the notions of sound and exact s-to-t rewriting under this semantics. Moreover, from the formal definition of source-to-target rewriting and the usual definition of target-to-source rewriting (simply called rewriting) used in data integration, it is easy to see that q_O is a complete (resp., sound) source-to-target rewriting of q_S w.r.t. J under the model-based semantics if and only if q_S is a sound (resp., complete) rewriting of q_O w.r.t. J, implying that q_O is an exact source-to-target rewriting of q_S w.r.t. J under the model-based semantics if and only if q_S is an exact rewriting of q_O w.r.t. J.

The second possible way to formally define a source-to-target rewriting is as follows.

###### Definition 2

q_O is a complete (resp., sound, exact) s-to-t rewriting of q_S with respect to J under the certain answers-based semantics, if for each source database D such that Mod(J, D) ≠ ∅, we have that q_S^D ⊆ cert(q_O, J, D) (resp., cert(q_O, J, D) ⊆ q_S^D, cert(q_O, J, D) = q_S^D).

In this new semantics, in order to capture a query q_S over S, we resort to the notion of certain answers. Indeed, a complete s-to-t rewriting of q_S w.r.t. J under the certain answers-based semantics is a query q_O over O such that, when we compute its certain answers for a source database D, we get all the answers of the evaluation of q_S over D. As before, similar arguments hold for the notions of sound and exact s-to-t rewriting under this semantics. Note also the strong correspondence between exact s-to-t rewritings under the certain answers-based semantics and the notion of perfect rewriting. We recall that a perfect rewriting of q_O w.r.t. J is a query over S that computes cert(q_O, J, D) for every source database D such that Mod(J, D) ≠ ∅ [8]. Indeed, we have that q_O is an exact s-to-t rewriting of q_S w.r.t. J under the certain answers-based semantics if and only if q_S is a perfect rewriting of q_O w.r.t. J. Note that the above observations imply that the two semantics are indeed different, since it is well-known that the notions of exact rewriting and perfect rewriting of q_O w.r.t. J are different. The difference between the two semantics is confirmed by the following example.

###### Example 1

Consider the OBDM specification J = ⟨O, S, M⟩ where: O = ∅ (i.e., no TBox assertions in O, whose alphabet comprises a role r); S contains a binary relation s_1 and a unary relation s_2; M = { s_1(x, y) → ∃z. r(x, z) ∧ r(z, y) }. Moreover, let q_S(x) := ∃y. s_1(x, y), and q_O(x) := ∃y, z. r(x, y) ∧ r(y, z).

It is easy to see that q_O is a sound s-to-t rewriting of q_S w.r.t. J under the certain answers-based semantics (more precisely, it is an exact s-to-t rewriting of q_S w.r.t. J under such semantics), while it is not sound under the model-based semantics. In fact, for the source database D with s_1^D = {(a, b)} and s_2^D = ∅, and for the model I with r^I = {(a, b), (b, b)}, we have q_S^D = {a}, and q_O^I = {a, b}. ∎

Intuitively, for the sound case, the model-based semantics is too strong, in the sense that under such semantics a model may contain not only facts depending on how the data in the sources are linked to O through M, but additionally arbitrary facts, with the only constraint of satisfying O. One might think that, in order to address this issue, it is sufficient to resort to a sort of minimization of the models of J. Actually, the above example shows that, even if we restrict the set of models Mod(J, D) to the set of minimal models (i.e., models I ∈ Mod(J, D) such that there is no model I' ∈ Mod(J, D) with I' ⊊ I), and adopt a semantics like the model-based one but restricted to the set of minimal models, q_O is still not a sound s-to-t rewriting (this can be seen by considering that the model I defined earlier is a minimal model).

Observe that the above considerations show the difference in the two semantics by referring to sound and exact s-to-t rewritings. It is interesting to ask whether the difference shows up when restricting our attention to complete rewritings. The following proposition deals with this question.

###### Proposition 1

q_O is a complete s-to-t rewriting of q_S with respect to J under the model-based semantics if and only if it is so under the certain answers-based semantics.

Proof (Sketch). One direction is trivial. Indeed, when q_O is a complete s-to-t rewriting of q_S with respect to J under the model-based semantics, by definition of certain answers, for each source database D such that Mod(J, D) ≠ ∅, we have that q_S^D ⊆ ∩_{I ∈ Mod(J,D)} q_O^I = cert(q_O, J, D). For the other direction, suppose that q_O is not a complete s-to-t rewriting of q_S w.r.t. J under the model-based semantics. It follows that there exist a source database D and a model I ∈ Mod(J, D) such that q_S^D ⊄ q_O^I, implying that q_S^D ⊄ cert(q_O, J, D) (since cert(q_O, J, D) ⊆ q_O^I), which, in turn, implies that q_O is not a complete s-to-t rewriting of q_S w.r.t. J under the certain answers-based semantics. ∎

Obviously, the query over the ontology that best captures a given query q_S over the source schema is the exact s-to-t rewriting of q_S. However, the following example shows that, even for very simple OBDM specifications, an exact s-to-t rewriting of even trivial queries may not exist.

###### Example 2

Consider the OBDM specification J = ⟨O, S, M⟩ where: O = ∅ (i.e., no TBox assertions in O, whose alphabet comprises a concept A); S contains two unary relations s_1 and s_2; M = { s_1(x) → A(x), s_2(x) → A(x) }. Moreover, let q_S(x) := s_1(x).

It is possible to show that the only sound s-to-t rewriting of q_S w.r.t. J under both semantics is the query q_⊥, which is obviously not a complete s-to-t rewriting of q_S w.r.t. J, neither under the model-based semantics nor under the certain answers-based semantics. On the other hand, the most immediate and intuitive complete s-to-t rewriting of q_S w.r.t. J is the query q_O(x) := A(x). Furthermore, as we will see in Section 5, this query is an “optimal” complete s-to-t rewriting of q_S w.r.t. J, where the term optimal will be precisely defined. ∎

As we said in the introduction, in the rest of this paper we focus on complete s-to-t rewritings. In particular, we will address both the recognition problem (see Section 4) and the finding problem (see Section 5) in a specific setting, characterized as follows:

• The ontology O in an OBDM specification J is expressed as a TBox in DL-Lite_A.

• The mapping M in J is a set of GLAV mapping assertions (or st-tgds), where each assertion expresses a correspondence between a conjunctive query over the source schema and a conjunctive query over the ontology.

• In the recognition problem, both the query q_S over the source schema and the query q_O over the ontology are conjunctive queries. Similarly, in the finding problem, the query q_S over the source schema is a conjunctive query.

## 4 The recognition problem for complete s-to-t rewritings

We implicitly refer to the setting described at the end of the previous section. The recognition problem associated with complete s-to-t rewritings is the following decision problem: given an OBDM specification J, a query q_S over the source schema S, and a query q_O over the ontology O, check whether q_O is a complete s-to-t rewriting of q_S with respect to J. The next lemma is the starting point of our solution.

###### Lemma 1

q_O is not a complete s-to-t rewriting of q_S with respect to J if and only if there is a valuation v of D_{q_S} and a model I ∈ Mod(J, v(D_{q_S})) such that v(head(q_S)) ∉ q_O^I.

###### Proof

“⟸” Suppose that there exist a valuation v of D = D_{q_S} and a model I ∈ Mod(J, v(D)) such that v(head(q_S)) ∉ q_O^I. Obviously, v(head(q_S)) ∈ q_S^{v(D)}. It follows that there exist a source database v(D), a model I ∈ Mod(J, v(D)), and a tuple t = v(head(q_S)) such that t ∈ q_S^{v(D)} and t ∉ q_O^I.

“⟹” Suppose that q_O is not a complete s-to-t rewriting of q_S w.r.t. J, i.e., there are a source database D', a model I ∈ Mod(J, D'), and a tuple t such that t ∈ q_S^{D'} and t ∉ q_O^I. The fact that t ∈ q_S^{D'} implies the existence of a homomorphism h from D = D_{q_S} to D' such that h(head(q_S)) = t. Note also that, since D' is a ground instance, h is a valuation v of D such that v(D) ⊆ D'. Obviously, I ∈ Mod(J, v(D)): this can be seen by considering that (i) I satisfies O, which holds from the supposition that I ∈ Mod(J, D'); and (ii) ⟨v(D), I⟩ satisfies M, by considering that ⟨D', I⟩ satisfies M (which holds from the supposition that I ∈ Mod(J, D')), that v(D) ⊆ D', and that the queries in M are monotone. It follows that there is a valuation v of D and a model I ∈ Mod(J, v(D)) such that v(head(q_S)) ∉ q_O^I. ∎

Relying on the above lemma, we are now ready to present the algorithm CheckComplete for the recognition problem.
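To convey the idea behind CheckComplete, the following Python sketch implements its behaviour for the special case of an empty TBox (so that no egds arise and every ABox is satisfiable): build the canonical instance of q_S, chase it through the mapping, freeze the result, and test whether the frozen head of q_S is among the answers of q_O over the resulting ABox. This is only an illustration under the stated assumptions, not the algorithm in full generality; the encoding of CQs as (head, body) pairs and all names are our own:

```python
from itertools import count

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def homomorphisms(body, data):
    """All substitutions for the variables of body mapping every atom into data."""
    results = []
    def extend(atoms, sub):
        if not atoms:
            results.append(dict(sub))
            return
        p, args = atoms[0]
        for (p2, args2) in data:
            if p2 != p or len(args2) != len(args):
                continue
            s, ok = dict(sub), True
            for a, b in zip(args, args2):
                if is_var(a):
                    if s.setdefault(a, b) != b:
                        ok = False
                        break
                elif a != b:
                    ok = False
                    break
            if ok:
                extend(atoms[1:], s)
    extend(list(body), {})
    return results

def chase_st_tgds(source, tgds, _fresh=count()):
    """Apply every st-tgd (body, head) to source, with fresh nulls for existentials."""
    target = set()
    for body, head in tgds:
        for sub in homomorphisms(body, source):
            nulls = {}
            for p, args in head:
                inst = tuple(
                    (sub[a] if a in sub else nulls.setdefault(a, "_n%d" % next(_fresh)))
                    if is_var(a) else a for a in args)
                target.add((p, inst))
    return target

def check_complete(q_s, q_o, tgds):
    """Empty-TBox sketch of CheckComplete: true iff q_o is a complete
    s-to-t rewriting of q_s w.r.t. the mapping given by tgds."""
    head_s, body_s = q_s
    # Canonical instance of q_s: its variables act as frozen constants.
    abox = chase_st_tgds(set(body_s), tgds)
    head_o, body_o = q_o
    answers = {tuple(sub[x] for x in head_o) for sub in homomorphisms(body_o, abox)}
    return tuple(head_s) in answers
```

For instance, with a mapping sending two unary source relations to the same concept A, the query A(x) is recognized as a complete rewriting of q_S(x) := s_1(x), while a query over an unrelated concept B is not.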

The next theorem establishes the correctness of the above algorithm.

###### Theorem 4.1

CheckComplete(J, q_S, q_O) terminates, and returns true if and only if q_O is a complete s-to-t rewriting of q_S w.r.t. J.

Proof (Sketch). Termination of the algorithm follows easily from the termination of the chase procedure, and from the obvious termination of computing the certain answers of a CQ over ⟨O, A⟩.

For one direction, suppose that the algorithm returns false, i.e., the ABox A_{D,E} (obtained by chasing D = D_{q_S} w.r.t. M and the egds E corresponding to Q_≠(O)) exists, ⟨O, A_{D,E}⟩ is satisfiable, and E(head(q_S)) ∉ cert(q_O, ⟨O, A_{D,E}⟩). Now, if we extend the freezing of this instance (i.e., its variables are now considered as constants) to D itself, it is easy to see that we obtain a valuation v of D such that A_{v(D),E} = A_{D,E}, and such that v(E(head(q_S))) ∉ cert(q_O, ⟨O, A_{D,E}⟩). Moreover, this fact implies, by the properties of certain answers, that there is at least one model I of ⟨O, A_{D,E}⟩, and hence (because ⟨v(D), I⟩ satisfies M) a model I ∈ Mod(J, v(D)), such that v(head(q_S)) ∉ q_O^I. It follows, from Lemma 1, that q_O is not a complete s-to-t rewriting of q_S w.r.t. J.

For the other direction, in the cases in which the chase of D fails or ⟨O, A_{D,E}⟩ is unsatisfiable, it is easy to see that, for every valuation v of D, either the chase of v(D) fails, or the ABox A_{v(D),E} is such that ⟨O, A_{v(D),E}⟩ is unsatisfiable, implying that, for every valuation v of D, Mod(J, v(D)) = ∅. It follows, from Lemma 1, that in this case q_O is a complete s-to-t rewriting of q_S w.r.t. J. In the case in which E(head(q_S)) ∈ cert(q_O, ⟨O, A_{D,E}⟩), instead, it is easy to see that, for every valuation v of D, either Mod(J, v(D)) = ∅, or, if we compute A_{v(D),E}, we have that v(E(head(q_S))) ∈ cert(q_O, ⟨O, A_{v(D),E}⟩). More generally, every ABox A' obtained by chasing v(D) w.r.t. M and E, and then choosing arbitrary constants for the possibly remaining variables, is such that v(E(head(q_S))) ∈ cert(q_O, ⟨O, A'⟩). Hence, for every model I of any such ⟨O, A'⟩, we have that v(head(q_S)) ∈ q_O^I. Also, we observe that the set of models Mod(J, v(D)) coincides with the set of all models of ⟨O, A'⟩ for all the possible ABoxes A' obtained using the above procedure. It follows that, for every possible valuation v of D and for every possible I ∈ Mod(J, v(D)), we have that v(head(q_S)) ∈ q_O^I, implying, from Lemma 1, that also in this case q_O is a complete s-to-t rewriting of q_S w.r.t. J. ∎

As for the complexity of the algorithm, we observe that: (i) it runs in PTime in the size of q_S. Indeed, computing D (the instance associated with the query q_S) can be done in linear time, and chasing an instance in the presence of a weakly acyclic set of tgds (as in our case) is PTime in the size of D (M and O are considered fixed); (ii) it runs in PTime in the size of O. Indeed, PerfRef(q_O, O) and the evaluation of the certain answers of q_O can both be computed in PTime in the size of O; (iii) it runs in ExpTime in the size of M. This can be seen from the obvious ExpTime process of transferring data from D to the ABox through M; (iv) the problem is NP-complete in the size of q_O, because computing the certain answers of a UCQ is NP-complete in the size of the query (query complexity).

## 5 Finding optimal complete s-to-t rewritings

In this section we study the problem of finding optimal complete s-to-t rewritings. The first question to ask is which rewriting to choose in the case where several complete rewritings exist. The obvious choice is to define the notion of “optimal” complete s-to-t rewriting: such a rewriting q_O is optimal if there is no complete s-to-t rewriting that is properly contained in q_O. In order to formalize this notion, we introduce the following definitions (where Mod(O) denotes the set of models of O).

###### Definition 3

q_1 is contained in q_2 with respect to O, denoted q_1 ⊑_O q_2, if for every model I ∈ Mod(O) we have that q_1^I ⊆ q_2^I. q_1 is properly contained in q_2 with respect to O, denoted q_1 ⊏_O q_2, if q_1 ⊑_O q_2 and for at least one model I ∈ Mod(O) we have that q_1^I ⊊ q_2^I.

###### Definition 4

q_O is an optimal complete s-to-t rewriting of q_S with respect to J, if q_O is a complete s-to-t rewriting of q_S with respect to J, and there exists no query q'_O such that q'_O is a complete s-to-t rewriting of q_S with respect to J and q'_O ⊏_O q_O.

We are now ready to present an algorithm for computing an optimal complete s-to-t rewriting of a query q_S over the source schema.
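To convey the idea behind FindOptimalComplete, the following Python sketch implements its behaviour for the special case of an empty TBox: chase the canonical instance of q_S through the mapping and read off the CQ associated with the resulting ABox, keeping the distinguished variables of q_S and turning the labeled nulls back into existentially quantified variables. Again, this is an illustration under the stated assumptions, not the full algorithm; the encoding of CQs as (head, body) pairs and all names are our own:

```python
from itertools import count

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def homomorphisms(body, data):
    """All substitutions for the variables of body mapping every atom into data."""
    results = []
    def extend(atoms, sub):
        if not atoms:
            results.append(dict(sub))
            return
        p, args = atoms[0]
        for (p2, args2) in data:
            if p2 != p or len(args2) != len(args):
                continue
            s, ok = dict(sub), True
            for a, b in zip(args, args2):
                if is_var(a):
                    if s.setdefault(a, b) != b:
                        ok = False
                        break
                elif a != b:
                    ok = False
                    break
            if ok:
                extend(atoms[1:], s)
    extend(list(body), {})
    return results

def chase_st_tgds(source, tgds, _fresh=count()):
    """Apply every st-tgd (body, head) to source, with fresh nulls for existentials."""
    target = set()
    for body, head in tgds:
        for sub in homomorphisms(body, source):
            nulls = {}
            for p, args in head:
                inst = tuple(
                    (sub[a] if a in sub else nulls.setdefault(a, "_n%d" % next(_fresh)))
                    if is_var(a) else a for a in args)
                target.add((p, inst))
    return target

def find_optimal_complete(q_s, tgds):
    """Empty-TBox sketch of FindOptimalComplete: chase the canonical instance
    of q_s and read off the CQ of the chased ABox, with the labeled nulls
    turned back into (existentially quantified) variables."""
    head, body = q_s
    abox = chase_st_tgds(set(body), tgds)
    rename = {}
    def term(t):
        if isinstance(t, str) and t.startswith("_"):  # labeled null -> fresh variable
            return rename.setdefault(t, "?z%d" % len(rename))
        return t
    new_body = sorted((p, tuple(term(a) for a in args)) for (p, args) in abox)
    return (tuple(head), new_body)
```

For instance, for the mapping s_1(x, y) → ∃z. r(x, z) ∧ r(z, y) and q_S(x) := ∃y. s_1(x, y), the sketch returns the CQ q(x) := ∃z_0, y. r(x, z_0) ∧ r(z_0, y).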

For the termination and the complexity of this algorithm, the same considerations made for the CheckComplete algorithm hold. In particular, FindOptimalComplete(J, q_S) terminates, and it runs in (i) PTime in the size of q_S; (ii) PTime in the size of O; (iii) ExpTime in the size of M. The correctness is established by the next theorem.

###### Theorem 5.1

FindOptimalComplete(J, q_S) returns an optimal complete s-to-t rewriting of q_S w.r.t. J.

Proof (Sketch). When the algorithm returns the query q_⊥, it is easy to see that, regardless of which query q_O we pick, if we run the algorithm CheckComplete(J, q_S, q_O), it returns true (also in this case, either the chase fails, or the ABox produced satisfies Q_≠(O)), and hence, by Theorem 4.1, every q_O is a complete s-to-t rewriting of q_S w.r.t. J. It follows that q_⊥ is also a complete s-to-t rewriting, and, by definition of such a query, it is an optimal complete s-to-t rewriting of q_S w.r.t. J.

When the algorithm returns a query q different from q_⊥, if we run the algorithm CheckComplete(J, q_S, q), it computes an ABox A such that E(head(q_S)) ∈ cert(q, ⟨O, A⟩), because A corresponds exactly to D_q (before being frozen), extended with the equalities E(t) for all terms t in head(q_S) not appearing in D_q. It follows that, also in this case, CheckComplete(J, q_S, q) returns true, implying, from Theorem 4.1, that q is a complete s-to-t rewriting of q_S w.r.t. J.

We now prove that the query q returned by the algorithm is also an optimal complete s-to-t rewriting of q_S w.r.t. J. In particular, suppose that there exists a query q' such that q' ⊏_O q, i.e., q' ⊑_O q, and there are a model I ∈ Mod(O) and a tuple t such that t ∈ q^I and t ∉ q'^I. The fact that t ∈ q^I implies the existence of a valuation of all the variables in q that makes its body true in I. Note that we can extend this valuation by assigning a fresh constant to every variable appearing in D = D_{q_S} and not appearing in q. The valuation v obtained is now a valuation for D, and obviously t = v(head(q_S)). Moreover, if we apply the same valuation to the instance D, it is easy to see that we obtain a ground instance v(D) such that t ∈ q_S^{v(D)} (we recall that q_S is the CQ associated with the instance D). Obviously, the image under this valuation of the atoms of q is contained in I, and hence ⟨v(D), I⟩ satisfies M, because the queries in the mapping are monotone. Moreover, we also have that t ∉ q'^I (this holds from the initial supposition). Hence, for the source database v(D) there are a model I ∈ Mod(J, v(D)) and a tuple t such that t ∈ q_S^{v(D)} and t ∉ q'^I, implying that q' is not a complete s-to-t rewriting of q_S w.r.t. J. ∎

It is easy to prove that the query returned by the algorithm is not only an optimal complete s-to-t rewriting of q_S w.r.t. J, but also the unique (up to equivalence) optimal complete s-to-t rewriting of q_S w.r.t. J. Furthermore, the above result implies that an optimal complete s-to-t rewriting of q_S w.r.t. J can always be expressed as a CQ.

## 6 Conclusion

We have introduced the notion of Ontology-based Open Data Publishing, whose idea is to use an OBDM specification as a basis for carrying out the task of publishing high-quality open data.

In this paper, we have focused on the bottom-up approach to ontology-based open data publishing, we have introduced the notion of source-to-target rewriting, and we have developed algorithms for two problems related to complete source-to-target rewritings, namely the recognition and the finding problem. We plan to continue our work in several directions. In particular, we plan to investigate the notion of sound rewriting under the different semantics. Also, we want to study the top-down approach, especially with the goal of devising techniques for deriving the intensional knowledge to associate with datasets in order to document their content in a suitable way.

## References

• [1] S. Abiteboul, R. Hull, and V. Vianu. Foundations of Databases. Addison-Wesley, 1995.
• [2] F. N. Afrati and P. G. Kolaitis. Answering aggregate queries in data exchange. In Proc. of PODS, pages 129–138, 2008.
• [3] N. Antonioli, F. Castanò, S. Coletta, S. Grossi, D. Lembo, M. Lenzerini, A. Poggi, E. Virardi, and P. Castracane. Ontology-based data management for the Italian public debt. pages 372–385, 2014.
• [4] F. Baader, D. Calvanese, D. McGuinness, D. Nardi, and P. F. Patel-Schneider, editors. The Description Logic Handbook: Theory, Implementation and Applications. Cambridge University Press, 2003.
• [5] D. Calvanese, G. De Giacomo, D. Lembo, M. Lenzerini, A. Poggi, M. Rodríguez-Muro, and R. Rosati. Ontologies and databases: The DL-Lite approach. In Reasoning Web, volume 5689 of LNCS, pages 255–356, 2009.
• [6] D. Calvanese, G. De Giacomo, D. Lembo, M. Lenzerini, and R. Rosati. Tractable reasoning and efficient query answering in description logics: The DL-Lite family. J. of Automated Reasoning, 39(3):385–429, 2007.
• [7] D. Calvanese, G. De Giacomo, D. Lembo, M. Lenzerini, and R. Rosati. Path-based identification constraints in description logics. In Proc. of KR, pages 231–241, 2008.
• [8] D. Calvanese, G. De Giacomo, M. Lenzerini, and M. Y. Vardi. View-based query processing: On the relationship between rewriting, answering and losslessness. In Proc. of ICDT, volume 3363 of LNCS, pages 321–336, 2005.
• [9] A. K. Chandra and P. M. Merlin. Optimal implementation of conjunctive queries in relational data bases. In Proc. of STOC, pages 77–90, 1977.
• [10] R. Fagin, P. G. Kolaitis, R. J. Miller, and L. Popa. Data exchange: Semantics and query answering. In Proc. of ICDT, pages 207–224, 2003.
• [11] R. Fagin, P. G. Kolaitis, and L. Popa. Data exchange: Getting to the core. ACM Trans. Database Syst., 30(1):174–210, 2005.
• [12] A. Hernich. Answering non-monotonic queries in relational data exchange. In Proc. of ICDT, pages 143–154, 2010.
• [13] A. Hernich, L. Libkin, and N. Schweikardt. Closed world data exchange. ACM Trans. Database Syst., 36(2):14:1–14:40, 2011.
• [14] T. Imielinski and W. Lipski, Jr. Incomplete information in relational databases. J. ACM, 31(4):761–791, 1984.
• [15] M. Lenzerini. Data integration: A theoretical perspective. In Proc. of PODS, pages 233–246, 2002.
• [16] M. Lenzerini. Ontology-based data management. pages 5–6, 2011.
• [17] L. Libkin and C. Sirangelo. Data exchange and schema mappings in open and closed worlds. In Proc. of PODS, pages 139–148, 2008.
• [18] A. Poggi, D. Lembo, D. Calvanese, G. De Giacomo, M. Lenzerini, and R. Rosati. Linking data to ontologies. J. on Data Semantics, X:133–173, 2008.
• [19] M. Y. Vardi. The complexity of relational query languages. In Proc. of STOC, pages 137–146, 1982.
• [20] C. Zaniolo. Database relations with null values. In Proc. of PODS, pages 27–33, 1982.