1 Introduction
In many aspects of our society there is growing awareness of, and consensus on, the need for data-driven approaches that are resilient, transparent and fully accountable. But to achieve a data-driven society, it is necessary that the data needed for public goods are readily available. Thus, it is not surprising that in recent years both public and private organizations have been faced with the issue of publishing Open Data, in particular with the goal of providing data consumers with suitable information to capture the semantics of the data they publish. Significant efforts have been devoted to defining guidelines concerning the management and publication of Open Data. Notably, the W3C (World Wide Web Consortium: https://www.w3.org/) has formed a working group whose objective is the release of a first draft on Open Data standards (Data on the Web Best Practices: https://www.w3.org/TR/dwbp/). The document focuses on areas such as metadata, data formats, data licenses, data quality, etc., which are treated in very general terms, with no reference to any specific technical methodology. More generally, although there are several works on platforms and architectures for publishing Open Data, there is still no formal and comprehensive methodology supporting an organization in (i) deciding which data to publish, and (ii) carrying out precise procedures for publishing and documenting high-quality data. One of the reasons for this lack of formal methods is that the problem of Open Data publishing is strictly related to the problem of managing the data within an organization. Indeed, a necessary prerequisite for an organization to publish relevant and meaningful data is to be able to manage, maintain and document its own information system. The recent paradigm of Ontology-based Data Management (OBDM) [16] (used and experimented with in practice in recent years, see, e.g., [3]) is an attempt to provide the principles and the techniques for addressing this challenge.
An OBDM system is constituted by an ontology, the data sources forming the information system, and the mapping between the ontology and the sources. The ontology is a formal representation of the domain underlying the information system, and the mapping is a precise specification of the relationship between the data at the sources and the concepts in the ontology.
In this paper we argue that the OBDM paradigm can provide a formal basis for a principled approach to publishing high-quality, semantically annotated Open Data. The most basic task in Open Data publishing is the extraction of the correct content for the dataset(s) to be published, where by "content" we mean both the extensional information (i.e., facts about the domain of interest) conveyed by the dataset, and the intensional knowledge relevant to document such facts (e.g., concepts that intensionally describe facts), and by "correct" we mean that the aspect of the domain captured by the dataset is coherent with a requirement formally expressed in the organization.
Current practices for publishing Open Data focus essentially on providing extensional information (often in very simple forms, such as CSV files), and they carry out the task of documenting data mostly by using metadata expressed in natural language, or in terms of record structures. As a consequence, the semantics of datasets is not formally expressed in a machine-readable form. Conversely, OBDM opens up the possibility of a new way of publishing data, based on the idea of annotating data items with the ontology elements that describe them in terms of the concepts in the domain of the organization. When an OBDM system is available in an organization, an obvious way to proceed with Open Data publication is as follows: (i) express the dataset to be published in terms of a SPARQL query over the ontology, (ii) compute the certain answers to the query, and (iii) publish the result of the certain answer computation, using the query expression and the ontology as a basis for annotating the dataset with suitable metadata expressing its semantics. We call this method top-down. Using this method, the ontology is at the heart of the task: it is used for expressing the content of the dataset to be published (in terms of a query), and it is used, together with the query, for annotating the published data.
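The three steps above can be sketched as a small publishing routine. This is a minimal sketch assuming the certain answers have already been computed by an OBDM engine; the query text, the variable names, and the annotation format are illustrative assumptions, not part of any standard.

```python
# Sketch of the top-down method: the dataset is specified as a query over
# the ontology, its (precomputed) certain answers are packaged as the
# published dataset, and the query plus ontology elements are emitted as
# machine-readable metadata. All names are illustrative.
import csv, io, json

def publish(query_text, answer_vars, certain_answers, concepts):
    """Package certain answers as CSV plus machine-readable annotations."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(answer_vars)           # header row
    writer.writerows(certain_answers)      # the certain answers
    metadata = {
        "specification": query_text,       # the query over the ontology
        "annotations": concepts,           # ontology element per column
    }
    return buf.getvalue(), json.dumps(metadata)

data, meta = publish(
    "SELECT ?p WHERE { ?p a :PublicEmployee }",   # hypothetical query
    ["p"],
    [["alice"], ["bob"]],
    {"p": ":PublicEmployee"},
)
```

The point of the sketch is that the published artifact carries its own semantic specification: a consumer can read back, from the metadata, the ontology-level query that defines the dataset.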
Unfortunately, in many organizations (for example, in Public Administrations) it may be the case that people are not yet ready to manage their information systems through the OBDM paradigm. In these cases, the bottom-up approach could be more appropriate. For example, in the Italian Public Administration system, it is very unlikely that local administration people are able to express their queries over the ontology using SPARQL. Typically, the ontology and the mapping have been designed by third parties, with little or no involvement of the IT people responsible for the local administration information system. In other words, these people probably cannot follow the top-down approach, and they are more comfortable expressing the specification of the dataset to be published directly in terms of the source structures (i.e., the relational tables in their databases), or, more generally, in terms of a view over the sources. But how can we automatically publish both the content and the semantics of the dataset if its specification is given in terms of the data sources? We argue that we can achieve this goal by following what we call the bottom-up approach: the organization expresses its publishing requirement as a query over the sources, and, by using the ontology and the mapping, a suitable algorithm computes the corresponding query over the ontology. With such a query at hand, we have reduced the problem in such a way that the top-down approach can now be followed, and the required data can be published according to the method described above. So, at the heart of the bottom-up approach there is a conceptual issue to address: "Given a query over the sources, which is the query over the ontology that best characterizes it (independently of the current source database)?"
Note that the answer to this question is relevant also for other tasks related to the management of the information system, e.g., the task of explaining the semantics of the various data sources within the organization. The question implicitly refers to a sort of reverse engineering problem, which is a novel aspect in the investigation of both OBDM and data integration. Indeed, most of (if not all) the literature about managing data sources through an ontology (see, e.g., [18, 5]), or, more generally, about data integration [15], assumes that the user query is expressed over the global schema, and the goal is to find a rewriting (i.e., a query over the source schema) that captures the original query in the best way, independently of the current source database. Here, the problem is reversed, because we start with a source query and we aim at deriving a corresponding query over the ontology, called a source-to-target rewriting.
In this paper we study the bottom-up approach described above, and provide the following contributions.

We introduce the concept of source-to-target rewriting (see Section 3), the main technical notion underlying the bottom-up approach, and we describe two computational problems related to it, namely the recognition problem and the finding problem. The former aims at checking whether a query over the ontology is a source-to-target rewriting of a given query over the sources, taking into account the mapping between the sources and the ontology. The latter aims at computing a suitable source-to-target rewriting of a given source query, with respect to the mapping.

We discuss two different semantics for source-to-target rewritings, one based on the logical models of the OBDM specification, and one based on certain answers. The former is in some sense the natural choice, given the first-order semantics behind OBDM. The latter is a significant alternative that may better capture the intuition of a user who is accustomed to thinking of query semantics in terms of certain answers.

We show that, although the ideal notion is that of "exact" source-to-target rewriting, it is important to resort to approximations of the exact rewriting when exactness cannot be achieved. For this reason, we introduce the notions of sound and complete source-to-target rewritings.
2 Preliminaries
We assume familiarity with classical databases [1], Description Logics [4], and the OBDM paradigm. In this section, we (i) review the most basic notions concerning nonground instances and their correspondence with conjunctive queries; (ii) briefly discuss the chase of a possibly nonground instance; (iii) fix the notation we use in the following for the OBDM paradigm.
For a possibly nonground instance D, we assume that each value in dom(D), i.e., the set of values occurring in D, comes from the union of two fixed disjoint infinite sets: the set Const of all constants, and the set Var of all labeled nulls. We also let Dom = Const ∪ Var. In particular, each labeled null in a nonground instance is treated as an unknown value (and hence as incomplete information), rather than as a nonexistent value [20]. Thus, a nonground instance represents a number of ground instances, obtained by assigning constants to each labeled null. More precisely, let D be a nonground instance, and let v be a mapping from the labeled nulls of D to Const. Then v is called a valuation of D, and we indicate with v(D) the ground instance obtained from D by replacing everywhere each labeled null n with v(n). We also extend valuations to tuples, that is, given a tuple t̄ = (t_1, …, t_n) of both constants and labeled nulls, with v(t̄) we indicate the tuple (t'_1, …, t'_n), where t'_i = t_i if t_i is a constant; otherwise (t_i is a labeled null), t'_i = v(t_i). Given an instance D, it is possible to construct in linear time a boolean CQ q_D that fully captures it, and vice versa. We also let q_D^f denote the transformation of q_D obtained by removing the existential quantification of its variables. Moreover, given a nonboolean CQ q (with x̄ as distinguished variables), we associate to it the instance D_q by considering the variables in x̄ as if they were existentially quantified. For ease of presentation, we extend CQs to allow also the queries q_⊤ and q_⊥ (i.e., the queries that are always true and always false, respectively), with their usual meaning. We also denote with t̄_q the tuple composed of the terms in the head of q.
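The correspondence between nonground instances and valuations can be sketched concretely. In this minimal sketch the encoding is an illustrative assumption: constants are strings and labeled nulls are integers.

```python
# A nonground instance as a set of facts whose arguments are constants
# (strings) or labeled nulls (ints, illustrative convention). A valuation
# maps every labeled null to a constant, yielding a ground instance.
Null = int  # labeled nulls; constants are strings

def apply_valuation(instance, val):
    """Ground a nonground instance by replacing each labeled null."""
    return {
        (pred, tuple(val[a] if isinstance(a, Null) else a for a in args))
        for (pred, args) in instance
    }

D = {("s1", ("a", 0)), ("s2", (0,))}      # 0 is a labeled null
ground = apply_valuation(D, {0: "b"})
assert ground == {("s1", ("a", "b")), ("s2", ("b",))}
```

Note how the same null is replaced consistently in every fact, which is exactly what makes a nonground instance stand for the set of its groundings.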
Given a source schema S, a target schema T, a set Σ_st of s-t tgds (i.e., assertions of the form φ(x̄) → ∃ȳ ψ(x̄, ȳ), where φ(x̄) is a CQ over S, and ψ(x̄, ȳ) is a CQ over T), and a set Σ_t of egds (i.e., assertions of the form ψ(x̄) → x_i = x_j, where ψ(x̄) is a CQ over T, and x_i, x_j are among the variables in x̄), the chase procedure of a possibly nonground source instance D consists in: (i) the chase of D w.r.t. Σ_st, where, for every s-t tgd φ(x̄) → ∃ȳ ψ(x̄, ȳ) in Σ_st and for every tuple t̄ such that φ(t̄) holds in D, new facts are introduced in the instance of the target schema so that ψ(t̄, n̄) holds, where n̄ consists in a fresh tuple of distinct labeled nulls coming from an infinite set disjoint from Var; (ii) the chase of the resulting instance w.r.t. Σ_t, where, for every egd ψ(x̄) → x_i = x_j and for every tuple t̄ such that ψ(t̄) holds and t_i ≠ t_j, we equate the two terms. Equating t_i with t_j means choosing one of the two so that the other is replaced everywhere by the one chosen. In particular, if one is a labeled null and the other is a constant, then the chase chooses the constant; if both are labeled nulls, one coming from Var and the other freshly introduced, it always chooses the one coming from Var; if both are (distinct) constants, then the chase fails. Moreover, with sub_{Σ_t}(·) we denote the substitution resulting from the equalities applied by the chase w.r.t. a set Σ_t of egds on variables coming from Var; this can be obtained by keeping track of the substitutions applied by the chase. For example, if the chase equates the variable x_1 with the variable x_2, then equates the variable x_2 with the variable x_3, and then x_3 with the constant c, given the tuple (x_1, x_4), sub_{Σ_t}((x_1, x_4)) indicates the tuple (c, x_4). Note that we can compute the certain answers of a boolean union of CQs (UCQ) q with at most one inequality per disjunct by splitting q into a boolean UCQ q_≠ with exactly one inequality per disjunct and a boolean UCQ q_= with no inequality per disjunct. The key idea is that the negation of q_≠ consists in a set of egds; hence, the certain answers of q can be computed by applying the chase procedure w.r.t. this set of egds over the instance produced by the chase of D w.r.t. Σ_st and Σ_t, where, if the chase fails, then the answer is true; otherwise, if the instance produced satisfies one of the conjunctive queries in q_=, then the answer is true, else the answer is false. We refer to [10] for more details.
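Step (i) of the chase can be sketched in a few lines. The sketch below is a simplification under stated assumptions: tgd bodies consist of a single atom (the general case evaluates an arbitrary CQ body), the egd step is omitted, and the encoding of tgds is illustrative.

```python
# A sketch of step (i) of the chase: applying s-t tgds whose bodies are a
# single atom. Each tgd is encoded as ((body_pred, body_vars), head_atoms),
# and fresh labeled nulls are invented for existential head variables.
import itertools

_fresh = itertools.count()          # source of fresh labeled nulls

def chase_step(instance, tgds):
    target = set()
    for (b_pred, b_vars), head in tgds:
        for pred, args in instance:
            if pred != b_pred or len(args) != len(b_vars):
                continue
            env = dict(zip(b_vars, args))       # bind body variables
            for h_pred, h_vars in head:
                for v in h_vars:
                    if v not in env:            # existential variable
                        env[v] = ("null", next(_fresh))
                target.add((h_pred, tuple(env[v] for v in h_vars)))
    return target

# s1(x, y) -> EXISTS z . P(x, z) AND P(z, y)
tgds = [(("s1", ("x", "y")), [("P", ("x", "z")), ("P", ("z", "y"))])]
out = chase_step({("s1", ("a", "b"))}, tgds)
# out contains P(a, n) and P(n, b) for one fresh labeled null n
```

Observe that the existential variable z receives a single fresh null shared by both head atoms, which is the defining feature of the tgd chase step.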
Given an OBDM specification J = ⟨O, S, M⟩, where O is a TBox and M is a set of s-t tgds, and given a nonground source instance D for J and a set Σ of egds, we denote with A_{D,Σ} the ABox computed as follows: (i) chase the nonground source instance D w.r.t. M and Σ; (ii) freeze the instance (or, equivalently, the ABox with variables) obtained, i.e., the variables in this instance are now considered as constants. Note that such an ABox may also not exist, due to the failure of the chase; in this case, we denote it with the symbol ⊥.
For an OBDM specification J = ⟨O, S, M⟩, and for a source database D for J (i.e., a ground instance over the schema S), we denote by Mod_D(J) the set of models for J relative to D, i.e., the set of interpretations I such that: (i) I satisfies O; (ii) ⟨D, I⟩ satisfies M. Given a query q over O, we denote by cert(q, J, D) the set of certain answers to q in J relative to D. It is defined as ∩_{I ∈ Mod_D(J)} q^I if Mod_D(J) ≠ ∅; otherwise, it is AllTup(q, D), where AllTup(q, D) is the set of all possible tuples of constants in D whose arity is the one of the query q. Furthermore, given a DL-Lite_A [5] TBox O and an ABox A, we are able to: (i) check whether ⟨O, A⟩ is satisfiable by computing the answer of a suitable boolean query q_unsat (a UCQ with at most one inequality per disjunct) over the ABox considered as a relational database; we see q_unsat as the union of q_unsat^= (the UCQ containing every disjunct of q_unsat not comprising inequalities) and q_unsat^≠ (the UCQ containing every disjunct of q_unsat comprising inequalities); (ii) compute the certain answers to a UCQ q over a satisfiable ⟨O, A⟩, denoted with cert(q, ⟨O, A⟩), by producing a perfect reformulation (denoted as a function PerfectRef(·, ·)) of such a query, and then computing the answers of PerfectRef(q, O) over the ABox considered as a relational database. See [6] for more details.
3 The notion of source-to-target rewriting
In what follows, we implicitly refer to (i) an OBDM specification J = ⟨O, S, M⟩; (ii) a query q_S over the source schema S; (iii) a query q_O over the ontology O.
As we said in the introduction, there are at least two different ways to formally define a source-to-target rewriting (s-to-t rewriting in the following), for each of the three variants, namely "exact", "complete", and "sound". The first one is captured by the following definition.
Definition 1
q_O is a complete (resp., sound, exact) s-to-t rewriting of q_S with respect to J under the model-based semantics, if for each source database D and for each model I ∈ Mod_D(J), we have that q_S^D ⊆ q_O^I (resp., q_O^I ⊆ q_S^D, q_O^I = q_S^D).
Intuitively, a complete s-to-t rewriting q_O of q_S w.r.t. J under the model-based semantics is a query over O that, when evaluated over any model I ∈ Mod_D(J) for a source database D, returns all the answers of the evaluation of q_S over D. In other words, for every source database D, the query q_O over O captures all the semantics that q_S expresses over D. Similar arguments hold for the notions of sound and exact s-to-t rewriting under this semantics. Moreover, from the formal definition of source-to-target rewriting and the usual definition of target-to-source rewriting (simply called rewriting) used in data integration, it is easy to see that q_O is a complete (resp., sound) source-to-target rewriting of q_S w.r.t. J under the model-based semantics if and only if q_S is a sound (resp., complete) rewriting of q_O w.r.t. J, implying that q_O is an exact source-to-target rewriting of q_S w.r.t. J under the model-based semantics if and only if q_S is an exact rewriting of q_O w.r.t. J.
The second possible way to formally define a sourcetotarget rewriting is as follows.
Definition 2
q_O is a complete (resp., sound, exact) s-to-t rewriting of q_S with respect to J under the certain answers-based semantics, if for each source database D such that Mod_D(J) ≠ ∅, we have that q_S^D ⊆ cert(q_O, J, D) (resp., cert(q_O, J, D) ⊆ q_S^D, cert(q_O, J, D) = q_S^D).
In this new semantics, in order to capture a query q_S over S, we resort to the notion of certain answers. Indeed, a complete s-to-t rewriting q_O of q_S w.r.t. J under the certain answers-based semantics is a query over O such that, when we compute its certain answers for a source database D, we get all the answers of the evaluation of q_S over D. As before, similar arguments hold for the notions of sound and exact s-to-t rewriting under this semantics. Note also the strong correspondence between the notion of exact s-to-t rewriting under the certain answers-based semantics and the notion of perfect rewriting. We recall that a perfect rewriting of q_O w.r.t. J is a query over S that computes cert(q_O, J, D) for every source database D such that Mod_D(J) ≠ ∅ [8]. Indeed, we have that q_O is an exact s-to-t rewriting of q_S w.r.t. J under the certain answers-based semantics if and only if q_S is a perfect rewriting of q_O w.r.t. J. Note that the above observations imply that the two semantics are indeed different, since it is well-known that the two notions of exact rewriting and perfect rewriting of q_O w.r.t. J are different. The difference between the two semantics is confirmed by the following example.
Example 1
Let O = ∅ (i.e., no TBox assertions in O); S contains a binary relation s_1 and a unary relation s_2; M = { s_1(x, y) → ∃z (P(x, z) ∧ P(z, y)), s_2(x) → ∃y P(x, y) }; q_S(x, y) :- s_1(x, y); q_O(x, y) :- P(x, z), P(z, y).
It is easy to see that q_O is a sound s-to-t rewriting of q_S w.r.t. J under the certain answers-based semantics (more precisely, it is an exact s-to-t rewriting of q_S w.r.t. J under such semantics), while it is not sound under the model-based semantics. In fact, for the source database D with s_1^D = {(a, b)} and s_2^D = ∅, and for the model I ∈ Mod_D(J) with P^I = {(a, a), (a, b)}, we have q_S^D = {(a, b)}, and q_O^I = {(a, a), (a, b)}. ∎
Intuitively, for the sound case, the model-based semantics is too strong, in the sense that under such semantics a model may contain not only facts depending on how data in the source are linked to O through M, but additionally arbitrary facts, with the only constraint of satisfying O. One might think that, in order to address this issue, it is sufficient to resort to a sort of minimization of the models of J. Actually, the above example shows that, even if we restrict the set of models Mod_D(J) to the set of minimal models (i.e., models I such that (i) I ∈ Mod_D(J) and (ii) there is no model I' ∈ Mod_D(J) such that I' ⊊ I), and adopt a semantics like the model-based one but restricted to the set of minimal models, q_O is still not a sound s-to-t rewriting (this can be seen by considering that the model I defined earlier is a minimal model).
Observe that the above considerations show the difference between the two semantics by referring to sound and exact s-to-t rewritings. It is interesting to ask whether the difference also shows up when restricting our attention to complete rewritings. The following proposition deals with this question.
Proposition 1
q_O is a complete s-to-t rewriting of q_S with respect to J under the model-based semantics if and only if it is so under the certain answers-based semantics.
Proof (Sketch). One direction is trivial. Indeed, when q_O is a complete s-to-t rewriting of q_S with respect to J under the model-based semantics, by definition of certain answers, for each source database D such that Mod_D(J) ≠ ∅ we have that q_S^D ⊆ cert(q_O, J, D). For the other direction, suppose that q_O is not a complete s-to-t rewriting of q_S w.r.t. J under the model-based semantics. It follows that there exist a source database D and a model I ∈ Mod_D(J) such that q_S^D ⊄ q_O^I, implying that q_S^D ⊄ cert(q_O, J, D), which, in turn, implies that q_O is not a complete s-to-t rewriting of q_S w.r.t. J under the certain answers-based semantics. ∎
Obviously, the query over the ontology that best captures a given query over the source schema is the exact s-to-t rewriting of the latter. However, the following example shows that, even for very simple OBDM specifications, an exact s-to-t rewriting of even trivial queries may not exist.
Example 2
Let O = ∅ (i.e., no TBox assertions in O); S contains two unary relations s_1 and s_2; M = { s_1(x) → A(x), s_2(x) → A(x) }; q_S(x) :- s_1(x).
It is possible to show that the only sound s-to-t rewriting of q_S w.r.t. J under both semantics is the query q_⊥, which is obviously not a complete s-to-t rewriting of q_S w.r.t. J, neither under the model-based semantics, nor under the certain answers-based semantics. On the other hand, the most immediate and intuitive complete s-to-t rewriting of q_S w.r.t. J is the query q_O(x) :- A(x). Furthermore, as we will see in Section 5, this query is an "optimal" complete s-to-t rewriting of q_S w.r.t. J, where the term optimal will be precisely defined. ∎
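The phenomenon can be checked concretely: when two source relations feed the same ontology concept, no query over the ontology can separate their contributions, so the best complete rewriting necessarily over-approximates the source query. The relation and concept names below are illustrative.

```python
# Two source relations map to the same concept A; the ontology-level
# query over A is complete for q_S(x) :- s1(x) but not sound, because it
# also returns everything contributed by s2. Names are illustrative.
def certain_A(s1, s2):
    """Certain answers of A(x) under s1(x) -> A(x) and s2(x) -> A(x)."""
    return set(s1) | set(s2)

s1, s2 = {"a"}, {"b"}
q_S_answers = set(s1)              # q_S(x) :- s1(x)
cert = certain_A(s1, s2)
assert q_S_answers <= cert         # complete: no answer of q_S is lost
assert not (cert <= q_S_answers)   # not sound: "b" is an extra answer
```

This is exactly why an exact rewriting need not exist and why the paper resorts to sound and complete approximations.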
As we said in the introduction, in the rest of this paper we focus on complete s-to-t rewritings. In particular, we will address both the recognition problem (see Section 4) and the finding problem (see Section 5) in a specific setting, characterized as follows:

The ontology O in an OBDM specification J = ⟨O, S, M⟩ is expressed as a TBox in DL-Lite_A.

The mapping M in J is a set of GLAV mapping assertions (or s-t tgds), where each assertion expresses a correspondence between a conjunctive query over the source schema and a conjunctive query over the ontology.

In the recognition problem, both the query q_S over the source schema and the query q_O over the ontology are conjunctive queries. Similarly, in the finding problem, the query q_S over the source schema is a conjunctive query.
4 The recognition problem for complete s-to-t rewritings
We implicitly refer to the setting described at the end of the previous section. The recognition problem associated to complete s-to-t rewritings is the following decision problem: given an OBDM specification J = ⟨O, S, M⟩, a query q_S over the source schema S, and a query q_O over the ontology O, check whether q_O is a complete s-to-t rewriting of q_S with respect to J. The next lemma is the starting point of our solution.
Lemma 1
q_O is not a complete s-to-t rewriting of q_S with respect to J if and only if there is a valuation v of D_{q_S} and a model I ∈ Mod_{v(D_{q_S})}(J) such that v(t̄_{q_S}) ∉ q_O^I.
Proof
"⇐" Suppose that there exist a valuation v of D_{q_S} and a model I ∈ Mod_{v(D_{q_S})}(J) such that v(t̄_{q_S}) ∉ q_O^I. Obviously, v(t̄_{q_S}) ∈ q_S^{v(D_{q_S})}. It follows that there exist a source database D = v(D_{q_S}), a model I ∈ Mod_D(J), and a tuple t̄ = v(t̄_{q_S}) such that t̄ ∈ q_S^D and t̄ ∉ q_O^I.
"⇒" Suppose that q_O is not a complete s-to-t rewriting of q_S w.r.t. J, i.e., there are a source database D, a model I ∈ Mod_D(J), and a tuple t̄ such that t̄ ∈ q_S^D and t̄ ∉ q_O^I. The fact that t̄ ∈ q_S^D implies the existence of a homomorphism h from D_{q_S} to D such that h(t̄_{q_S}) = t̄. Note also that, since D is a ground instance, h is a valuation of D_{q_S} such that h(t̄_{q_S}) ∉ q_O^I. Obviously, I ∈ Mod_{h(D_{q_S})}(J); this can be seen by considering that (i) I satisfies O, which is true from the supposition that I ∈ Mod_D(J); and (ii) ⟨h(D_{q_S}), I⟩ satisfies M, which is true by considering that ⟨D, I⟩ satisfies M (which holds from the supposition that I ∈ Mod_D(J)), that h(D_{q_S}) ⊆ D, and that the queries in M are monotone queries. It follows that there are a valuation h of D_{q_S} and a model I ∈ Mod_{h(D_{q_S})}(J) such that h(t̄_{q_S}) ∉ q_O^I. ∎
Relying on the above lemma, we are now ready to present the algorithm CheckComplete for the recognition problem.
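Following the behavior established by the lemma and by the correctness proof below, the essence of CheckComplete is: freeze the body of q_S into an instance, chase it through the mapping (and the egds induced by the ontology), and test whether the frozen distinguished tuple is among the certain answers of q_O. The sketch below covers only the special case of an empty TBox, where no egds and no perfect reformulation are needed; the encodings of queries and tgds are illustrative assumptions.

```python
# CheckComplete, sketched for an empty TBox. Queries are encoded as
# (head_vars, body_atoms); tgds as ((body_pred, body_vars), head_atoms)
# with single-atom bodies. All encodings are illustrative.
from itertools import count, product

_fresh = count()

def chase(instance, tgds):
    """One round of the tgd chase (single-atom tgd bodies)."""
    out = set()
    for (b_pred, b_vars), head in tgds:
        for pred, args in instance:
            if pred == b_pred and len(args) == len(b_vars):
                env = dict(zip(b_vars, args))
                for h_pred, h_vars in head:
                    for v in h_vars:
                        if v not in env:               # existential var
                            env[v] = ("null", next(_fresh))
                    out.add((h_pred, tuple(env[v] for v in h_vars)))
    return out

def eval_cq(body, head_vars, facts):
    """All answers of a CQ over a set of (frozen) facts."""
    answers = set()
    candidates = [[args for p, args in facts if p == pred]
                  for pred, _ in body]
    for choice in product(*candidates):
        env, ok = {}, True
        for (_, vars_), args in zip(body, choice):
            for v, a in zip(vars_, args):
                ok = ok and env.setdefault(v, a) == a
        if ok:
            answers.add(tuple(env[v] for v in head_vars))
    return answers

def check_complete(q_S, q_O, tgds):
    head_vars, body = q_S
    frozen = {(pred, tuple(vars_)) for pred, vars_ in body}  # freeze q_S
    abox = chase(frozen, tgds)
    return tuple(head_vars) in eval_cq(q_O[1], q_O[0], abox)

tgds = [(("s1", ("u",)), [("A", ("u",))]),
        (("s2", ("u",)), [("A", ("u",))])]
q_S = (("x",), [("s1", ("x",))])
assert check_complete(q_S, (("x",), [("A", ("x",))]), tgds)
assert not check_complete(q_S, (("x",), [("B", ("x",))]), tgds)
```

The two assertions replay Example 2: A(x) is recognized as complete for q_S, while a query over an unmapped concept is not.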
The next theorem establishes the correctness of the above algorithm.
Theorem 4.1
CheckComplete(J, q_S, q_O) terminates, and returns true if and only if q_O is a complete s-to-t rewriting of q_S w.r.t. J.
Proof (Sketch). Termination of the algorithm easily follows from the termination of the chase procedure, and from the obvious termination of computing the certain answers of a CQ over ⟨O, A⟩.
For the "⇐" direction, suppose that the algorithm returns false, i.e., the chase of D = D_{q_S} does not fail, ⟨O, A⟩ is satisfiable, and t̄_{q_S} ∉ cert(q_O, ⟨O, A⟩). Now, if we consider the freezing of the chased instance (i.e., its variables are now considered as constants), it is easy to see that we obtain a valuation v of D such that Mod_{v(D)}(J) ≠ ∅, and such that v(t̄_{q_S}) ∉ cert(q_O, ⟨O, A⟩). Moreover, the fact that v(t̄_{q_S}) ∉ cert(q_O, ⟨O, A⟩) implies, by the property of certain answers, that there is at least one model I of ⟨O, A⟩, and hence I ∈ Mod_{v(D)}(J) (because ⟨v(D), I⟩ satisfies M), such that v(t̄_{q_S}) ∉ q_O^I. It follows, from Lemma 1, that q_O is not a complete s-to-t rewriting of q_S w.r.t. J.
For the "⇒" direction, in the case where the chase of D fails or ⟨O, A⟩ is unsatisfiable, it is easy to see that, for every valuation v of D, either the chase of v(D) fails, or every ABox A' such that ⟨v(D), A'⟩ satisfies M is such that ⟨O, A'⟩ is unsatisfiable, implying that, for every valuation v of D, Mod_{v(D)}(J) = ∅. It follows, from Lemma 1, that in this case q_O is a complete s-to-t rewriting of q_S w.r.t. J. In the case where t̄_{q_S} ∈ cert(q_O, ⟨O, A⟩), it is easy to see that, for every valuation v of D, either Mod_{v(D)}(J) = ∅, or, if we compute the ABox A_{v(D)}, we have that v(t̄_{q_S}) ∈ cert(q_O, ⟨O, A_{v(D)}⟩). More generally, every ABox A' obtained by chasing v(D) w.r.t. M and the egds, and then choosing arbitrary constants for the possibly remaining variables, is such that v(t̄_{q_S}) ∈ cert(q_O, ⟨O, A'⟩). Hence, for every model I such that I ∈ Mod(⟨O, A'⟩), we have that v(t̄_{q_S}) ∈ q_O^I. Also, we observe that the set of models Mod_{v(D)}(J) coincides with the set of all models I ∈ Mod(⟨O, A'⟩) for all the possible ABoxes A' obtained using the above procedure. It follows that, for every possible valuation v of D and for every possible I ∈ Mod_{v(D)}(J), we have that v(t̄_{q_S}) ∈ q_O^I, implying, from Lemma 1, that also in this case q_O is a complete s-to-t rewriting of q_S w.r.t. J. ∎
As for the complexity of the algorithm, we observe: (i) it runs in PTime in the size of q_S. Indeed, computing D_{q_S} (the instance associated to the query q_S) can be done in linear time, and chasing an instance in the presence of a weakly acyclic set of tgds (as in our case) is PTime in the size of D_{q_S} (J and q_O are considered fixed); (ii) it runs in PTime in the size of O. Indeed, the ABox and the certain answers of q_O can both be computed in PTime in the size of O; (iii) it runs in ExpTime in the size of M. This can be seen from the obvious ExpTime process of transferring data from D_{q_S} to the target via M; (iv) the problem is NP-complete in the size of q_O, because computing the certain answers of a UCQ is NP-complete in the size of the query (query complexity).
5 Finding optimal complete s-to-t rewritings
In this section we study the problem of finding optimal complete s-to-t rewritings. The first question to ask is which rewriting to choose in the case where several complete rewritings exist. The obvious choice is to define the notion of "optimal" complete s-to-t rewriting: such a rewriting q_O is optimal if there is no complete s-to-t rewriting that is properly contained in q_O. In order to formalize this notion, we introduce the following definitions (where Mod(O) denotes the set of models of O).
Definition 3
q_O' is contained in q_O with respect to O, denoted q_O' ⊑_O q_O, if for every model I ∈ Mod(O) we have that q_O'^I ⊆ q_O^I. q_O' is properly contained in q_O with respect to O, denoted q_O' ⊏_O q_O, if q_O' ⊑_O q_O and for at least one model I ∈ Mod(O) we have that q_O'^I ⊊ q_O^I.
Definition 4
q_O is an optimal complete s-to-t rewriting of q_S with respect to J, if q_O is a complete s-to-t rewriting of q_S with respect to J, and there exists no query q_O' such that q_O' is a complete s-to-t rewriting of q_S with respect to J and q_O' ⊏_O q_O.
We are now ready to present an algorithm for computing an optimal complete s-to-t rewriting of a query over the source schema.
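The core idea of FindOptimalComplete, as described in the correctness proof below, is to freeze the body of q_S, chase it through the mapping, and read the chased target instance back as a CQ over the ontology. A minimal sketch for the special case of an empty TBox follows; the encodings are illustrative assumptions, and the branch handling a failing chase is omitted.

```python
# FindOptimalComplete, sketched for an empty TBox: the optimal complete
# rewriting is the CQ associated to the chase of the frozen body of q_S.
# Encodings of queries and tgds are illustrative (single-atom tgd bodies).
from itertools import count

_fresh = count()

def chase(instance, tgds):
    out = set()
    for (b_pred, b_vars), head in tgds:
        for pred, args in instance:
            if pred == b_pred and len(args) == len(b_vars):
                env = dict(zip(b_vars, args))
                for h_pred, h_vars in head:
                    for v in h_vars:
                        if v not in env:
                            env[v] = "z%d" % next(_fresh)  # fresh null
                    out.add((h_pred, tuple(env[v] for v in h_vars)))
    return out

def find_optimal_complete(q_S, tgds):
    head_vars, body = q_S
    frozen = {(pred, tuple(vars_)) for pred, vars_ in body}
    abox = sorted(chase(frozen, tgds))
    body_str = ", ".join("%s(%s)" % (p, ", ".join(args)) for p, args in abox)
    return "q(%s) :- %s" % (", ".join(head_vars), body_str)

tgds = [(("s1", ("u",)), [("A", ("u",))]),
        (("s2", ("u",)), [("A", ("u",))])]
rewriting = find_optimal_complete((("x",), [("s1", ("x",))]), tgds)
# rewriting is the ontology-level CQ "q(x) :- A(x)"
```

On the data of Example 2 this returns the query over A, matching the optimal complete rewriting discussed there.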
For the termination and the complexity of this algorithm, the same considerations hold as for the CheckComplete algorithm. In particular, FindOptimalComplete(J, q_S) terminates, and it runs in (i) PTime in the size of q_S; (ii) PTime in the size of O; (iii) ExpTime in the size of M. The correctness is established by the next theorem.
Theorem 5.1
FindOptimalComplete(J, q_S) returns an optimal complete s-to-t rewriting of q_S w.r.t. J.
Proof (Sketch). When the algorithm returns the query q_⊥, it is easy to see that, regardless of which query q_O we consider, if we run the algorithm CheckComplete(J, q_S, q_O) it returns true (also in this case, either the chase fails, or the ABox produced is unsatisfiable with O), and hence, by Theorem 4.1, every q_O is a complete s-to-t rewriting of q_S w.r.t. J. It follows that also q_⊥ is a complete s-to-t rewriting, and, by definition of such a query, it is an optimal complete s-to-t rewriting of q_S w.r.t. J.
When the algorithm returns the query q_A associated to the computed ABox A (possibly extended, in the case where some terms of t̄_{q_S} do not appear in A), if we run the algorithm CheckComplete(J, q_S, q_A), it computes an ABox for which t̄_{q_S} ∈ cert(q_A, ⟨O, A⟩) holds, because q_A corresponds exactly to A (before being frozen) extended with suitable atoms for all terms in t̄_{q_S} not appearing in A. It follows that, also in this case, CheckComplete(J, q_S, q_A) returns true, implying, from Theorem 4.1, that q_A is a complete s-to-t rewriting of q_S w.r.t. J.
We now prove that the query q_A is also an optimal complete s-to-t rewriting of q_S w.r.t. J. In particular, suppose that there exists a query q_O' such that q_O' ⊏_O q_A, i.e., q_O' ⊑_O q_A, and there are a model I ∈ Mod(O) and a tuple t̄ such that t̄ ∈ q_A^I and t̄ ∉ q_O'^I. The fact that t̄ ∈ q_A^I implies the existence of a valuation μ of all the variables in q_A that makes its body true in I. Note that we can extend μ by assigning a new fresh constant to every variable appearing in D_{q_S} and not appearing in q_A. The valuation obtained is now a valuation for D_{q_S}, and obviously μ(t̄_{q_S}) = t̄, so that t̄ ∈ q_S^{μ(D_{q_S})}. Moreover, if we apply the same valuation to the instance A, it is easy to see that we obtain a ground instance μ(A) such that μ(A) ⊆ I (we recall that q_A is the CQ associated to the instance A). Obviously, ⟨μ(D_{q_S}), μ(A)⟩ satisfies M, and hence ⟨μ(D_{q_S}), I⟩ satisfies M, because the queries in the mapping are monotone queries. Moreover, we also have that I satisfies O (which holds from the initial supposition). Hence, for the source database μ(D_{q_S}) there are a model I ∈ Mod_{μ(D_{q_S})}(J) and a tuple t̄ such that t̄ ∈ q_S^{μ(D_{q_S})} and t̄ ∉ q_O'^I, implying that q_O' is not a complete s-to-t rewriting of q_S w.r.t. J. ∎
It is easy to prove that the query returned by the algorithm is not only an optimal complete s-to-t rewriting of q_S w.r.t. J, but also the unique (up to equivalence) optimal complete s-to-t rewriting of q_S w.r.t. J. Furthermore, the above result implies that an optimal complete s-to-t rewriting of q_S w.r.t. J can always be expressed as a CQ.
6 Conclusion
We have introduced the notion of Ontology-based Open Data Publishing, whose idea is to use an OBDM specification as the basis for carrying out the task of publishing high-quality Open Data.
In this paper, we have focused on the bottom-up approach to ontology-based open data publishing: we have introduced the notion of source-to-target rewriting, and we have developed algorithms for two problems related to complete source-to-target rewritings, namely the recognition and the finding problem. We plan to continue our work in several directions. In particular, we plan to investigate the notion of sound rewriting under the different semantics. Also, we want to study the top-down approach, especially with the goal of devising techniques for deriving which intensional knowledge to associate with datasets in order to document their content in a suitable way.
References
 [1] S. Abiteboul, R. Hull, and V. Vianu. Foundations of Databases. Addison-Wesley, 1995.
 [2] F. N. Afrati and P. G. Kolaitis. Answering aggregate queries in data exchange. In Proc. of PODS, pages 129–138, 2008.
 [3] N. Antonioli, F. Castanò, S. Coletta, S. Grossi, D. Lembo, M. Lenzerini, A. Poggi, E. Virardi, and P. Castracane. Ontology-based data management for the Italian public debt. In Proc. of FOIS, pages 372–385, 2014.
 [4] F. Baader, D. Calvanese, D. McGuinness, D. Nardi, and P. F. Patel-Schneider, editors. The Description Logic Handbook: Theory, Implementation and Applications. Cambridge University Press, 2003.
 [5] D. Calvanese, G. De Giacomo, D. Lembo, M. Lenzerini, A. Poggi, M. Rodríguez-Muro, and R. Rosati. Ontologies and databases: The DL-Lite approach. In Reasoning Web, volume 5689 of LNCS, pages 255–356, 2009.
 [6] D. Calvanese, G. De Giacomo, D. Lembo, M. Lenzerini, and R. Rosati. Tractable reasoning and efficient query answering in description logics: The DL-Lite family. J. of Automated Reasoning, 39(3):385–429, 2007.
 [7] D. Calvanese, G. De Giacomo, D. Lembo, M. Lenzerini, and R. Rosati. Path-based identification constraints in description logics. In Proc. of KR, pages 231–241, 2008.
 [8] D. Calvanese, G. De Giacomo, M. Lenzerini, and M. Y. Vardi. View-based query processing: On the relationship between rewriting, answering and losslessness. In Proc. of ICDT, volume 3363 of LNCS, pages 321–336, 2005.
 [9] A. K. Chandra and P. M. Merlin. Optimal implementation of conjunctive queries in relational data bases. In Proc. of STOC, pages 77–90, 1977.
 [10] R. Fagin, P. G. Kolaitis, R. J. Miller, and L. Popa. Data exchange: Semantics and query answering. In Proc. of ICDT, pages 207–224, 2003.
 [11] R. Fagin, P. G. Kolaitis, and L. Popa. Data exchange: Getting to the core. ACM Trans. Database Syst., 30(1):174–210, 2005.
 [12] A. Hernich. Answering non-monotonic queries in relational data exchange. In Proc. of ICDT, pages 143–154, 2010.
 [13] A. Hernich, L. Libkin, and N. Schweikardt. Closed world data exchange. ACM Trans. Database Syst., 36(2):14:1–14:40, 2011.
 [14] T. Imielinski and W. Lipski, Jr. Incomplete information in relational databases. J. ACM, 31(4):761–791, 1984.
 [15] M. Lenzerini. Data integration: A theoretical perspective. In Proc. of PODS, pages 233–246, 2002.
 [16] M. Lenzerini. Ontology-based data management. In Proc. of CIKM, pages 5–6, 2011.
 [17] L. Libkin and C. Sirangelo. Data exchange and schema mappings in open and closed worlds. In Proc. of PODS, pages 139–148, 2008.
 [18] A. Poggi, D. Lembo, D. Calvanese, G. De Giacomo, M. Lenzerini, and R. Rosati. Linking data to ontologies. J. on Data Semantics, X:133–173, 2008.
 [19] M. Y. Vardi. The complexity of relational query languages. In Proc. of STOC, pages 137–146, 1982.
 [20] C. Zaniolo. Database relations with null values. In Proc. of PODS, pages 27–33, 1982.