
Visual Analysis of Ontology Matching Results with the MELT Dashboard

In this demo, we introduce MELT Dashboard, an interactive Web user interface for ontology alignment evaluation which is created with the existing Matching EvaLuation Toolkit (MELT). Compared to existing, static evaluation interfaces in the ontology matching domain, our dashboard allows for interactive self-service analyses such as a drill down into the matcher performance for data type properties or into the performance of matchers within a certain confidence threshold. In addition, the dashboard offers detailed group evaluation capabilities that allow for the application in broad evaluation campaigns such as the Ontology Alignment Evaluation Initiative (OAEI).



1 Introduction

The Matching EvaLuation Toolkit (MELT) [6] is an open (MIT-licensed) Java framework for ontology matcher development, tuning, evaluation, and packaging. It integrates well into the existing ontology alignment evaluation infrastructure used by the community, i.e., SEALS [3, 11] and HOBBIT [8]. While those frameworks offer programmatic tooling to evaluate ontology matching systems, advanced analyses have to be implemented specifically for each use case. Similarly, alignment results are typically presented in the form of static tables that do not allow for exploring the actual data.

2 Related Work

The Alignment API [1] is the most well-known ontology matching framework. It allows developers to implement and evaluate ontology matchers and to render matching results, for example as a LaTeX figure. The Semantic Evaluation at Large Scale (SEALS) framework allows for packaging matching systems and also provides an evaluation runtime that calculates precision, recall, and F-measure. The more recent Holistic Benchmarking of Big Linked Data (HOBBIT) runtime works in a similar fashion. In terms of visualization, Alignment Cubes [7] allow for a fine-grained, interactive visual exploration of alignments. Another framework for working with alignment files is VOAR [10], a Web-based system where users can upload ontologies and alignments which are then rendered.
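The measures computed by such evaluation runtimes can be illustrated with a short, self-contained Java sketch. This is illustrative only: class and method names are ours, not the SEALS or MELT API, and correspondences are encoded as plain strings for brevity.

```java
import java.util.Set;

/** Illustrative sketch (not the SEALS/MELT code): precision, recall, and
 *  F-measure of a system alignment against a reference alignment. */
public class AlignmentMetrics {

    /** Precision: share of system correspondences that are in the reference. */
    public static double precision(Set<String> system, Set<String> reference) {
        if (system.isEmpty()) return 0.0;
        long tp = system.stream().filter(reference::contains).count();
        return (double) tp / system.size();
    }

    /** Recall: share of reference correspondences found by the system. */
    public static double recall(Set<String> system, Set<String> reference) {
        if (reference.isEmpty()) return 0.0;
        long tp = system.stream().filter(reference::contains).count();
        return (double) tp / reference.size();
    }

    /** F1: harmonic mean of precision and recall. */
    public static double f1(double p, double r) {
        return (p + r == 0.0) ? 0.0 : 2 * p * r / (p + r);
    }

    public static void main(String[] args) {
        // Correspondences encoded as "source=target" strings for brevity.
        Set<String> reference = Set.of("a=x", "b=y", "c=z");
        Set<String> system    = Set.of("a=x", "b=y", "d=w");
        double p = precision(system, reference);  // 2 of 3 system matches correct
        double r = recall(system, reference);     // 2 of 3 reference matches found
        System.out.printf("P=%.3f R=%.3f F1=%.3f%n", p, r, f1(p, r));
    }
}
```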

Compared to existing work, MELT Dashboard is the first interactive Web UI for analyzing and comparing multiple matcher evaluation results. The dashboard is particularly helpful for exploring correct and wrong correspondences of matching systems and is, therefore, also suitable for matcher development and debugging.

3 Architecture

The dashboard can be used for matchers that were developed in MELT, but it also allows for the evaluation of external matchers that use the well-known alignment format of the Alignment API. It is implemented in Java and included by default in the MELT 2.0 release, which is available through the Maven Central repository. The DashboardBuilder class is used to generate an HTML page. Without further parameters, a default page is generated that allows for an in-depth analysis. Alternatively, the dashboard builder allows for completely customizing a dashboard before generation, for instance by adding or deleting selection controls and display panes. After generation, the self-contained Web page can be viewed locally in a Web browser or hosted on a server. The page visualization is implemented with dc.js, a JavaScript charting library with crossfilter support. Once generated, the dashboard can also be used by non-technical users to analyze and compare matcher results.

As matching tasks (and the resulting alignment files) can become very large, the dashboard was developed with a focus on performance. For the OAEI 2019 KnowledgeGraph track [4, 5], for instance, more than 200,000 correspondences are rendered and results are recalculated on the fly when the user performs a drill-down selection.
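The on-the-fly recalculation can be sketched with a small, self-contained example. This is hypothetical code, not the dashboard's actual dc.js/crossfilter implementation (which runs client-side in JavaScript): a drill-down selection is modeled as a filter over the correspondence set, and a metric is recomputed over the filtered subset.

```java
import java.util.List;

/** Hypothetical sketch of the dashboard's drill-down idea: filter the
 *  correspondences by the current selection (track, confidence interval)
 *  and recompute a metric on the fly. All names here are illustrative. */
public class DrillDown {

    public record Correspondence(String track, String matcher,
                                 double confidence, String eval) {}

    /** Precision over the current selection: TP / (TP + FP). */
    public static double precision(List<Correspondence> all, String track,
                                   double minConf, double maxConf) {
        long tp = 0, fp = 0;
        for (Correspondence c : all) {
            if (!c.track().equals(track)) continue;                      // track filter
            if (c.confidence() < minConf || c.confidence() > maxConf) continue; // confidence filter
            if (c.eval().equals("TP")) tp++;
            else if (c.eval().equals("FP")) fp++;
        }
        return (tp + fp == 0) ? 0.0 : (double) tp / (tp + fp);
    }

    public static void main(String[] args) {
        List<Correspondence> data = List.of(
            new Correspondence("Anatomy", "A", 0.95, "TP"),
            new Correspondence("Anatomy", "A", 0.40, "FP"),
            new Correspondence("Conference", "A", 0.90, "TP"));
        // Selecting the Anatomy track with confidence in [0.5, 1.0]
        // excludes the low-confidence false positive:
        System.out.println(precision(data, "Anatomy", 0.5, 1.0)); // 1.0
        System.out.println(precision(data, "Anatomy", 0.0, 1.0)); // 0.5
    }
}
```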

4 Use Case and Demonstration

One use case for the framework is OAEI campaigns. The Ontology Alignment Evaluation Initiative has been running evaluation campaigns [2] every year since 2005. Researchers submit generic matching systems for predefined tasks (so-called tracks), and the track organizers post the results of the systems on each track. The results are typically communicated on the OAEI Web page in a static fashion through one or more tables (see, for example, the Anatomy track results page of 2019).

In order to demonstrate the capabilities of the dashboard, we generated pages for the following tracks: Anatomy, Conference, and KnowledgeGraph. We included the first two tracks in one dashboard to show the multi-track capabilities of the toolkit. The KnowledgeGraph dashboard was officially used in the OAEI 2019 campaign and shows that the dashboard can also handle combined schema and instance matching tasks at scale. The code to generate the dashboards is available in the example folder of the MELT project; only a few lines of code are necessary to generate comprehensive evaluation pages.

An annotated screenshot of the controls for the Anatomy/Conference dashboard is depicted in Figure 1. Each numbered element is clickable in order to allow for a sub-selection. For example, when the Conference track is selected in the track/test case control, all elements in the dashboard show the results for this sub-selection. The controls in the given sample dashboard are as follows:

1. selection of the track,

2. selection of the track/test case (the Conference track is selected with all test cases),

3. the confidence interval of the matchers (an interval is selected),

4. the relation (only equivalence for this track),

5. the matching systems,

6. the share of true positives (TP), false positives (FP), and false negatives (FN),

7. the type of the left/right element in each correspondence (e.g., class, object property, datatype property),

8. the share of residual true positives (i.e., non-trivial correspondences not already found by a configurable baseline matcher),

9. the total number of correspondences found per test case, where the performance result of each match (TP/FP/FN) is color-coded, and

10. the color-coded correspondences found per matcher.

Below the controls, the default dashboard shows the performance results per matcher, i.e., micro and macro averages of precision (P), recall (R), and the F-measure (F1), in a table, as well as concrete correspondences in a further table (both are not shown in Figure 1). The data and all controls are updated automatically when a selection is performed. For example, if the Anatomy track is selected (track control) for the matcher Wiktionary [9] (matching systems control), and only false negative correspondences are requested (TP/FP/FN control), the correspondence table will show examples of false negative matches of the Wiktionary matching system on the Anatomy track.
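The difference between the micro and macro averages shown in the result table can be sketched with a self-contained example. This is an illustration of the two averaging schemes, not MELT's actual implementation; the class and record names are ours.

```java
import java.util.List;

/** Illustrative sketch of micro vs. macro averaging over test cases.
 *  Each test case contributes its TP/FP/FN counts. */
public class Averages {

    public record Counts(long tp, long fp, long fn) {}

    /** Micro precision: pool all counts first, then divide.
     *  Large test cases dominate the result. */
    public static double microPrecision(List<Counts> cases) {
        long tp = 0, fp = 0;
        for (Counts c : cases) { tp += c.tp(); fp += c.fp(); }
        return (tp + fp == 0) ? 0.0 : (double) tp / (tp + fp);
    }

    /** Macro precision: average the per-case precisions.
     *  Every test case counts equally. */
    public static double macroPrecision(List<Counts> cases) {
        double sum = 0;
        for (Counts c : cases) {
            sum += (c.tp() + c.fp() == 0) ? 0.0
                 : (double) c.tp() / (c.tp() + c.fp());
        }
        return cases.isEmpty() ? 0.0 : sum / cases.size();
    }

    public static void main(String[] args) {
        // One large and one small test case with different precision:
        List<Counts> cases = List.of(new Counts(90, 10, 5), new Counts(1, 1, 0));
        System.out.println(microPrecision(cases)); // 91/102, dominated by the large case
        System.out.println(macroPrecision(cases)); // (0.9 + 0.5)/2 = 0.7
    }
}
```

Recall and the F-measure are averaged analogously, which is why the two averages can diverge noticeably on tracks with test cases of very different sizes, such as Conference.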

Figure 1: Dashboard for the OAEI Anatomy/Conference Tracks. The numbered controls are clickable to drill down into the data. If clicked, all elements change automatically to reflect the current selection.

5 Conclusion and Future Work

In this paper, we presented the MELT Dashboard, an interactive Web user interface for ontology alignment evaluation. The tool makes it easy to generate dashboards and to use them for a detailed evaluation in a drill-down fashion. With the new functionality, we hope to increase the transparency and understanding of matching systems in the ontology alignment community and to make in-depth evaluation capabilities available to a broader audience without the need to install any software. The first usage in the OAEI 2019 campaign showed that the dashboard can be used for broad evaluation campaigns with multiple matchers on multiple matching tasks. In the future, we plan to extend the interface with further controls, to make it more visually appealing, and to grow its adoption.


  • [1] J. David, J. Euzenat, F. Scharffe, and C. T. dos Santos (2011) The alignment API 4.0. Semantic Web 2 (1), pp. 3–10. Cited by: §2.
  • [2] J. Euzenat, C. Meilicke, H. Stuckenschmidt, P. Shvaiko, and C. T. dos Santos (2011) Ontology alignment evaluation initiative: six years of experience. J. Data Semantics 15, pp. 158–192. Cited by: §4.
  • [3] R. García-Castro, M. Esteban-Gutiérrez, and A. Gómez-Pérez (2010) Towards an infrastructure for the evaluation of semantic technologies. In eChallenges e-2010 Conference, pp. 1–7. Cited by: §1.
  • [4] S. Hertling and H. Paulheim (2018) DBkWik: a consolidated knowledge graph from thousands of wikis. In IEEE International Conference on Big Knowledge, ICBK, pp. 17–24. Cited by: §3.
  • [5] S. Hertling and H. Paulheim (2020) The knowledge graph track at OAEI - gold standards, baselines, and the golden hammer bias. In The Semantic Web - 17th International Conference, ESWC, Note: [to appear] Cited by: §3.
  • [6] S. Hertling, J. Portisch, and H. Paulheim (2019) MELT - matching evaluation toolkit. In Semantic Systems. The Power of AI and Knowledge Graphs - 15th International Conference, SEMANTiCS, pp. 231–245. Cited by: §1.
  • [7] V. Ivanova, B. Bach, E. Pietriga, and P. Lambrix (2017) Alignment cubes: towards interactive visual exploration and evaluation of multiple ontology alignments. In International Semantic Web Conference (ISWC), pp. 400–417. Cited by: §2.
  • [8] A. N. Ngomo and M. Röder (2016) HOBBIT: holistic benchmarking for big linked data. ERCIM News 105. Cited by: §1.
  • [9] J. Portisch, M. Hladik, and H. Paulheim (2019) Wiktionary matcher. In 14th International Workshop on Ontology Matching co-located with the 18th International Semantic Web Conference (ISWC), pp. 181–188. Cited by: §4.
  • [10] B. Severo, C. Trojahn, and R. Vieira (2017) VOAR 3.0 : a configurable environment for manipulating multiple ontology alignments. In International Semantic Web Conference (Posters, Demos & Industry Tracks), CEUR Workshop Proceedings, Vol. 1963. Cited by: §2.
  • [11] S. N. Wrigley, R. García-Castro, and L. Nixon (2012) Semantic evaluation at large scale (SEALS). In 21st international conference companion on World Wide Web - WWW, pp. 299–302. Cited by: §1.