Experiences with Some Benchmarks for Deductive Databases and Implementations of Bottom-Up Evaluation

by Stefan Brass, et al.

OpenRuleBench is a large benchmark suite for rule engines, including deductive databases. We previously proposed a translation of Datalog to C++ based on a method that "pushes" derived tuples immediately to the places where they are used. In this paper, we report performance results for several implementation variants of this method compared with XSB, YAP and DLV. We study only a fraction of the OpenRuleBench problems, but we give a quite detailed analysis of each such task and of the factors that influence performance. The results not only show the potential of our method and implementation approach, but should also be valuable to anybody implementing systems that must execute tasks of the types discussed.




