Stream WatDiv - A Streaming RDF Benchmark

Date

2018-01-18

Authors

Gao, Libo

Advisors

Özsu, Tamer
Golab, Lukasz

Publisher

University of Waterloo

Abstract

Modern applications are required to process streaming data that is semantically tagged. Sometimes static background data interlinked with the streaming data is also needed to answer a query. Streaming RDF processing (SRP) engines have emerged in recent years to meet these requirements. Although most SRP engines adopt the same streaming RDF data model, in which a streaming RDF triple is an RDF triple annotated with a timestamp, there is no standard query language: each engine has its own syntax. In addition, these engines are quite primitive, supporting limited and differing sets of query operations, and they are fragile in the face of complex queries, high stream rates, or large static datasets. All of this makes evaluating SRP engines challenging.

In this work, we show that previous streaming RDF benchmarks do not provide a sufficient workload for understanding an engine's performance: the queries in those workloads are either not executable on existing engines or very limited in number. The goal of this work is to propose a benchmark that provides diversified datasets and workloads. We extend WatDiv to generate streaming data and streaming queries, and propose a new streaming RDF benchmark called Stream WatDiv. WatDiv is an RDF benchmark designed for diversified stress testing of RDF data management engines. It introduces a collection of query features that are used to assess the diversity of datasets and workloads; through careful data schema design and query generation, WatDiv achieves good coverage of the values of these query features. We demonstrate the feasibility of applying the same idea to the streaming RDF domain. The Stream WatDiv benchmark suite contains a data generator that produces scalable streaming and static data, a query generator that produces scalable workloads, and a testbed that monitors the engine's output.

We evaluate two engines, C-SPARQL and CQELS, measuring the correctness of the engines' output, their latency, and their memory consumption. Our findings consist of two parts. First, we validate results about these two engines reported in previous work: (1) CQELS is more robust and efficient than C-SPARQL at processing streaming RDF queries in most cases; (2) increasing the streaming rate and integrating static data significantly degrade C-SPARQL's performance, while CQELS is only marginally affected; and (3) C-SPARQL is more memory-efficient than CQELS. Second, the diversity of Stream WatDiv workloads helps detect engine issues that were not captured before. Queries can be grouped into different types based on their query features, and these types can be used to evaluate specific engine behaviors: (1) the triple pattern count of a query influences C-SPARQL's performance; (2) both C-SPARQL and CQELS show a significant latency increase when a query has larger result cardinality; (3) neither engine is biased toward processing linear, star, or snowflake queries; and (4) CQELS is more efficient at handling queries whose triple patterns vary in selectivity, while C-SPARQL performs better on queries with equally selective triple patterns.
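To make the streaming query model concrete, the following is a minimal sketch of a C-SPARQL continuous query (the stream URI, prefix, and property names are hypothetical, and the exact window syntax varies across engine versions). Conceptually, each element arriving on the stream is an RDF triple annotated with a timestamp; the registered query maintains a window over the stream and re-evaluates its graph pattern as the window slides:

    REGISTER QUERY RecentPurchases AS
    PREFIX ex: <http://example.org/>
    SELECT ?user ?product
    FROM STREAM <http://example.org/purchases> [RANGE 10s STEP 1s]  # 10-second window, sliding every second
    WHERE {
      ?user ex:purchases ?product .  # matched against the timestamped triples in the current window
    }

CQELS expresses the same query differently, embedding the window clause inside the WHERE block rather than in a FROM STREAM clause, which illustrates why a streaming RDF benchmark must generate engine-specific variants of each workload query.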

Keywords

Streaming RDF benchmark
