LDBC SNB Interactive v2 workload implementations

This repository contains reference implementations of the LDBC Social Network Benchmark's Interactive v2 workload. The design and implementation of the workload are described in the TPCTC 2023 paper, "The LDBC Social Network Benchmark Interactive Workload v2: A Transactional Graph Query Benchmark with Deep Delete Operations" by Püroja et al.

To get started with the LDBC SNB benchmarks, check out our introductory presentation: The LDBC Social Network Benchmark (PDF).

Notes

⚠️ Audited runs are currently only possible with the SNB Interactive v1.x workload. The new version of Interactive (with delete operations and larger scale factors) will be released in 2024.

⚠️ Please keep in mind the following when using this repository.

  • The goal of the implementations in this repository is to serve as reference implementations against which other implementations can be cross-validated. Therefore, our primary objective when formulating the queries was readability, not absolute performance.

  • The default workload contains updates which change the state of the database. Therefore, the database needs to be reloaded or restored from a backup before each run. Use the provided scripts/backup-database.sh and scripts/restore-database.sh scripts for this, as sketched below.
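For example, a repeatable benchmarking session can be structured as follows. This is a minimal sketch assuming the PostgreSQL implementation; the backup, restore, and driver scripts live in each system's own directory.

cd postgres
scripts/backup-database.sh    # snapshot the freshly loaded data set
driver/benchmark.sh           # run the workload (mutates the database)
scripts/restore-database.sh   # roll back to the snapshot before the next run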

Implementations

We provide two reference implementations:

  • Neo4j (Cypher) implementation
  • PostgreSQL (SQL) implementation

Additional implementations are also included in this repository.

For detailed instructions, consult the READMEs of the projects.

User's guide

Building the project

This project uses Java 17.

To build the entire project, run:

scripts/build.sh

To build a subset of the projects, e.g. to build the PostgreSQL implementation, run its individual build script:

postgres/scripts/build.sh

Inputs

The benchmark framework relies on inputs produced by the SNB Datagen's new (Spark-based) version: the initial data set, the update streams, and the query substitution parameters.

These can currently be generated with the following commands:

export SF=                          # The scale factor to generate
export LDBC_SNB_DATAGEN_DIR=        # Path to the LDBC SNB Datagen directory
export LDBC_SNB_DATAGEN_MAX_MEM=    # Maximum memory Datagen may use, e.g. 16G
export LDBC_SNB_DRIVER_DIR=         # Path to the LDBC SNB driver directory
export DATA_INPUT_TYPE=parquet
# If using the Dockerized Datagen, also set:
export USE_DATAGEN_DOCKER=true

scripts/generate-all.sh
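For example, a hypothetical SF10 generation run might look as follows (all paths are placeholders; adjust them to your checkouts):

export SF=10
export LDBC_SNB_DATAGEN_DIR=${HOME}/ldbc_snb_datagen_spark
export LDBC_SNB_DATAGEN_MAX_MEM=16G
export LDBC_SNB_DRIVER_DIR=${HOME}/ldbc_snb_interactive_v2_driver
export DATA_INPUT_TYPE=parquet

scripts/generate-all.sh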

Pre-generated data sets

Pre-generated SF1-SF300 data sets are available.

Loading the data

Select the system to be tested, e.g. PostgreSQL. Load the data set as described in the README file of the selected system. For most systems, this involves setting an environment variable to the data set's location and invoking the scripts/load-in-one-step.sh script, as sketched below.
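A minimal sketch for PostgreSQL (the data directory variable below is hypothetical; the actual variable name is given in the system's README):

cd postgres
export POSTGRES_CSV_DIR=/data/social-network-sf10   # hypothetical variable name
scripts/load-in-one-step.sh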

Driver modes

Each implementation can be run in any of the SNB driver's three modes, described below. All runs should be started with the initial data set loaded into the database.

  1. Create validation parameters with the driver/create-validation-parameters.sh script.

    • Inputs:
      • The query substitution parameters are taken from the directory set in the ldbc.snb.interactive.parameters_dir configuration property.
      • The update streams are the files in the inserts and deletes subdirectories of the directory set in the ldbc.snb.interactive.updates_dir configuration property.
      • For this mode, the query frequencies are set to a uniform value of 1 to ensure even test coverage across all query types.
    • Output: The results are stored in the validation parameters file (e.g. validation_params.json) set in the validate_database configuration property.
    • Parallelism: The execution must be single-threaded to ensure a deterministic order of operations.
  2. Validate against existing validation parameters with the driver/validate.sh script.

    • Input:
      • The query substitution parameters are taken from the validation parameters file (e.g. validation_params.json) set in the validate_database configuration property.
      • The update operations are also based on the content of the validation parameters file.
    • Output:
      • The validation either passes or fails.
      • The per-query results of the validation are printed to the console.
      • If the validation fails, the results are saved to the validation_params-failed-expected.json and validation_params-failed-actual.json files.
    • Parallelism: The execution must be single-threaded to ensure a deterministic order of operations.
  3. Run the benchmark with the driver/benchmark.sh script.

    • Inputs:
      • The query substitution parameters are taken from the directory set in the ldbc.snb.interactive.parameters_dir configuration property.
      • The update streams are the files in the inserts and deletes subdirectories of the directory set in the ldbc.snb.interactive.updates_dir configuration property.
      • The goal of the benchmark is to achieve the lowest possible time_compression_ratio value while meeting the 95% on-time requirement (i.e. 95% of the operations can be started within 1 second of their scheduled time). If your benchmark run returns "failed schedule audit", increase the time_compression_ratio value (which slows down the schedule) until the run passes.
      • Set the thread_count property to the size of the thread pool used for read operations (see the configuration sketch after this list).
      • For audited benchmarks, ensure that the warmup and operation_count properties are set so that the warmup and benchmark phases last at least 30 minutes and 2 hours, respectively.
    • Output:
      • Whether the run passed or failed the "schedule audit" (the 95% on-time requirement).
      • The throughput achieved in the run (operations/second).
      • The detailed results of the benchmark are printed to the console and saved in the results/ directory.
    • Parallelism: Multi-threaded execution is recommended to achieve the best result.
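The driver is configured through properties files in each implementation's driver/ directory. The sketch below uses only the properties named above; the file name and all values are illustrative, not prescriptive.

# Illustrative driver configuration; tune the values per the guidance above.
ldbc.snb.interactive.parameters_dir=/data/sf10/parameters
ldbc.snb.interactive.updates_dir=/data/sf10/update-streams
thread_count=8
time_compression_ratio=0.02
warmup=100000
operation_count=10000000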

Developer's guide

To create a new implementation, it is recommended to start from one of the existing ones: the Neo4j implementation for graph database management systems and the PostgreSQL implementation for RDBMSs.

The implementation process looks roughly as follows:

  1. Create a bulk loader which loads the initial data set to the database.
  2. Add the required glue code to the Java driver to allow parameterized execution of queries and operations.
  3. Implement the complex and short read queries (21 in total).
  4. Implement the insert and delete operations (16 in total).
  5. Test the implementation against the reference implementations using various scale factors (see the cross-validation sketch after this list).
  6. Optimize the implementation.
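Cross-validation (step 5) can reuse the driver modes described above. A hypothetical flow against the PostgreSQL reference implementation (directory names are placeholders):

# 1. Produce validation parameters with the reference implementation.
cd postgres
driver/create-validation-parameters.sh
# 2. Point the new implementation's validate_database property at the
#    resulting validation_params.json file.
# 3. Validate the new implementation against it.
cd ../my-system
driver/validate.sh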

Audited runs

Implementations of the Interactive workload can be audited by a certified LDBC auditor. The Auditing Policies chapter of the specification describes the auditing process and the required artifacts.

If you plan to get your system audited, please reach out to the LDBC Board of Directors.