CSB Runner

CSB runner (bm-runner) is a Python command-line application that takes a JSON configuration file as input. The configuration file tells the runner what applications to run, in which environment to run them, and what results to plot.

The subject benchmarks/applications can be either CSB builtin benchmarks (see bench), which are mostly auto-generated by bm-generator, or external benchmarks (see Running external benchmarks).

Setting up the environment

The following setup instructions were tested on openEuler 22.03 (LTS-SP4) using Docker 18.09.

To start, clone the framework source code and check out its dependency submodules:

$ git clone https://github.com/open-s4c/CSB.git
$ cd CSB
$ git submodule update --init --recursive

The framework requires a reasonably recent Python version: at least 3.10, for its typing features. Please make sure that one is available:

$ python --version || python3 --version

As openEuler 22.03 has only Python 3.9 in its repositories, it may be necessary to build a newer Python version from source.
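As a sketch, building a newer Python from source can look roughly like the following (the exact version number is an assumption; you may also need build dependencies such as openssl-devel and libffi-devel):

```shell
# Build Python 3.10 from source (version number is an assumption; pick a current 3.10.x)
PY_VER=3.10.14
curl -LO "https://www.python.org/ftp/python/${PY_VER}/Python-${PY_VER}.tgz"
tar xf "Python-${PY_VER}.tgz"
cd "Python-${PY_VER}"
./configure --enable-optimizations
make -j"$(nproc)"
# altinstall avoids overwriting the distribution's /usr/bin/python3
sudo make altinstall
python3.10 --version
```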

Install other system-level dependencies:

$ sudo dnf install cmake perf sysstat sudo gcc moby-engine moby-client hiredis-devel jq cargo glibc-all-langpacks

Please make sure that the user executing the benchmarks has permission to manage Docker containers, and that Docker can reach your proxy, if one is required. After this, you should be able to run the benchmark suite.
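A minimal sketch of both steps (the proxy URL is a placeholder assumption; the `docker` group and the systemd drop-in are the standard Docker mechanisms, but verify them for your distribution):

```shell
# Let the current user manage Docker containers without sudo
sudo usermod -aG docker "$USER"
# Group membership takes effect after re-login (or run: newgrp docker)

# If Docker must go through a proxy, configure the daemon via a systemd drop-in
# (proxy.example.com:8080 is a placeholder; substitute your proxy)
sudo mkdir -p /etc/systemd/system/docker.service.d
printf '[Service]\nEnvironment="HTTPS_PROXY=http://proxy.example.com:8080"\n' |
  sudo tee /etc/systemd/system/docker.service.d/proxy.conf
sudo systemctl daemon-reload
sudo systemctl restart docker

# Verify: this should now work without sudo
docker ps
```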

Running Benchmarks

The entry point to all of the CSB framework's functionality is the run.sh script. To start a new benchmarking run, launch run.sh without any arguments and select one of the benchmarking configurations from the list.

For example, you can choose bm_empty.json:

$ ./run.sh
[run.sh] benchmark environment already configured
[run.sh] Running benchmarks
The following benchmarks are available, select one to run
 1) ../config/bm_empty.json
 2) ../config/bm_mix_external.json
 3) ../config/bm_server_redis.json
 ...
#? 1

Behind the scenes, the CSB framework will automate the following actions:

  • set up the Python virtualenv;
  • collect important system configuration parameters;
  • run the benchmarking workload;
  • plot the results.

Configuring the benchmarking workload

The unit of execution for the CSB framework is a benchmark, which is configured with the JSON files available in the config/ directory.

$ cd config
$ ls *.json -1
bm_empty.json
bm_mix_external.json
bm_server_redis.json
...

To better understand what the JSON configuration controls, refer to config.
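As an illustration only, a configuration fragment might look roughly like the sketch below. The field names are assumptions based on the fields mentioned in the Redis benchmark instructions in this document; refer to config for the authoritative schema.

```json
{
  "nics": {
    "nic_format": "enp1s0v{i}",
    "ips": ["10.10.10.10", "10.10.10.11"],
    "netmask": "255.255.255.0"
  },
  "benchmark_config": {
    "monitors": {
      "sar_net": "enp1s0v0"
    }
  }
}
```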

Benchmarking a Redis server-like workload

The Redis-like benchmark bench/targets/bm_server_redis.h is the only manually created benchmark in CSB. It calls all the syscalls that redis-server calls, and in that respect simulates the behavior of a Redis server.

To benchmark the Redis-like server workload, one should adjust the configuration for the specific network environment.

This benchmark is based on a number of assumptions:

  • The System Under Test (SUT) server is running the CSB framework to perform the benchmarking.
  • The SUT is running redis-like instances in containers only, with dedicated Virtual Functions (VFs) of a NIC assigned to each redis-like container.
    • Note: the section below includes instructions on how to configure VFs on openEuler.
  • There is a pool of free IP addresses in the subnet that can be assigned to the spawned containers.
  • There is a certain number of benchmarking servers (more than one) in the same subnet to run the instances of the load generator.
  • One of the benchmarking servers is dedicated to generating the workload for the "golden" container that is used as the measure of system performance, while all other servers generate the load for all other containers. This assumption is necessary to isolate the effect of contention that arises on the non-dedicated benchmarking servers when the total number of containers is high.

The benchmark relies on a few configurable files:

  • config/bm_server_redis.json to drive the SUT setup;
  • scripts/plugins/launch-clients.sh to start the load generators on a remote machine;
  • scripts/plugins/cleanup-clients.sh to stop the load generators on remote machines;
  • Optionally, you can copy these three files to new ones with a suffix identifying the environment, e.g. config/bm_server_redis_<mycase>.json, etc., for a <mycase> environment. You can use config/bm_server_redis_*.json as a reference for configuring the benchmark for one networking configuration. In the instructions below, config/bm_server_redis_<mycase>.json, scripts/launch-clients_<mycase>.sh, and scripts/cleanup-clients_<mycase>.sh refer to the filenames after this copy (if no copy was made, use the original filenames).

To run the benchmark, the following configuration changes are expected on the benchmarking servers (i.e., the client side):

  • Make the benchmarking servers reachable over ssh from the SUT without a password, and install redis-benchmark on each server. Example ~/.ssh/config file for the SUT:
    Host 10.10.10.2 # benchmarking server 1
       User root
       IdentityFile ~/.ssh/benchserver
    
    • On the server side, the public key of this identity file (e.g., ~/.ssh/benchserver.pub for ~/.ssh/benchserver) needs to be appended to ~/.ssh/authorized_keys. On openEuler, redis-benchmark is available as part of the redis package:
    $ sudo dnf install redis
    
  • Specify the IPs of the benchmarking servers in scripts/launch-clients_<mycase>.sh (OBSERVATION_CLIENT and CLIENT variables) and scripts/cleanup-clients_<mycase>.sh (CLIENT variable).
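The passwordless-SSH setup above can be sketched as follows (the key path and server IP follow the example configuration above; adjust them to your environment):

```shell
# On the SUT: generate a dedicated key pair (no passphrase, for automation)
ssh-keygen -t ed25519 -N "" -f ~/.ssh/benchserver
# Install the public key on each benchmarking server
ssh-copy-id -i ~/.ssh/benchserver.pub root@10.10.10.2
# Verify that passwordless login works
ssh -i ~/.ssh/benchserver root@10.10.10.2 true
```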

The following configuration changes are expected on the SUT for the redis-like server:

  • Make sure that /var/run/netns directory exists:
    $ sudo mkdir -m 777 -p /var/run/netns
    
  • Specify the NIC in the "nics" section of the config in the "nic_format" field, where the format argument {i} is expanded to the index of the container, as well as in the benchmark_config.monitors.sar_net field, for the container with index 0 (the "golden" container).
    • To set up VFs, run the command echo ${number_of_ips} | sudo tee /sys/class/net/$NIC/device/sriov_numvfs, where number_of_ips indicates the number of VFs (and IP addresses) to utilize.
  • Specify the IP address pool both in config/bm_server_redis_<mycase>.json in the nics.ips list, and in scripts/launch-clients_<mycase>.sh in the IPS variable.
  • In case NUMA and IRQ affinities are of importance, adjust the nics.core_affinity_offsets list to set the affinity of the NIC IRQs, and containers.core_affinity_offsets to set the affinity of the redis-like servers. In this case, please also specify the affinities for the golden container in the benchmark_config.monitors field for the mpstat and perf keys.
  • Adjust the network mask used in the containers in the nics.netmask field.
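The SR-IOV part of the steps above can be sketched end to end (the NIC name and VF count are assumptions; substitute your own):

```shell
NIC=enp1s0   # assumption: your SR-IOV-capable NIC
NUM_VFS=4    # one VF (and one IP address) per container
# How many VFs does the NIC support?
cat /sys/class/net/$NIC/device/sriov_totalvfs
# Reset any existing VF configuration, then enable the desired number
echo 0        | sudo tee /sys/class/net/$NIC/device/sriov_numvfs
echo $NUM_VFS | sudo tee /sys/class/net/$NIC/device/sriov_numvfs
# The VFs appear as "vf 0", "vf 1", ... in the link description
ip link show $NIC
```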

After this, you can run the benchmark by selecting the corresponding item in the run.sh menu (see Running Benchmarks).

Running external benchmarks

In addition to the builtin/auto-generated benchmarks shipped with CSB, users can add external benchmarks under the following conditions:

  • Users are expected to add an adapter that maps the output of the external benchmark to CSB format (see adapters config).
  • Users should provide a relative path to the CSB project directory where the external benchmarks are located (see applications config).
  • If the external benchmark is installed on the host (in /usr/bin), it can be run in the container, provided the container's OS matches the host's OS.

CSB contains minimal examples for some external benchmarks, such as fio, stress-ng, unixbench, and will-it-scale. Each of these benchmarks has a JSON file under config/ and an adapter under scripts/adapters. For unixbench and will-it-scale, we recommend cloning these repositories into a folder called bm-external inside the CSB directory.
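For example (the upstream repository URLs below are assumptions; use the forks appropriate for your setup):

```shell
# From the CSB project root
mkdir -p bm-external
cd bm-external
git clone https://github.com/kdlucas/byte-unixbench.git
git clone https://github.com/antonblanchard/will-it-scale.git
```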