CSB runner (bm-runner) is a Python command-line application that takes a JSON configuration file as input. The configuration file tells the runner which applications to run, in which environment to run them, and which results to plot.
The subject benchmarks/applications can be either CSB built-in benchmarks (see bench), which are mostly auto-generated by bm-generator, or external benchmarks (see Running external benchmarks).
The following setup instructions were tested on openEuler v22.03 (LTS-SP4) using docker version 18.09.
To start, clone the framework source code and check out its dependency submodules:
$ git clone https://github.com/open-s4c/CSB.git
$ cd CSB
$ git submodule update --init --recursive
The framework requires a reasonably recent Python version: at least 3.10, for its typing features. Please make sure that one is available:
$ python --version || python3 --version
As openEuler 22.03 ships only Python 3.9 in its repositories, it may be necessary to build a newer Python version from source.
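The version check can be scripted. The following sketch (assuming `python3` is on PATH) reports whether a source build is needed:

```shell
# Check whether the available interpreter meets the 3.10 requirement.
required="3.10"
have=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
# sort -V orders dotted version strings; if `required` sorts first, `have` is new enough
if [ "$(printf '%s\n%s\n' "$required" "$have" | sort -V | head -n1)" = "$required" ]; then
    echo "Python $have is sufficient"
else
    echo "Python $have is too old: build 3.10+ from source (use 'make altinstall')"
fi
```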
Install the remaining system-level dependencies:
$ sudo dnf install cmake perf sysstat sudo gcc moby-engine moby-client hiredis-devel jq cargo glibc-all-langpacks
Please make sure that the user executing the benchmarks has permission to manage Docker containers and, if a proxy is in use, that Docker can reach it. After this, you should be able to run the benchmark suite.
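For example, on a systemd-based distribution such as openEuler, the following sketch grants the benchmarking user container-management rights and points the Docker (moby) daemon at a proxy. The `docker` group name is the moby default, and the proxy URL is a placeholder; adjust both for your site:

```shell
# Allow the current user to manage containers without sudo
sudo usermod -aG docker "$USER"   # re-login (or run `newgrp docker`) to take effect

# Hypothetical proxy setup for the Docker daemon via a systemd drop-in
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
docker ps   # should now work without sudo
```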
The entry point to all of the CSB framework functionality is the run.sh script.
To start a new benchmarking run, launch run.sh without any arguments, and select one of the benchmarking configurations from the list.
You can choose bm_empty.json for example:
$ ./run.sh
[run.sh] benchmark environment already configured
[run.sh] Running benchmarks
The following benchmarks are available, select one to run
1) ../config/bm_empty.json
2) ../config/bm_mix_external.json
3) ../config/bm_server_redis.json
...
#? 1
Behind the scenes, the CSB framework will automate the following actions:
- set up the Python virtualenv;
- collect important system configuration parameters;
- run the benchmarking workload;
- plot the results.
The unit of execution for the CSB framework is a benchmark, which is configured with the JSON files available in the config/ directory.
$ cd config
$ ls *.json -1
bm_empty.json
bm_mix_external.json
bm_server_redis.json
...
To better understand what the JSON configuration controls, refer to config.
The Redis-like benchmark bench/targets/bm_server_redis.h is the only manually written benchmark in CSB.
It issues the same syscalls as redis-server and, in that respect, simulates the behavior of a Redis server.
To benchmark the Redis-like server workload, adjust the configuration to your specific network environment.
This benchmark is based on a number of assumptions:
- The System Under Test (SUT) server is running the CSB framework to perform the benchmarking.
- The SUT is running redis-like instances in containers only, with dedicated Virtual Functions (VFs) of a NIC assigned to each redis-like container.
- Note: the section below includes instructions on how to configure VFs on openEuler.
- There is a pool of free IP addresses in the subnet that can be assigned to the spawned containers.
- There is a certain number of benchmarking servers (more than one) in the same subnet to run the instances of the load generator.
- One of the benchmarking servers is dedicated to generating the workload for the "golden" container, which is used as the measure of system performance, while all other servers generate the load for the remaining containers. This separation isolates the golden container's measurements from the contention that arises on the non-dedicated benchmarking servers when the total number of containers is high.
The benchmark relies on a few configurable files:
- config/bm_server_redis.json to drive the SUT setup;
- scripts/plugins/launch-clients.sh to start the load generators on a remote machine;
- scripts/plugins/cleanup-clients.sh to stop the load generators on remote machines.

Optionally, you can copy these three files to new ones with a suffix identifying the environment, e.g. config/bm_server_redis_<mycase>.json for the <mycase> environment. You can use config/bm_server_redis_*.json as a reference for configuring the benchmark for one networking configuration. In the following instructions, config/bm_server_redis_<mycase>.json, scripts/launch-clients_<mycase>.sh, and scripts/cleanup-clients_<mycase>.sh denote the corresponding filenames after this copy (if no copy was made, use the original filenames).
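The copy step can be sketched as follows (the suffix value is a placeholder for your environment name):

```shell
# Derive per-environment copies of the three configurable files
suffix=mycase
cp config/bm_server_redis.json "config/bm_server_redis_${suffix}.json"
cp scripts/plugins/launch-clients.sh "scripts/launch-clients_${suffix}.sh"
cp scripts/plugins/cleanup-clients.sh "scripts/cleanup-clients_${suffix}.sh"
```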
To run the benchmark, the following configuration changes are expected on the benchmarking servers (i.e., the client side):
- Make the benchmarking servers reachable over ssh from the SUT without a password, and install redis-benchmark on each server. Example ~/.ssh/config file for the SUT:

  Host 10.10.10.2 # benchmarking server 1
      User root
      IdentityFile ~/.ssh/benchserver

  On the server side, the public key of this identity file (e.g., ~/.ssh/benchserver.pub for ~/.ssh/benchserver) needs to be appended to ~/.ssh/authorized_keys. On openEuler, redis-benchmark is available as part of the redis package:

  $ sudo dnf install redis

- Specify the IPs of the benchmarking servers in scripts/launch-clients_<mycase>.sh (OBSERVATION_CLIENT and CLIENT variables) and in scripts/cleanup-clients_<mycase>.sh (CLIENT variable).
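As a hypothetical illustration, the variables in scripts/launch-clients_<mycase>.sh might look like this; the addresses are placeholders, with OBSERVATION_CLIENT naming the server reserved for the golden container and CLIENT the remaining servers:

```shell
# Hypothetical values; substitute the real addresses of your benchmarking servers
OBSERVATION_CLIENT="10.10.10.2"   # drives load for the "golden" container only
CLIENT="10.10.10.3 10.10.10.4"    # drives load for all remaining containers
```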
The following configuration changes are expected on the SUT for the redis-like server:
- Make sure that the /var/run/netns directory exists:

  $ sudo mkdir -m 777 -p /var/run/netns

- Specify the NIC in the "nics" section of the config in the "nic_format" field, where the format argument {i} is expanded to the index of the container, as well as in the benchmark_config→monitors→sar_net field for the container with index 0 (the "golden" container).
- To set up VFs, run the command echo ${number_of_ips} | sudo tee /sys/class/net/$NIC/device/sriov_numvfs, where number_of_ips indicates the number of VFs (and IP addresses) to utilize.
- Specify the IP address pool both in config/bm_server_redis_<mycase>.json in the nics→ips list and in scripts/launch-clients_<mycase>.sh in the IPS variable.
- If NUMA and IRQ affinities are of importance, adjust the nics→core_affinity_offsets list to set the affinity of the NIC IRQs, and containers→core_affinity_offsets to set the affinity of the redis-like servers. In this case, please also specify the affinities for the golden container in the benchmark_config→monitors field for the mpstat and perf keys.
- Adjust the network mask used in the containers in the nics→netmask field.
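Putting the SUT-side fields together, a heavily abridged config/bm_server_redis_<mycase>.json might look like the sketch below. Only the field names mentioned above are taken from the framework; the values (NIC name, addresses, offsets) are placeholders, and the exact nesting may differ, so consult the shipped config/bm_server_redis_*.json files for the authoritative schema:

```json
{
  "nics": {
    "nic_format": "enp1s0v{i}",
    "ips": ["192.168.1.10", "192.168.1.11"],
    "netmask": "255.255.255.0",
    "core_affinity_offsets": [0]
  },
  "containers": {
    "core_affinity_offsets": [4]
  },
  "benchmark_config": {
    "monitors": {
      "sar_net": "enp1s0v0"
    }
  }
}
```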
After this, you can run the benchmark by selecting the corresponding item in the run.sh menu (see Running Benchmarks).
In addition to the built-in, auto-generated benchmarks shipped with CSB, users can add external benchmarks under the following conditions:
- Users are expected to add an adapter that maps the output of the external benchmark to CSB format (see adapters config).
- Users should provide a relative path to the CSB project directory where the external benchmarks are located (see applications config).
- If the external benchmark is installed on the host (in /usr/bin), it can be run in the container, provided the container's OS matches the host's OS.
CSB contains minimal examples for some external benchmarks such as fio, stress-ng, unixbench, and will-it-scale.
Each of these benchmarks has a JSON file under config/ and an adapter under scripts/adapters.
For unixbench and will-it-scale, we recommend cloning the repositories into a folder called bm-external inside the CSB directory.
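For instance, the clone step might look like this; the URLs shown are the commonly used upstream repositories for these benchmarks, so verify them before use:

```shell
# Place external benchmark sources where CSB expects to find them
mkdir -p bm-external
cd bm-external
git clone https://github.com/kdlucas/byte-unixbench.git
git clone https://github.com/antonblanchard/will-it-scale.git
```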