versioned_docs/version-3.9/scalardl-benchmarks/README.mdx (14 additions, 13 deletions)
@@ -168,6 +168,8 @@ You can run the benchmark several times by using the `--except-pre` option after
## Common parameters

+The following parameters are common to all workloads.
+
### `concurrency`

- **Description:** Number of worker threads that concurrently execute benchmark transactions against the database. This parameter controls the level of parallelism during the actual benchmark execution phase. Increasing this value simulates more concurrent client accesses and higher workload intensity.
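For context, `concurrency` and the other parameters in this section are set in the benchmark configuration file (TOML in the sketch below). This is an illustrative sketch only: the `[common]` section name and the `run_for_sec` and `ramp_for_sec` keys are assumptions, so refer to the sample configuration files shipped with the benchmark tools for the exact layout.

```toml
# Illustrative sketch only; section and key names other than `concurrency`
# may differ from the actual sample configs.
[common]
concurrency = 8      # worker threads issuing benchmark transactions concurrently
run_for_sec = 300    # assumed key: length of the measurement phase in seconds
ramp_for_sec = 60    # assumed key: warm-up time excluded from the measurement
```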
@@ -189,65 +191,64 @@ Select a workload to see its available parameters.
- **Description:** Number of bank accounts to create for the benchmark workload. This parameter determines the size of the dataset and affects the working-set size.
- **Default value:** `100000`

-### `load_concurrency`
+<h3>`load_concurrency`</h3>

- **Description:** Number of parallel threads used to load initial benchmark data into the database. This parameter controls how fast the data-loading phase completes. Increasing this value can significantly reduce data-loading time for large datasets. This is separate from the `concurrency` parameter used during benchmark execution.
- **Default value:** `1`

-### `load_batch_size`
+<h3>`load_batch_size`</h3>

- **Description:** Number of accounts to insert within a single transaction during the initial data-loading phase. Larger batch sizes can improve loading performance by reducing the number of transactions, but may increase the execution time of each transaction.
- **Default value:** `1`

</TabItem>
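A hypothetical TOML snippet showing how the bank-account workload parameters above might be grouped. Only the key names come from the parameter list; the `[tpca_config]` section name is an assumption, so check the sample configuration file for this workload before using it.

```toml
# Hypothetical sketch; only the key names come from the parameter list above.
[tpca_config]
num_accounts = 100000   # dataset size (default 100000)
load_concurrency = 4    # loading threads; independent of [common] concurrency
load_batch_size = 100   # accounts inserted per loading transaction
```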

<TabItem value="TPC-C" label="TPC-C">

-### `num_warehouses`
+<h3>`num_warehouses`</h3>

- **Description:** Number of warehouses to create for the TPC-C benchmark workload. This value is the scale factor that determines the dataset size. Increasing this value creates a larger working set and supports testing at various enterprise scales.
- **Default value:** `1`

-### `rate_payment`
+<h3>`rate_payment`</h3>

- **Description:** Percentage of Payment transactions in the transaction mix, with the remainder being New-Order transactions. For example, a value of `50` means 50% of transactions will be Payment transactions and 50% will be New-Order transactions.
- **Default value:** `50`

-### `load_concurrency`
+<h3>`load_concurrency`</h3>

- **Description:** Number of parallel threads used to load initial benchmark data into the database. This parameter controls how fast the data-loading phase completes. Increasing this value can significantly reduce data-loading time, especially for larger numbers of warehouses. This is separate from the `concurrency` parameter used during benchmark execution.
- **Default value:** `1`

</TabItem>
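Likewise, a hypothetical sketch of the TPC-C parameters above in TOML form; the `[tpcc_config]` section name is assumed.

```toml
# Hypothetical sketch; key names come from the parameter list above.
[tpcc_config]
num_warehouses = 10   # scale factor: larger values create a larger working set
rate_payment = 50     # 50% Payment transactions, 50% New-Order transactions
load_concurrency = 8  # loading threads; independent of [common] concurrency
```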

<TabItem value="YCSB" label="YCSB">

-### `record_count`
+<h3>`record_count`</h3>

- **Description:** Number of records to create for the YCSB benchmark workload. This parameter determines the size of the dataset and affects the working-set size during benchmark execution.
- **Default value:** `1000`

-### `payload_size`
+<h3>`payload_size`</h3>

- **Description:** Size of the payload data (in bytes) for each record. This parameter controls the amount of data stored per record and affects database storage, memory usage, and I/O characteristics.
- **Default value:** `1000`

-### `ops_per_tx`
+<h3>`ops_per_tx`</h3>

- **Description:** Number of read or write operations to execute within a single transaction. This parameter affects transaction size and execution time. Higher values create longer-running transactions.
- **Default value:** `2`

-### `workload`
+<h3>`workload`</h3>

- **Description:** YCSB workload type that defines the operation mix: **A** (50% reads, 50% read-modify-write operations), **C** (100% reads), or **F** (100% read-modify-write operations). Note that workload A in this benchmark uses read-modify-write operations instead of pure blind writes because ScalarDL prohibits blind writes. Each workload type simulates different application access patterns.
- **Default value:** `A`

-### `load_concurrency`
+<h3>`load_concurrency`</h3>

- **Description:** Number of parallel threads used to load initial benchmark data into the database. This parameter controls how fast the data-loading phase completes. Increasing this value can significantly reduce data-loading time for large datasets. This is separate from the `concurrency` parameter used during benchmark execution.
- **Default value:** `1`

-### `load_batch_size`
+<h3>`load_batch_size`</h3>

- **Description:** Number of records to insert within a single transaction during the initial data-loading phase. Larger batch sizes can improve loading performance by reducing the number of transactions, but may increase the execution time of each transaction.
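Finally, a hypothetical sketch of the YCSB parameters above; the `[ycsb_config]` section name is assumed, and `workload = "A"` refers to the read-modify-write variant described above.

```toml
# Hypothetical sketch; key names come from the parameter list above.
[ycsb_config]
workload = "A"        # A: 50% reads / 50% read-modify-writes, C: 100% reads, F: 100% read-modify-writes
record_count = 1000   # number of records to load
payload_size = 1000   # bytes of payload stored per record
ops_per_tx = 2        # operations executed in each transaction
load_concurrency = 4  # loading threads; independent of [common] concurrency
load_batch_size = 100 # records inserted per loading transaction
```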