@@ -13,31 +13,34 @@ To get started, let's examine a simple example of a multiplatform benchmark:
 ```kotlin
import kotlinx.benchmark.*

-@BenchmarkMode(Mode.Throughput)
-@OutputTimeUnit(TimeUnit.MILLISECONDS)
-@Warmup(iterations = 20, time = 1, timeUnit = TimeUnit.SECONDS)
-@Measurement(iterations = 20, time = 1, timeUnit = TimeUnit.SECONDS)
-@BenchmarkTimeUnit(TimeUnit.MILLISECONDS)
+@BenchmarkMode(Mode.AverageTime)
+@OutputTimeUnit(BenchmarkTimeUnit.MILLISECONDS)
+@Warmup(iterations = 10, time = 500, timeUnit = BenchmarkTimeUnit.MILLISECONDS)
+@Measurement(iterations = 20, time = 1, timeUnit = BenchmarkTimeUnit.SECONDS)
@State(Scope.Benchmark)
class ExampleBenchmark {

+    // Parameterizes the benchmark to run with different list sizes
    @Param("4", "10")
    var size: Int = 0

    private val list = ArrayList<Int>()

+    // Prepares the test environment before each benchmark run
    @Setup
    fun prepare() {
        for (i in 0..<size) {
            list.add(i)
        }
    }

+    // Cleans up resources after each benchmark run
    @TearDown
    fun cleanup() {
        list.clear()
    }

+    // The actual benchmark method
    @Benchmark
    fun benchmarkMethod(): Int {
        return list.sum()
@@ -55,29 +58,30 @@ The following annotations are available to define and fine-tune your benchmarks.

### @State

-The `@State` annotation is used to mark benchmark classes.
-In Kotlin/JVM, however, benchmark classes are not required to be annotated with `@State`.
+The `@State` annotation specifies the extent to which the state object is shared among the worker threads,
+and it is mandatory for benchmark classes to be marked with this annotation to define their scope of state sharing.

-In Kotlin/JVM, the annotation specifies the extent to which the state object is shared among the worker threads, e.g, `@State(Scope.Group)`.
+Currently, multi-threaded execution of a benchmark method is supported only on the JVM, where you can specify various scopes.
Refer to [JMH documentation of Scope](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Scope.html)
-for details about available scopes. Multi-threaded execution of a benchmark method is not supported in other Kotlin targets,
-thus only `Scope.Benchmark` is available.
+for details about available scopes and their implications.
+In non-JVM targets, only `Scope.Benchmark` is applicable.

+When writing JVM-only benchmarks, benchmark classes are not required to be annotated with `@State`.
Refer to [JMH documentation of @State](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/State.html)
for details about the effect and restrictions of the annotation in Kotlin/JVM.

-In our snippet, the `ExampleBenchmark` class is marked with `@State(Scope.Benchmark)`,
-indicating that the performance of benchmark methods in this class should be measured.
+In our snippet, the `ExampleBenchmark` class is annotated with `@State(Scope.Benchmark)`,
+indicating the state is shared across all worker threads.
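
For JVM-only benchmark sources, a narrower sharing scope can be chosen. The following is a sketch (the class name is illustrative, and it assumes a JVM source set where `Scope` carries the full set of JMH scopes):

```kotlin
import kotlinx.benchmark.*

// JVM-only sketch: with Scope.Thread, every worker thread receives its
// own instance of this state class, so the buffer below is never shared.
@State(Scope.Thread)
class PerThreadState {
    val scratch = StringBuilder()
}
```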

### @Setup

-The `@Setup` annotation is used to mark a method that sets up the necessary preconditions for your benchmark test.
-It serves as a preparatory step where you set up the environment for the benchmark.
+The `@Setup` annotation marks a method that sets up the necessary preconditions for your benchmark test.
+It serves as a preparatory step where you initiate the benchmark environment.

-In Kotlin/JVM, you can specify when the setup method should be executed, e.g, `@Setup(Level.Iteration)`.
+The setup method is executed once before the entire set of iterations for a benchmark method begins.
+In Kotlin/JVM, you can specify when the setup method should be executed, e.g., `@Setup(Level.Iteration)`.
Refer to [JMH documentation of Level](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Level.html)
-for details about available levels. In other targets, it operates always on the `Trial` level, meaning the setup method is
-executed once before the entire set of benchmark method iterations.
+for details about available levels in Kotlin/JVM.

The key point to remember is that the `@Setup` method's execution time is not included in the final benchmark
results - the timer starts only when the `@Benchmark` method begins. This makes `@Setup` an ideal place
@@ -90,28 +94,28 @@ In the provided example, the `@Setup` annotation is used to populate an `ArrayLi

### @TearDown

-The `@TearDown` annotation is used to denote a method that's executed after the benchmarking method(s).
-This method is typically responsible for cleaning up or deallocating any resources or conditions that were initialized in the `@Setup` method.
+The `@TearDown` annotation is used to denote a method that resets and cleans up the benchmarking environment.
+It is chiefly responsible for the cleanup or deallocation of resources and conditions set up in the `@Setup` method.

-In Kotlin/JVM, you can specify when the teardown method should be executed, e.g, `@TearDown(Level.Iteration)`.
+The teardown method is executed once after the entire iteration set of a benchmark method.
+In Kotlin/JVM, you can specify when the teardown method should be executed, e.g., `@TearDown(Level.Iteration)`.
Refer to [JMH documentation of Level](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Level.html)
-for details about available levels. In other targets, it operates always on `Trial` level, meaning the teardown method
-is executed once after the entire set of benchmark method iterations.
+for details about available levels in Kotlin/JVM.

-The `@TearDown` annotation helps you avoid performance bias and ensures the proper maintenance of resources and the
-preparation of a clean environment for the next run. As with the `@Setup` method, the `@TearDown` method's
-execution time is not included in the final benchmark results.
+The `@TearDown` annotation is crucial for avoiding performance bias, ensuring the proper maintenance of resources,
+and preparing a clean environment for the next run. Similar to the `@Setup` method, the execution time of the
+`@TearDown` method is not included in the final benchmark results.

Refer to [JMH documentation of @TearDown](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/TearDown.html)
-for details about the effect and restrictions of the annotation in Kotlin/JVM.
+for more information on the effect and restrictions of the annotation in Kotlin/JVM.

In our example, the `cleanup` function annotated with `@TearDown` is used to clear our `ArrayList`.
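
On the JVM, both hooks can be tightened to run around every iteration rather than once per trial. A sketch (JVM-only; it assumes `Level` is taken from JMH via `org.openjdk.jmh.annotations.Level`, and the class and method names are illustrative):

```kotlin
import kotlinx.benchmark.*
import org.openjdk.jmh.annotations.Level

@State(Scope.Benchmark)
class FreshListBenchmark {
    private var list = ArrayList<Int>()

    // JVM-only: rebuild the list before every measurement iteration,
    // not just once per trial.
    @Setup(Level.Iteration)
    fun fill() {
        list = ArrayList((0 until 1000).toList())
    }

    // JVM-only: drop the data after every iteration.
    @TearDown(Level.Iteration)
    fun drop() {
        list.clear()
    }

    @Benchmark
    fun sum(): Int = list.sum()
}
```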

### @Benchmark

The `@Benchmark` annotation is used to specify the methods that you want to measure the performance of.
It's the actual test you're running. The code you want to benchmark goes inside this method.
-All other annotations are used to control different things in measuring operations of benchmark methods.
+All other annotations are employed to configure the benchmark's environment and execution.

Benchmark methods may include only a single [Blackhole](#blackhole) type as an argument, or have no arguments at all.
It's important to note that in Kotlin/JVM benchmark methods must always be `public`.
@@ -126,30 +130,29 @@ which means the toolkit will measure the performance of the operation of summing
The `@BenchmarkMode` annotation sets the mode of operation for the benchmark.

Applying the `@BenchmarkMode` annotation requires specifying a mode from the `Mode` enum.
-In Kotlin/JVM, the `Mode` enum has several options, including `SingleShotTime`.
-
-Refer to [JMH documentation of Mode](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Mode.html)
-for details about available options. In other targets, only `Throughput` and `AverageTime` are available.
`Mode.Throughput` measures the raw throughput of your code in terms of the number of operations it can perform per unit
of time, such as operations per second. `Mode.AverageTime` is used when you're more interested in the average time it
takes to execute an operation. Without an explicit `@BenchmarkMode` annotation, the toolkit defaults to `Mode.Throughput`.
+In Kotlin/JVM, the `Mode` enum has a few more options, including `SingleShotTime`.
+Refer to [JMH documentation of Mode](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Mode.html)
+for details about available options in Kotlin/JVM.

The annotation is put at the enclosing class and has the effect over all `@Benchmark` methods in the class.
In Kotlin/JVM, it may be put at `@Benchmark` method to have effect on that method only.
Refer to [JMH documentation of @BenchmarkMode](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/BenchmarkMode.html)
for details about the effect of the annotation in Kotlin/JVM.

-In our example, `@BenchmarkMode(Mode.Throughput)` is used, meaning the benchmark focuses on the number of times the
-benchmark method can be executed per unit of time.
+In our example, `@BenchmarkMode(Mode.AverageTime)` is used, indicating that the benchmark aims to measure the
+average execution time of the benchmark method.
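
Since the class-level annotation applies to all benchmark methods while the JVM target also accepts it per method, the two can be combined. A sketch (JVM-only for the per-method override; the names are illustrative):

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
@BenchmarkMode(Mode.Throughput)   // default mode for every method in the class
class MixedModeBenchmark {
    private val data = List(100) { it }

    @Benchmark
    fun throughputOfSum(): Int = data.sum()

    // JVM-only: override the class-level mode for this single method.
    @BenchmarkMode(Mode.AverageTime)
    @Benchmark
    fun averageTimeOfMax(): Int? = data.maxOrNull()
}
```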

### @OutputTimeUnit

The `@OutputTimeUnit` annotation specifies the time unit in which your results will be presented.
This time unit can range from minutes to nanoseconds. If a piece of code executes within a few milliseconds,
-presenting the result in milliseconds or microseconds provides a more accurate and detailed measurement.
-Conversely, for operations with longer execution times, you might choose to display the output in microseconds, seconds, or even minutes.
+presenting the result in nanoseconds or microseconds provides a more accurate and detailed measurement.
+Conversely, for operations with longer execution times, you might choose to display the output in milliseconds, seconds, or even minutes.
Essentially, the `@OutputTimeUnit` annotation enhances the readability and interpretability of benchmark results.
-If this annotation isn't specified, it defaults to using seconds as the time unit.
+By default, if the annotation is not specified, results are presented in seconds.

The annotation is put at the enclosing class and has the effect over all `@Benchmark` methods in the class.
In Kotlin/JVM, it may be put at `@Benchmark` method to have effect on that method only.
@@ -165,27 +168,27 @@ During this warmup phase, the code in your `@Benchmark` method is executed sever
in the final benchmark results. The primary purpose of the warmup phase is to let the system "warm up" and reach its
optimal performance state so that the results of measurement iterations are more stable.

-The annotation is put at the enclosing class and have the effect over all `@Benchmark` methods in the class.
+The annotation is put at the enclosing class and has the effect over all `@Benchmark` methods in the class.
In Kotlin/JVM, it may be put at `@Benchmark` method to have effect on that method only.
Refer to [JMH documentation of @Warmup](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Warmup.html)
for details about the effect of the annotation in Kotlin/JVM.

-In our example, the `@Warmup` annotation is used to allow 20 iterations, each lasting one second,
-of executing the benchmark method before the actual measurement starts.
+In our example, the `@Warmup` annotation is used to allow 10 iterations of executing the benchmark method before
+the actual measurement starts. Each iteration lasts 500 milliseconds.

### @Measurement

The `@Measurement` annotation controls the properties of the actual benchmarking phase.
It sets how many iterations the benchmark method is run and how long each run should last.
The results from these runs are recorded and reported as the final benchmark results.

-The annotation is put at the enclosing class and have the effect over all `@Benchmark` methods in the class.
+The annotation is put at the enclosing class and has the effect over all `@Benchmark` methods in the class.
In Kotlin/JVM, it may be put at `@Benchmark` method to have effect on that method only.
Refer to [JMH documentation of @Measurement](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Measurement.html)
for details about the effect of the annotation in Kotlin/JVM.

-In our example, the `@Measurement` annotation specifies that the benchmark method will be run 20 iterations
-for a duration of one second for the final performance measurement.
+In our example, the `@Measurement` annotation specifies that the benchmark method will run 20 iterations,
+with each iteration lasting one second, for the final performance measurement.
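
Taken together, the two annotations bound the total running time of a benchmark: for instance, 5 warmup iterations of 500 ms plus 10 measured iterations of 1 s come to roughly 12.5 seconds per benchmark and parameter combination. A sketch with illustrative numbers:

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
// ~5 x 500 ms of warmup, discarded from the results...
@Warmup(iterations = 5, time = 500, timeUnit = BenchmarkTimeUnit.MILLISECONDS)
// ...then 10 x 1 s of recorded measurement.
@Measurement(iterations = 10, time = 1, timeUnit = BenchmarkTimeUnit.SECONDS)
class TimedBenchmark {
    @Benchmark
    fun work(): Double = (1..1_000).sumOf { 1.0 / it }
}
```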

### @Param

@@ -213,7 +216,7 @@ for available annotations.

Modern compilers often eliminate computations they find unnecessary, which can distort benchmark results.
In essence, `Blackhole` maintains the integrity of benchmarks by preventing unwanted optimizations such as dead-code
-elimination by the compiler or the runtime virtual machine. A `Blackhole` is used when the benchmark produces several values.
+elimination by the compiler or the runtime virtual machine. A `Blackhole` should be used when the benchmark produces several values.
If the benchmark produces a single value, just return it. It will be implicitly consumed by a `Blackhole`.

### How to Use Blackhole:
@@ -232,5 +235,5 @@ fun iterateBenchmark(bh: Blackhole) {
By consuming results, you signal to the compiler that these computations are significant and shouldn't be optimized away.
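
By contrast, a benchmark that produces exactly one value needs no explicit `Blackhole` at all. A sketch (class and method names are illustrative):

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
class SingleResultBenchmark {
    private val numbers = List(1_000) { it }

    // One produced value: simply return it; the harness implicitly
    // feeds the return value to a Blackhole for you.
    @Benchmark
    fun sum(): Int = numbers.sum()
}
```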

For a deeper dive into `Blackhole` and its nuances in JVM, you can refer to:
-- [Official Javadocs](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.23/org/openjdk/jmh/infra/Blackhole.html)
+- [Official Javadocs](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/infra/Blackhole.html)
- [JMH](https://github.com/openjdk/jmh/blob/1.37/jmh-core/src/main/java/org/openjdk/jmh/infra/Blackhole.java#L157-L254)