Benchmarking
Tutorials
To run the tutorials repository, make sure you have Java 21 and the latest Maven installed (IntelliJ will install Maven for you). All homework templates use Java 21 unless stated otherwise. To run individual examples, follow the README.md for command-line instructions.
In these tutorials you're expected to set up your environment so that you can run Java programs. We recommend IntelliJ IDEA as your IDE. Verify that you can run the examples from the tutorials repository.
Homework
Your task is to compare the time performance of two sorting algorithms - mergesort and quicksort. Specifically, we are interested in their warmed-up performance on random arrays, and whether they differ on smaller versus larger input sizes.
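For orientation, the two algorithms might be sketched as below. This is only an illustrative sketch - the homework skeleton presumably provides the implementations you should benchmark, and the class and method names here (`Sorts`, `mergeSort`, `quickSort`) are our own, not the skeleton's.

```java
import java.util.Arrays;

// Minimal reference sketches of the two algorithms under test.
// Benchmark the skeleton's implementations, not these.
public class Sorts {

    // Mergesort: split, sort both halves recursively, merge.
    // O(n log n) in all cases, but allocates temporary arrays.
    public static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a.clone();
        int mid = a.length / 2;
        int[] left = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        int[] out = new int[a.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length)
            out[k++] = left[i] <= right[j] ? left[i++] : right[j++];
        while (i < left.length) out[k++] = left[i++];
        while (j < right.length) out[k++] = right[j++];
        return out;
    }

    // Quicksort: partition around a pivot, recurse into both sides.
    // Sorts in place (it mutates its argument), O(n log n) expected.
    public static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[lo + (hi - lo) / 2];
        int i = lo, j = hi;
        while (i <= j) {
            while (a[i] < pivot) i++;
            while (a[j] > pivot) j--;
            if (i <= j) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++; j--;
            }
        }
        quickSort(a, lo, j);
        quickSort(a, i, hi);
    }
}
```

Note that quicksort above mutates its input while mergesort returns a fresh array - this asymmetry is exactly why the pitfalls listed below about in-place mutation matter.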
Steps
- Clone the homework skeleton (☰ → New → Project from Version Control).
- Follow the `README.md` to set up and run the project.
- Use the Java Microbenchmark Harness (JMH) to compare the two algorithms. Write your benchmarking code in `SortingBenchmark.java`.
- Make sure to:
  - choose an appropriate number of measurement iterations - enough for the results to visibly stabilize.
  - choose an appropriate number of forks - check whether results vary significantly across forks; if so, increase the count.
  - choose an appropriate number of warmup iterations.
  - parametrize the array size using `@Param` to benchmark on both small and large arrays.
  - apply a dead-code-elimination prevention technique, even if it appears unnecessary on your machine - it may matter on others.
- Avoid:
  - reusing the same array instance across multiple iterations.
  - benchmarking an array that is not randomly arranged.
  - benchmarking a pre-sorted array (this can happen if the sorting algorithm mutates the input in place).
- Create a `conclusion.txt` containing:
  - which algorithm is faster on small and large datasets, and by what factor.
  - your explanation of why one algorithm outperforms the other.
- Zip `SortingBenchmark.java`, `conclusion.txt`, and `sorting_benchmark.json`, and upload the archive to AE. Do not include any other file.
- AE checks that your code compiles, parses your JSON results, and displays them on a dashboard together with your conclusion. Points will be assigned after manual evaluation by the teacher.
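One possible shape for the benchmark class is sketched below. The iteration, fork, and size values are placeholders you should tune as described above, and the calls `Sorts.mergeSort` / `Sorts.quickSort` are assumed names - replace them with whatever API the skeleton actually exposes.

```java
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Benchmark)
@Fork(2)                       // placeholder: increase if forks disagree
@Warmup(iterations = 5)        // placeholder: until timings stabilize
@Measurement(iterations = 10)  // placeholder
public class SortingBenchmark {

    @Param({"100", "100000"})  // small vs. large inputs; adjust as needed
    public int size;

    public int[] data;
    private final Random random = new Random();

    // A fresh, randomly arranged array for every single invocation, so an
    // in-place sort is never handed its own already-sorted output.
    @Setup(Level.Invocation)
    public void fillArray() {
        data = new int[size];
        for (int i = 0; i < size; i++) {
            data[i] = random.nextInt();
        }
    }

    @Benchmark
    public void mergeSort(Blackhole bh) {
        // consuming the result prevents the JIT from eliminating the sort
        bh.consume(Sorts.mergeSort(data)); // assumed method name
    }

    @Benchmark
    public void quickSort(Blackhole bh) {
        Sorts.quickSort(data, 0, data.length - 1); // assumed signature
        bh.consume(data);
    }
}
```

`@Setup(Level.Invocation)` is a straightforward way to satisfy the fresh-array requirements, but the JMH documentation warns that it can perturb measurements of very short methods - for the small array size, sanity-check your numbers against a setup that prepares a batch of arrays per iteration.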
Please submit your actual benchmark results in `sorting_benchmark.json` without modification. We will re-run your benchmarks in Brute after the deadline to verify them.
If AE can't parse your `sorting_benchmark.json` and you believe that it is correct, please let us know.
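If you run the benchmarks from the command line, JMH can write the JSON results file directly via its `-rf` (result format) and `-rff` (result file) flags. A sketch, assuming the build produces a standard JMH uber-jar - the exact jar name and path depend on the skeleton's Maven setup, so check its `README.md`:

```shell
# Build the project, then run all benchmarks and write the results
# as JSON to the file AE expects. The jar path is an assumption.
mvn clean verify
java -jar target/benchmarks.jar -rf json -rff sorting_benchmark.json
```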
Materials
- Tutorial slides
- Lecture slides
- Online JMH tutorials: Jenkov, Mkyong, Baeldung
- Common benchmarking pitfalls (Oracle)
- OpenJDK MicroBenchmarks wiki