Benchmarking

Tutorials

To work with the tutorials repository, make sure you have Java 21 and a recent version of Maven installed (IntelliJ IDEA will install Maven for you). All homework templates use Java 21 unless stated otherwise. To run individual examples, follow the README.md for command-line instructions.

In these tutorials you're expected to set up your environment so that you can run Java programs. We recommend IntelliJ IDEA as your IDE. Verify that you can run the examples from the tutorials repository.

Homework

Your task is to compare the runtime performance of two sorting algorithms - mergesort and quicksort. Specifically, we are interested in their warmed-up (steady-state) performance on random arrays, and in whether their relative performance differs between smaller and larger input sizes.

Steps

  1. Clone the homework skeleton (☰ → New → Project from Version Control).

  2. Follow the README.md to set up and run the project.

  3. Use the Java Microbenchmark Harness (JMH) to compare the two algorithms. Write your benchmarking code in SortingBenchmark.java.

  4. Make sure to:

    • choose an appropriate number of measurement iterations - enough for the results to visibly stabilize.
    • choose an appropriate number of forks - check whether results vary significantly across forks; if so, increase the count.
    • choose an appropriate number of warmup iterations.
    • parametrize the array size using @Param to benchmark on both small and large arrays.
    • apply a dead code elimination prevention technique, even if it appears unnecessary on your machine - it may matter on others (the sketch after this list shows one option).
  5. Avoid:

    • reusing the same array instance across multiple iterations.
    • benchmarking an array that is not randomly arranged.
    • benchmarking a pre-sorted array (this can happen if the sorting algorithm mutates the input in place - the sketch after this list shows one way to avoid it).
  6. Create a conclusion.txt containing:

    • which algorithm is faster on small and large datasets, and by what factor.
    • your explanation of why one algorithm outperforms the other.
  7. Zip SortingBenchmark.java, conclusion.txt, and sorting_benchmark.json, and upload the archive to AE. Do not include any other files.

  8. AE checks that your code compiles, parses your JSON results, and displays them on a dashboard together with your conclusion. Points will be assigned after manual evaluation by the teacher.
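
For orientation, below is a minimal sketch of what SortingBenchmark.java could look like. It is a sketch under assumptions, not the required solution: the MergeSort and QuickSort helper classes are hypothetical stand-ins for whatever the skeleton provides, and the iteration counts, fork count, and array sizes are illustrative starting points that you should tune against the criteria above.

    import java.util.Random;
    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.*;
    import org.openjdk.jmh.infra.Blackhole;

    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    @Warmup(iterations = 5)       // illustrative; raise until warmed-up results stabilize
    @Measurement(iterations = 10) // illustrative; raise until results visibly stabilize
    @Fork(3)                      // illustrative; raise if results vary across forks
    @State(Scope.Thread)
    public class SortingBenchmark {

        // Benchmark both a small and a large input (sizes are illustrative).
        @Param({"1000", "1000000"})
        int size;

        final Random random = new Random(42);
        int[] master;
        int[] data;

        @Setup(Level.Trial)
        public void generateMaster() {
            master = random.ints(size).toArray();
        }

        // Fresh copy before every invocation: the sorts mutate the array in
        // place, so reusing one instance would benchmark a pre-sorted array.
        @Setup(Level.Invocation)
        public void cloneMaster() {
            data = master.clone();
        }

        @Benchmark
        public void mergesort(Blackhole bh) {
            MergeSort.sort(data); // hypothetical helper; adapt to the skeleton
            bh.consume(data);     // prevents dead code elimination
        }

        @Benchmark
        public void quicksort(Blackhole bh) {
            QuickSort.sort(data); // hypothetical helper; adapt to the skeleton
            bh.consume(data);
        }
    }

Returning the sorted array from a @Benchmark method is an equally valid way to defeat dead code elimination, since JMH consumes return values implicitly. Be aware that JMH's documentation warns that Level.Invocation can distort measurements of very short benchmarks; cloning a pre-generated master array keeps the per-invocation setup cheap, and the effect is negligible for array sizes like those above.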

Please submit your actual benchmark results in sorting_benchmark.json without modification. We will re-run your benchmarks in Brute after the deadline to verify them.
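
If you need to produce sorting_benchmark.json yourself, JMH can write it directly. Below is a minimal sketch using JMH's Runner API - the BenchmarkRunner class name is made up here, and the skeleton may already configure this for you:

    import org.openjdk.jmh.results.format.ResultFormatType;
    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.RunnerException;
    import org.openjdk.jmh.runner.options.Options;
    import org.openjdk.jmh.runner.options.OptionsBuilder;

    public class BenchmarkRunner {
        public static void main(String[] args) throws RunnerException {
            Options opt = new OptionsBuilder()
                    .include(SortingBenchmark.class.getSimpleName())
                    .resultFormat(ResultFormatType.JSON)  // machine-readable output
                    .result("sorting_benchmark.json")     // the file AE expects
                    .build();
            new Runner(opt).run();
        }
    }

If you instead run a JMH uber-jar from the command line, the equivalent flags are -rf json -rff sorting_benchmark.json.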

If AE can't parse your sorting_benchmark.json and you believe that it is correct, please let us know.

Materials