Choosing the Right Radix: Measurement or Mathematics?

I recently wrote a post about radix sorting, and found that for large arrays of unsigned integers a handwritten implementation beats Arrays.sort. However, I paid no attention to the choice of radix and used a default of eight bits. It turns out this was a lucky choice: modifying my benchmark to parametrise the radix, I observed maximal throughput at eight bits (one byte), regardless of the size of the array.

Is this an algorithmic or technical phenomenon? Is this something that could have been predicted on the back of an envelope without running a benchmark?

Extended Benchmark Results

Size     Radix  Score       Score Error (99.9%)  Unit
100      2      135.559923  7.72397              ops/ms
100      4      262.854579  37.372678            ops/ms
100      8      345.038234  30.954927            ops/ms
100      16     7.717496    1.144967             ops/ms
1000     2      13.892142   1.522749             ops/ms
1000     4      27.712719   4.057162             ops/ms
1000     8      52.253497   4.761172             ops/ms
1000     16     7.656033    0.499627             ops/ms
10000    2      1.627354    0.186948             ops/ms
10000    4      3.620869    0.029128             ops/ms
10000    8      6.435789    0.610848             ops/ms
10000    16     3.703248    0.45177              ops/ms
100000   2      0.144575    0.014348             ops/ms
100000   4      0.281837    0.025707             ops/ms
100000   8      0.543274    0.031553             ops/ms
100000   16     0.533998    0.126949             ops/ms
1000000  2      0.011293    0.001429             ops/ms
1000000  4      0.021128    0.003137             ops/ms
1000000  8      0.037376    0.005783             ops/ms
1000000  16     0.031053    0.007987             ops/ms

Modelling

To model the execution time of the algorithm, we can write t = f(r, n), where n \in \mathbb{N} is the length of the input array and r \in [1, 32) is the size in bits of the radix. We can then ask whether the model predicts an execution time which is non-monotonic in r, with a minimum (corresponding to maximal throughput), or whether t increases indefinitely as a function of r. If we find a plausible model predicting a minimum then, temporarily treating r as continuous, we can solve \frac{\partial f}{\partial r}\big|_{n=N} = 0 to find the theoretically optimal radix. This presupposes we derive a non-monotonic model.

Constructing a Model

We need to write down an equation before we can do any calculus, which requires two dangerous assumptions.

  1. Each operation has the same cost, making the execution time proportional to the number of operations executed.
  2. The costs of operations do not vary as a function of n or r.

This means all we need to do is find a formula for the number of operations as a function of n and r. The usual pitfall in this approach relates to the first assumption: memory accesses are modelled as having uniform cost, when in reality their cost varies widely, from registers to RAM on another socket. We are about to fall foul of both assumptions in constructing an intuitive model of the algorithm’s main loop.


        while (shift < Integer.SIZE) {
            Arrays.fill(histogram, 0);
            // Loop 1: compute the histogram of the current symbol.
            // Note the unsigned shift: >> would sign-extend the top symbol.
            for (int i = 0; i < data.length; ++i) {
                ++histogram[((data[i] & mask) >>> shift) + 1];
            }
            // Loop 2: accumulate the histogram into start indices.
            for (int i = 0; i < 1 << radix; ++i) {
                histogram[i + 1] += histogram[i];
            }
            // Loop 3: scatter each value to its sorted position in the buffer.
            for (int i = 0; i < data.length; ++i) {
                copy[histogram[(data[i] & mask) >>> shift]++] = data[i];
            }
            // Loop 4: copy the buffer back for the next pass.
            for (int i = 0; i < data.length; ++i) {
                data[i] = copy[i];
            }
            shift += radix;
            mask <<= radix;
        }

The outer loop depends on the choice of radix while the inner loops depend on the size of the array and the choice of radix. There are five obvious aspects to capture:

  • The first inner loop takes time proportional to n
  • The third and fourth inner loops take time proportional to n
  • We can factor the per-element costs of loops 1, 3 and 4 into a constant a
  • The second inner loop takes time proportional to 2^r, modelled by the term b2^r
  • The body of the loop executes 32/r times

This can be summarised as the formula:

f(r, n) = \frac{32}{r}\left(3an + b2^r\right)

The algorithm was claimed to have linear complexity in n, and the model does indeed have only a linear term in n. Good. However, the model is not monotonic in r: the 3an/r term shrinks as r grows, while the b2^r/r term eventually explodes, so there is a minimum between the two regimes, which is encouraging. The problem is where the model puts it. Differentiating and solving (see below) predicts an optimal radix which grows with n, whereas the benchmark shows maximal throughput at eight bits regardless of n. There are clearly much more complicated mechanisms at play than can be captured by counting operations.
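
Working through the calculus, treating r as continuous as proposed above:

\frac{\partial f}{\partial r} = \frac{32\left(rb2^r\ln 2 - (3an + b2^r)\right)}{r^2} = 0 \implies b2^r\left(r\ln 2 - 1\right) = 3an

The left hand side increases with r (for r > 1/\ln 2), so the stationary point moves to larger r as n grows; for large n the optimal radix behaves roughly like \log_2 \frac{3an}{b}. Nothing in the model pins the optimum to eight bits for every n.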

Sorting Unsigned Integers Faster in Java

I discovered a curious resource for audio-visualising sort algorithms, which is exciting for two reasons. The first is that I finally feel like I understand Alexander Scriabin: he was not a composer. He discovered Tim Sort 80 years before Tim Peters and called it Black Mass. (If you aren’t familiar with the piece, fast-forward to 1:40 to hear the congruence.)

The second reason was that I noticed Radix Sort (LSD). While it was an affront to my senses, it used a mere 800 array accesses and no comparisons! I was unaware of this algorithm, so I delved deeper, implemented it for integers, and benchmarked my code against Arrays.sort.

Radix Sort

It is taken as given by many (myself included, or am I just projecting my thoughts on to others?) that O(n \log n) is the best you can do in a sort algorithm. But this is actually only true for sort algorithms which depend on comparison. If you can afford to restrict the data types your sort algorithm supports to types with a positional interpretation (java.util can’t because it needs to be ubiquitous and maintainable), you can get away with a linear time algorithm.
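
The comparison bound follows from a simple counting argument: a comparison sort is a binary decision tree which must distinguish all n! permutations of the input, so its depth is at least

\log_2 n! = \sum_{k=1}^{n} \log_2 k \geq \frac{n}{2}\log_2\frac{n}{2} = \Omega(n \log n)

No such tree constrains an algorithm which never compares elements at all.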

Radix sort, along with the closely related counting sort, does not use comparisons. Instead, the data is interpreted as a fixed length string of symbols. For each position, the cumulative histogram of symbols is computed to calculate sort indices. While the data needs to be scanned several times, the algorithm scales linearly and the overhead of the multiple scans is amortised for large arrays.
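
To make the histogram mechanism concrete, here is one radix sort pass in isolation: a minimal counting sort over byte-sized symbols (my simplified sketch for illustration, not the benchmarked code below).

// Sort values in [0, 256) by counting occurrences: no comparisons anywhere.
static int[] countingSort(int[] symbols) {
    int[] histogram = new int[257];
    for (int symbol : symbols) {
        ++histogram[symbol + 1];          // count each symbol, offset by one
    }
    for (int i = 0; i < 256; ++i) {
        histogram[i + 1] += histogram[i]; // histogram[k] = first index for symbol k
    }
    int[] sorted = new int[symbols.length];
    for (int symbol : symbols) {
        sorted[histogram[symbol]++] = symbol; // place the value, advance the index
    }
    return sorted;
}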

As you can see on Wikipedia, there are two kinds of radix sort: Least Significant Digit and Most Significant Digit. This dichotomy relates to the order the (representational) string of symbols is traversed in. I implemented and benchmarked the LSD version for integers.

Implementation

The implementation interprets an integer as the concatenation of 32/r fixed-size bit string symbols, each r bits wide, where r is the radix; for example, with r = 8 the integer 0xDEADBEEF consists of the byte symbols 0xEF, 0xBE, 0xAD, 0xDE, least significant first. It performs 32/r passes over the array, starting with the least significant symbol, and modifies the array in place. For each pass the data is scanned three times, in order to:

  1. Compute the cumulative histogram over the symbols in their natural sort order
  2. Copy each value with symbol k to the mth position in a buffer, where m is given by the cumulative count of symbols below k, incremented as values are placed
  3. Copy the buffer back into the original array

The implementation, which won’t work unless the radix divides 32 exactly, is below. The bonus (or caveat) is that it automatically sorts integers by their unsigned interpretation. The code could be modified slightly to work with signed integers, at a performance cost; a possible modification is sketched after the listing.


import java.util.Arrays;

public class RadixSort {

    private final int radix;

    public RadixSort() {
        this(Byte.SIZE);
    }

    public RadixSort(int radix) {
        assert 32 % radix == 0;
        this.radix = radix;
    }

    public void sort(int[] data) {
        // one extra slot so counts can be recorded at symbol + 1
        int[] histogram = new int[(1 << radix) + 1];
        int shift = 0;
        int mask = (1 << radix) - 1;
        int[] copy = new int[data.length];
        while (shift < Integer.SIZE) {
            Arrays.fill(histogram, 0);
            // count the occurrences of each symbol (>>> because >> would
            // sign-extend the top symbol and produce negative indices)
            for (int i = 0; i < data.length; ++i) {
                ++histogram[((data[i] & mask) >>> shift) + 1];
            }
            // accumulate the counts into start indices
            for (int i = 0; i < 1 << radix; ++i) {
                histogram[i + 1] += histogram[i];
            }
            // scatter the values into the buffer in symbol order
            for (int i = 0; i < data.length; ++i) {
                copy[histogram[(data[i] & mask) >>> shift]++] = data[i];
            }
            // copy the buffer back for the next pass
            for (int i = 0; i < data.length; ++i) {
                data[i] = copy[i];
            }
            shift += radix;
            mask <<= radix;
        }
    }
}
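
The signed modification mentioned above could be as simple as the following sketch (my illustration, not benchmarked here): flipping the sign bit maps signed order onto unsigned order, at the cost of two extra passes over the array.

    // Possible signed variant: the unsigned order of x ^ Integer.MIN_VALUE
    // matches the signed order of x, so the unsigned sort can be reused.
    public void sortSigned(int[] data) {
        for (int i = 0; i < data.length; ++i) {
            data[i] ^= Integer.MIN_VALUE; // flip the sign bit
        }
        sort(data);                       // unsigned radix sort, as above
        for (int i = 0; i < data.length; ++i) {
            data[i] ^= Integer.MIN_VALUE; // restore the original values
        }
    }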

The time complexity is obviously linear, and although a temporary buffer is allocated, the implementation looks fairly spartan in comparison to Arrays.sort. Instinctively, cache locality looks poor, because the second of the three scans (the scatter into the buffer) jumps all over the place. Will this implementation beat Arrays.sort (for integers)?

Benchmark

The algorithm is measured using arrays of random positive integers, for which both algorithms are equivalent, from a range of sizes. This isn’t always the best idea (the Tim Sort algorithm comes into its own on nearly sorted data), so take the result below with a pinch of salt. Care must be taken to copy the array in the benchmark since both algorithms are in-place.


    public void launchBenchmark(String... jvmArgs) throws Exception {
        Options opt = new OptionsBuilder()
                .include(this.getClass().getName() + ".*")
                .mode(Mode.SampleTime)
                .mode(Mode.Throughput)
                .timeUnit(TimeUnit.MILLISECONDS)
                .measurementTime(TimeValue.seconds(10))
                .warmupIterations(10)
                .measurementIterations(10)
                .forks(1)
                .shouldFailOnError(true)
                .shouldDoGC(true)
                .jvmArgs(jvmArgs)
                .resultFormat(ResultFormatType.CSV)
                .build();
        new Runner(opt).run();
    }

    @Benchmark
    public void Arrays_Sort(Data data, Blackhole bh) {
        int[] array = Arrays.copyOf(data.data, data.size);
        Arrays.sort(array);
        bh.consume(array);
    }

    @Benchmark
    public void Radix_Sort(Data data, Blackhole bh) {
        int[] array = Arrays.copyOf(data.data, data.size);
        data.radixSort.sort(array);
        bh.consume(array);
    }

    @State(Scope.Thread)
    public static class Data {

        @Param({"100", "1000", "10000", "100000", "1000000"})
        int size;

        int[] data;
        RadixSort radixSort = new RadixSort();

        @Setup(Level.Trial)
        public void init() {
            data = createArray(size);
        }
    }

    public static int[] createArray(int size) {
        int[] array = new int[size];
        Random random = new Random(0);
        for (int i = 0; i < size; ++i) {
            // mask the sign bit rather than use Math.abs, which maps
            // Integer.MIN_VALUE to itself, a negative value
            array[i] = random.nextInt() & Integer.MAX_VALUE;
        }
        return array;
    }

Benchmark            Mode    Threads  Samples  Score        Score Error (99.9%)  Unit    Param: size
Arrays_Sort          thrpt   1        10       1304.687189  147.380334           ops/ms  100
Arrays_Sort          thrpt   1        10       78.518664    9.339994             ops/ms  1000
Arrays_Sort          thrpt   1        10       1.700208     0.091836             ops/ms  10000
Arrays_Sort          thrpt   1        10       0.133835     0.007146             ops/ms  100000
Arrays_Sort          thrpt   1        10       0.010560     0.000409             ops/ms  1000000
Radix_Sort           thrpt   1        10       404.807772   24.930898            ops/ms  100
Radix_Sort           thrpt   1        10       51.787409    4.881181             ops/ms  1000
Radix_Sort           thrpt   1        10       6.065590     0.576709             ops/ms  10000
Radix_Sort           thrpt   1        10       0.620338     0.068776             ops/ms  100000
Radix_Sort           thrpt   1        10       0.043098     0.004481             ops/ms  1000000
Arrays_Sort          sample  1        3088586  0.000902     0.000018             ms/op   100
Arrays_Sort·p0.00    sample  1        1        0.000394                          ms/op   100
Arrays_Sort·p0.50    sample  1        1        0.000790                          ms/op   100
Arrays_Sort·p0.90    sample  1        1        0.000791                          ms/op   100
Arrays_Sort·p0.95    sample  1        1        0.001186                          ms/op   100
Arrays_Sort·p0.99    sample  1        1        0.001974                          ms/op   100
Arrays_Sort·p0.999   sample  1        1        0.020128                          ms/op   100
Arrays_Sort·p0.9999  sample  1        1        0.084096                          ms/op   100
Arrays_Sort·p1.00    sample  1        1        4.096000                          ms/op   100
Arrays_Sort          sample  1        2127794  0.011876     0.000037             ms/op   1000
Arrays_Sort·p0.00    sample  1        1        0.007896                          ms/op   1000
Arrays_Sort·p0.50    sample  1        1        0.009872                          ms/op   1000
Arrays_Sort·p0.90    sample  1        1        0.015408                          ms/op   1000
Arrays_Sort·p0.95    sample  1        1        0.024096                          ms/op   1000
Arrays_Sort·p0.99    sample  1        1        0.033920                          ms/op   1000
Arrays_Sort·p0.999   sample  1        1        0.061568                          ms/op   1000
Arrays_Sort·p0.9999  sample  1        1        0.894976                          ms/op   1000
Arrays_Sort·p1.00    sample  1        1        4.448256                          ms/op   1000
Arrays_Sort          sample  1        168991   0.591169     0.001671             ms/op   10000
Arrays_Sort·p0.00    sample  1        1        0.483840                          ms/op   10000
Arrays_Sort·p0.50    sample  1        1        0.563200                          ms/op   10000
Arrays_Sort·p0.90    sample  1        1        0.707584                          ms/op   10000
Arrays_Sort·p0.95    sample  1        1        0.766976                          ms/op   10000
Arrays_Sort·p0.99    sample  1        1        0.942080                          ms/op   10000
Arrays_Sort·p0.999   sample  1        1        2.058273                          ms/op   10000
Arrays_Sort·p0.9999  sample  1        1        7.526102                          ms/op   10000
Arrays_Sort·p1.00    sample  1        1        46.333952                         ms/op   10000
Arrays_Sort          sample  1        13027    7.670135     0.021512             ms/op   100000
Arrays_Sort·p0.00    sample  1        1        6.356992                          ms/op   100000
Arrays_Sort·p0.50    sample  1        1        7.634944                          ms/op   100000
Arrays_Sort·p0.90    sample  1        1        8.454144                          ms/op   100000
Arrays_Sort·p0.95    sample  1        1        8.742502                          ms/op   100000
Arrays_Sort·p0.99    sample  1        1        9.666560                          ms/op   100000
Arrays_Sort·p0.999   sample  1        1        12.916883                         ms/op   100000
Arrays_Sort·p0.9999  sample  1        1        28.037900                         ms/op   100000
Arrays_Sort·p1.00    sample  1        1        28.573696                         ms/op   100000
Arrays_Sort          sample  1        1042     96.278673    0.603645             ms/op   1000000
Arrays_Sort·p0.00    sample  1        1        86.114304                         ms/op   1000000
Arrays_Sort·p0.50    sample  1        1        94.896128                         ms/op   1000000
Arrays_Sort·p0.90    sample  1        1        104.293990                        ms/op   1000000
Arrays_Sort·p0.95    sample  1        1        106.430464                        ms/op   1000000
Arrays_Sort·p0.99    sample  1        1        111.223767                        ms/op   1000000
Arrays_Sort·p0.999   sample  1        1        134.172770                        ms/op   1000000
Arrays_Sort·p0.9999  sample  1        1        134.742016                        ms/op   1000000
Arrays_Sort·p1.00    sample  1        1        134.742016                        ms/op   1000000
Radix_Sort           sample  1        2240042  0.002941     0.000033             ms/op   100
Radix_Sort·p0.00     sample  1        1        0.001578                          ms/op   100
Radix_Sort·p0.50     sample  1        1        0.002368                          ms/op   100
Radix_Sort·p0.90     sample  1        1        0.003556                          ms/op   100
Radix_Sort·p0.95     sample  1        1        0.004344                          ms/op   100
Radix_Sort·p0.99     sample  1        1        0.011056                          ms/op   100
Radix_Sort·p0.999    sample  1        1        0.027232                          ms/op   100
Radix_Sort·p0.9999   sample  1        1        0.731127                          ms/op   100
Radix_Sort·p1.00     sample  1        1        5.660672                          ms/op   100
Radix_Sort           sample  1        2695825  0.018553     0.000038             ms/op   1000
Radix_Sort·p0.00     sample  1        1        0.013424                          ms/op   1000
Radix_Sort·p0.50     sample  1        1        0.016576                          ms/op   1000
Radix_Sort·p0.90     sample  1        1        0.025280                          ms/op   1000
Radix_Sort·p0.95     sample  1        1        0.031200                          ms/op   1000
Radix_Sort·p0.99     sample  1        1        0.050944                          ms/op   1000
Radix_Sort·p0.999    sample  1        1        0.082944                          ms/op   1000
Radix_Sort·p0.9999   sample  1        1        0.830295                          ms/op   1000
Radix_Sort·p1.00     sample  1        1        6.660096                          ms/op   1000
Radix_Sort           sample  1        685589   0.145695     0.000234             ms/op   10000
Radix_Sort·p0.00     sample  1        1        0.112512                          ms/op   10000
Radix_Sort·p0.50     sample  1        1        0.128000                          ms/op   10000
Radix_Sort·p0.90     sample  1        1        0.196608                          ms/op   10000
Radix_Sort·p0.95     sample  1        1        0.225792                          ms/op   10000
Radix_Sort·p0.99     sample  1        1        0.309248                          ms/op   10000
Radix_Sort·p0.999    sample  1        1        0.805888                          ms/op   10000
Radix_Sort·p0.9999   sample  1        1        1.818141                          ms/op   10000
Radix_Sort·p1.00     sample  1        1        14.401536                         ms/op   10000
Radix_Sort           sample  1        60843    1.641961     0.005783             ms/op   100000
Radix_Sort·p0.00     sample  1        1        1.251328                          ms/op   100000
Radix_Sort·p0.50     sample  1        1        1.542144                          ms/op   100000
Radix_Sort·p0.90     sample  1        1        2.002944                          ms/op   100000
Radix_Sort·p0.95     sample  1        1        2.375680                          ms/op   100000
Radix_Sort·p0.99     sample  1        1        3.447030                          ms/op   100000
Radix_Sort·p0.999    sample  1        1        5.719294                          ms/op   100000
Radix_Sort·p0.9999   sample  1        1        8.724165                          ms/op   100000
Radix_Sort·p1.00     sample  1        1        13.074432                         ms/op   100000
Radix_Sort           sample  1        4846     20.640787    0.260926             ms/op   1000000
Radix_Sort·p0.00     sample  1        1        14.893056                         ms/op   1000000
Radix_Sort·p0.50     sample  1        1        18.743296                         ms/op   1000000
Radix_Sort·p0.90     sample  1        1        26.673152                         ms/op   1000000
Radix_Sort·p0.95     sample  1        1        30.724915                         ms/op   1000000
Radix_Sort·p0.99     sample  1        1        40.470446                         ms/op   1000000
Radix_Sort·p0.999    sample  1        1        63.016600                         ms/op   1000000
Radix_Sort·p0.9999   sample  1        1        136.052736                        ms/op   1000000
Radix_Sort·p1.00     sample  1        1        136.052736                        ms/op   1000000

The table tells an interesting story. Arrays.sort is vastly superior for small arrays (the arrays most people have), but for large arrays the custom implementation comes into its own. Interestingly, this is consistent with the theory: an O(n) algorithm must eventually overtake an O(n \log n) one as n grows. If you need to sort large arrays of (unsigned) integers and care about performance, think about implementing radix sort.

Targeting SIMD in Java

Vectorised instruction execution can be targeted directly in C++, but in Java there are extra layers of abstraction to go through. Skeptical of folklore and old wives’ tales, I investigate when SIMD instructions are actually used in Java 9, and how to disable vectorisation by programming badly.

Building a Benchmark Harness

I use JMH to write benchmarks. For the uninitiated, it is the standard Java micro-benchmarking framework: it handles various pitfalls, the most salient being that it ensures your code actually executes and that measurement is performed against a monotonic measure of time. Benchmarks produced without JMH are unlikely to be correct and should arouse suspicion, especially if used during a sales process.

Averages always lie, so minimisation of 99.9th percentile execution time is a better objective to have in mind. As a general performance indicator I measure throughput in operations per time unit, which is useful to correlate with JVM compilation events.

To tune performance, before even worrying about achieving SIMD execution, we need to be aware of recompilation and failures to inline. These diagnostics are enabled by the arguments below, and my simple harness has a specific mode to support them:

-XX:+PrintCompilation 
-XX:+UnlockDiagnosticVMOptions 
-XX:+PrintInlining
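
With a varargs launcher like the benchmark(String...) method shown later, turning the diagnostics on is a one-liner (a hypothetical usage of that harness):

    // run the benchmark with compilation and inlining diagnostics enabled
    benchmark("-XX:+PrintCompilation",
              "-XX:+UnlockDiagnosticVMOptions",
              "-XX:+PrintInlining");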

The proof that code is actually being vectorised is the emission of AVX instructions (and when vectorisation happens, it is correlated with astonishing performance statistics). To see them we need the generated assembly, so I run the benchmark in a mode that prints it via the arguments:

-XX:+UnlockDiagnosticVMOptions
-XX:+PrintAssembly
-XX:PrintAssemblyOptions=hsdis-print-bytes 
-XX:CompileCommand=print

However, SIMD execution only happens when SuperWord parallelism is enabled (and by default, it is) so we won’t even need to look at the assembly unless we see a clear difference when running the benchmark without the UseSuperWord option:

-XX:-UseSuperWord

What Gains Can Be Expected?

Is vectorised code execution a panacea? Assuming we can force the JVM to use SIMD, how much performance can be expected? It turns out that java.util.Arrays.fill can be vectorised, so we can get a taste of the difference it makes. We can observe its impact by benchmarking throughput with and without SuperWord instruction level parallelism.


    private void benchmark(String... jvmArgs) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(this.getClass().getName() + ".*")
                .mode(Mode.Throughput)
                .timeUnit(TimeUnit.MILLISECONDS)
                .warmupIterations(5)
                .measurementIterations(5)
                .forks(1)
                .shouldFailOnError(true)
                .shouldDoGC(true)
                .operationsPerInvocation(1_000_000)
                .jvmArgs(jvmArgs)
                .build();
        new Runner(opt).run();
    }

    @Benchmark
    public void fillArray(Blackhole bh) {
        double[] array = new double[1 << 10];
        for (int i = 0; i < 1_000_000; ++i) {
            Arrays.fill(array, i);
            bh.consume(array);
        }
    }

# VM version: JDK 9-ea, VM 9-ea+166
...
# VM options: -XX:-UseSuperWord
...

Benchmark                 Mode  Cnt    Score     Error   Units
TestBenchmark.fillArray  thrpt    5  966.947 ± 596.705  ops/ms

# VM version: JDK 9-ea, VM 9-ea+166
...
# VM options: -XX:+UseSuperWord
...

Benchmark                 Mode  Cnt     Score      Error   Units
TestBenchmark.fillArray  thrpt    5  4230.802 ± 5311.001  ops/ms

The difference in throughput is palpable and astonishing.

Intrinsics

Various intrinsics, such as Arrays.fill above, compile down to vectorised instructions. When SuperWord parallelism is enabled, they run markedly faster. The JIT compiler will also vectorise simple hand-written loops, but intrinsics are a safer bet.
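
As an illustration (my sketch, not code from the benchmark above), the hand-written equivalent of Arrays.fill is exactly the kind of loop the SuperWord pass can also vectorise: a simple counted loop with no calls, no branches and a loop-invariant value. Relying on the intrinsic just removes the guesswork.

static void manualFill(double[] array, double value) {
    // a plain counted loop over contiguous doubles - the shape SuperWord likes
    for (int i = 0; i < array.length; ++i) {
        array[i] = value;
    }
}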

Adding Arrays Together

Addition of arrays will either use SIMD or not; it depends how you go about writing your code. If your code is too complicated then you will not achieve vectorised execution. I will start from a method which does not vectorise, transform it to one that does by a process of simplification, and then break it again by trying to be too clever. The code which does vectorise is the same naive code I would have written in C++ as a student, ignorant of JVM internals. All of the code is available at github.

To start off with, I created a highly contrived class called BadCode, which is actually inspired, albeit seriously exacerbated, by a generic function in an API I have seen in a professional setting. The so-called object oriented (sic) API seeks to operate on any type of primitive array, so takes the lowest common denominator (java.lang.Object) as its parameters and performs instanceof checks to get the correctly typed array instance. While the code is bloated, this provides as much flexibility as possible, which suits the supplier of the API – which has many clients with disparate use cases – but not necessarily the performance constrained caller.

The code is too bloated to include here but can be seen on github. It has two major performance bugs which will be addressed in turn:

  1. It’s verbose enough not to inline
  2. It has more than one exit point

Because the method supports so many use cases, the code size is very large (2765 bytes). By default, any method with a code size greater than 2000 bytes will fail to inline. Inlining is at the very base of our hierarchy of performance needs. We can see that the JIT compiler has failed to inline the method by printing inlining and compilation:

   2193  715 %     4       com.openkappa.simd.generated.TestBenchmark_badClass_jmhTest::badClass_thrpt_jmhStub @ 13 (55 bytes)
                              @ 19   com.openkappa.simd.TestBenchmark::badClass (9 bytes)   force inline by CompileCommand
                                @ 2   com.openkappa.simd.TestBenchmark$BadClassState::compute (16 bytes)   inline (hot)
                                  @ 12   com.openkappa.simd.BadCode::bigMethod (2765 bytes)   hot method too big

Taking a look at our throughput, we can see there is a problem. The code isn’t getting any faster, so clearly no optimisations are being applied.

# Run progress: 0.00% complete, ETA 00:00:40
# Fork: 1 of 1
# Warmup Iteration   1: 0.193 ops/us
# Warmup Iteration   2: 0.176 ops/us
# Warmup Iteration   3: 0.216 ops/us
# Warmup Iteration   4: 0.183 ops/us
# Warmup Iteration   5: 0.294 ops/us
Iteration   1: 0.260 ops/us
Iteration   2: 0.224 ops/us
Iteration   3: 0.170 ops/us
Iteration   4: 0.169 ops/us
Iteration   5: 0.169 ops/us

Our three nines is terrible, taking up to six microseconds to add two arrays:

TestBenchmark.badClass:badClass·p0.999         sample       6.296           us/op

Quite apart from the large code size, the approach taken in the API enforces multiple exit points in the method, which will always disable SIMD execution. Indeed, running with SuperWord disabled does not worsen performance, implying the code is not being vectorised anyway.

Smaller Code

Having noticed that the method is not even getting inlined, let alone compiled to AVX instructions, we need to make the code smaller first. In this scenario our values are always double[], so the API provider effectively forces us to pay for the disparate use cases they must support, and this taxation without representation harms performance. We can rewrite it to be much smaller, targeting our own specific use case. The code is concise enough to include here, and is the code any student would write to perform element-wise addition of two arrays. Notice the loop condition.


public class SmallerCode {

    public double[] smallMethod(double[] left, double[] right) {
        double[] result = new double[left.length];
        for(int i = 0; i < left.length && i < right.length; ++i) {
            result[i] = left[i] + right[i];
        }
        return result;
    }
}

Let’s benchmark SmallerCode, where the same logic has a code size of just 43 bytes. The method is indeed inlined:

   1374  647       3       com.openkappa.simd.TestBenchmark$SmallerCodeState::compute (16 bytes)   made not entrant
                              @ 12   com.openkappa.simd.SmallerCode::smallMethod (43 bytes)   inline (hot)

Throughput is doubled and we see evidence of dynamic optimisation:

# Run progress: 25.00% complete, ETA 00:02:42
# Fork: 1 of 1
# Warmup Iteration   1: 0.372 ops/us
# Warmup Iteration   2: 0.340 ops/us
# Warmup Iteration   3: 0.432 ops/us
# Warmup Iteration   4: 0.497 ops/us
# Warmup Iteration   5: 0.499 ops/us
Iteration   1: 0.398 ops/us
Iteration   2: 0.364 ops/us
Iteration   3: 0.408 ops/us
Iteration   4: 0.544 ops/us
Iteration   5: 0.401 ops/us

This code is twice as fast, and our three nines is better, just by virtue of keeping the code simple:

TestBenchmark.smallerCode:smallerCode·p0.999             sample       2.164          us/op

But are we getting SIMD execution? Possibly – disabling SuperWord yields noticeably worse results:

# Run progress: 25.00% complete, ETA 00:02:22
# Fork: 1 of 1
# Warmup Iteration   1: 0.261 ops/us
# Warmup Iteration   2: 0.343 ops/us
# Warmup Iteration   3: 0.294 ops/us
# Warmup Iteration   4: 0.320 ops/us
# Warmup Iteration   5: 0.316 ops/us
Iteration   1: 0.293 ops/us
Iteration   2: 0.276 ops/us
Iteration   3: 0.304 ops/us
Iteration   4: 0.291 ops/us
Iteration   5: 0.279 ops/us

It’s worth inspecting the assembly to see if we can observe the emission of AVX instructions once we reenable UseSuperWord. Assembly is easier to read if inlining is disabled, which can be controlled in JMH with the annotation @CompilerControl(CompilerControl.Mode.DONT_INLINE). With the annotation applied, the assembly is printed using the appropriate JVM args:
-XX:+UnlockDiagnosticVMOptions 
-XX:+PrintAssembly
-XX:PrintAssemblyOptions=hsdis-print-bytes 
-XX:CompileCommand=print

0x00000154edb691e0: vmovdqu ymm0,ymmword ptr [rbp+r10*8+10h]

0x00000154edb691e7: vaddpd  ymm0,ymm0,ymmword ptr [rdx+r10*8+10h]

0x00000154edb691ee: vmovdqu ymmword ptr [r8+r10*8+10h],ymm0

0x00000154edb691f5: add     r10d,4h           

0x00000154edb691f9: cmp     r10d,ebx          

0x00000154edb691fc: jl      154edb691e0h      

It’s true – this code is indeed vectorised – see the AVX instructions above: vmovdqu and vaddpd operating on the 256-bit ymm registers!

Ruining It

Having witnessed vectorisation when adding arrays together, let’s try and break it. There are a few patterns we can apply to break our vectorised code (one is sketched after the list):

  • Putting an OR condition as the loop condition
  • Putting a non-inlined method inside the loop
  • Putting an arbitrary method as the loop condition
  • Manually unrolling the loop
  • Using a long as the loop variable
  • Multiple exit points
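
For instance, here is my sketch of the fifth pattern applied to the element-wise addition (an illustration based on the list above, not code from the original benchmark): HotSpot's SuperWord pass operates on int-counted loops, so a long induction variable is enough to defeat it.

public double[] longCounter(double[] left, double[] right) {
    double[] result = new double[left.length];
    // the long induction variable stops this being a vectorisable counted loop
    for (long i = 0; i < left.length && i < right.length; ++i) {
        result[(int) i] = left[(int) i] + right[(int) i];
    }
    return result;
}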

The list goes on, but now let’s really fuck it up. We were using double[], so let’s see what happens if we use a DirectByteBuffer as the backing for a homegrown vector construct instead. Instead of returning a new heap-allocated array, we will write our doubles into a byte buffer, and use an offset and length to delineate the arrays. For instance, the code below will write the sum of two arrays stored in a byte buffer back into the same byte buffer. We can abstract vectors by creating small on-heap objects for each array which know the offset and length of each array in the buffer.


public int add(ByteBuffer byteBuffer,
               int leftOffset, int leftLength,
               int rightOffset, int rightLength,
               int resultOffset) {
    int resultIndex = resultOffset;
    for (int l = leftOffset, r = rightOffset;
         l < leftOffset + leftLength && r < rightOffset + rightLength;
         l += 8, r += 8, resultIndex += 8) {
        // absolute reads and writes, eight bytes (one double) at a time
        byteBuffer.putDouble(resultIndex, byteBuffer.getDouble(l) + byteBuffer.getDouble(r));
    }
    return resultIndex;
}
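
The small on-heap wrapper objects mentioned above might look like this hypothetical holder (my illustration; nothing like it appears in the benchmark):

final class BufferVector {
    final int offset; // byte offset of the first element in the shared buffer
    final int length; // length in bytes, eight per double

    BufferVector(int offset, int length) {
        this.offset = offset;
        this.length = length;
    }
}

A call then reads add(buffer, left.offset, left.length, right.offset, right.length, result.offset).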

Is this clever? We’ve beaten the garbage collector, and it certainly feels like a clever engineering story, but no: performance-wise, we are back to where we were with the remarkably convoluted and conflated catch-all function. Entertainingly, had we only ever benchmarked against BadCode::bigMethod, we might not have noticed this performance regression.

# Run progress: 0.00% complete, ETA 00:00:40
# Fork: 1 of 1
# Warmup Iteration   1: 0.156 ops/us
# Warmup Iteration   2: 0.160 ops/us
# Warmup Iteration   3: 0.198 ops/us
# Warmup Iteration   4: 0.190 ops/us
# Warmup Iteration   5: 0.272 ops/us
Iteration   1: 0.220 ops/us
Iteration   2: 0.242 ops/us
Iteration   3: 0.216 ops/us
Iteration   4: 0.248 ops/us
Iteration   5: 0.351 ops/us

TestBenchmark.byteBuffer:byteBuffer·p0.999     sample       6.552           us/op

The obvious indicator that this is not being vectorised is that performance does not degrade when setting:

-XX:-UseSuperWord

And to be sure, we can inspect the assembly code emitted (this time without disabling UseSuperWord!):
  0x000001869216cb67: mov     edx,ecx

  0x000001869216cb69: add     edx,r9d

  0x000001869216cb6c: cmp     ecx,edx

  0x000001869216cb6e: jnl     1869216cb1dh

  0x000001869216cb70: mov     r13d,dword ptr [r12+r11*8+8h]
                                                
  0x000001869216cb75: cmp     r13d,0f8007ed8h
                                      
  0x000001869216cb7c: jne     1869216d015h

  0x000001869216cb82: lea     r13,[r12+r11*8]

  0x000001869216cb86: movzx   eax,byte ptr [r13+29h]

  0x000001869216cb8b: test    eax,eax

  0x000001869216cb8d: je      1869216d015h    

The reality is that whenever sun.misc.Unsafe is used, directly or indirectly, we lose access to SIMD. The bottom line is that if you want to exploit instruction level parallelism, you should be prepared to write code even a beginner could understand instantly. Bizarre off-heap data structures and vectorised execution? Unlikely.