Don’t Fear the Builder

Writing a nontrivial piece of code requires a programmer to make countless tiny decisions, and to produce good quality code quickly, many of these decisions must be made subconsciously. But what happens if your instinct is mostly superstitious bullshit? I instinctively avoid allocating objects when I don’t have to, so I would not voluntarily resort to using things like EqualsBuilder or HashCodeBuilder from Apache Commons. I feel dismayed whenever I am asked about the relationship between hashCode and equals in interviews for contracts: it indicates a lack of mutual respect, but what’s revealing is that enough people passing themselves off as developers don’t know this stuff to make the question worth asking. EqualsBuilder and HashCodeBuilder make it so you don’t actually need to know, making it safe to put any object in a HashSet, whoever wrote the class. But should you use these classes just because they protect the naive (and you from the naive)? Is there a runtime tax? Or is any perceived cost magically eradicated by the JVM? It’s time to benchmark my instinct.

Baseline

There are definitely no allocations in the code IntelliJ (or similar) would generate for equals and hashCode automatically. I will use this generated code as the baseline: if I can match or beat this code with EqualsBuilder or HashCodeBuilder respectively, I can dismiss my prejudice. Here’s the baseline class:


import java.time.Instant;

public class Baseline {
    
    private final String a;
    private final int b;
    private final String c;
    private final Instant d;

    public Baseline(String a, int b, String c, Instant d) {
        this.a = a;
        this.b = b;
        this.c = c;
        this.d = d;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;

        Baseline baseline = (Baseline) o;

        if (b != baseline.b) return false;
        if (!a.equals(baseline.a)) return false;
        if (!c.equals(baseline.c)) return false;
        return d.equals(baseline.d);
    }

    @Override
    public int hashCode() {
        int result = a.hashCode();
        result = 31 * result + b;
        result = 31 * result + c.hashCode();
        result = 31 * result + d.hashCode();
        return result;
    }
}

It’s not pretty, and it’s a real pain to keep generated code up to date when adding and removing fields (you can delete it and regenerate it, but that creates merge noise), but there’s no obvious way to improve on this code. It’s correct and looks fast. To benchmark alternatives, I want to look at the effect on set membership queries against a HashSet of various sizes.

Builder Implementation

I will contrast this with the following implementation, which uses EqualsBuilder and HashCodeBuilder from Apache Commons (incidentally, also generated by IntelliJ):


import org.apache.commons.lang3.builder.EqualsBuilder;
import org.apache.commons.lang3.builder.HashCodeBuilder;

import java.time.Instant;

public class Builder {

    private final String a;
    private final int b;
    private final String c;
    private final Instant d;

    public Builder(String a, int b, String c, Instant d) {
        this.a = a;
        this.b = b;
        this.c = c;
        this.d = d;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;

        if (o == null || getClass() != o.getClass()) return false;

        Builder builder = (Builder) o;

        return new EqualsBuilder()
                .append(b, builder.b)
                .append(a, builder.a)
                .append(c, builder.c)
                .append(d, builder.d)
                .isEquals();
    }

    @Override
    public int hashCode() {
        return new HashCodeBuilder(17, 37)
                .append(a)
                .append(b)
                .append(c)
                .append(d)
                .toHashCode();
    }
}

This code is a bit neater and easier to maintain, but does it have an observable cost?

HashSet Benchmark

For a parametrised range of HashSet sizes, I measure throughput for calls to contains. This requires that both hashCode and equals are called, the latter potentially several times. There is bias in this benchmark, because the implementation producing the best hash function will minimise the calls to equals, but I am willing to reward the better implementation rather than maniacally isolate any attributable allocation cost.

The code is simple, if a little repetitive. For each implementation, I make some random instances of my class, and one more that I can be sure isn’t in the set. I measure the cost of repeated invocations of the hashCode and equals methods by searching for an object that is present, and, separately, for one that is absent.


import org.openjdk.jmh.annotations.*;

import java.time.Instant;
import java.util.HashSet;
import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

@State(Scope.Benchmark)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class HashSetContainsBenchmark {

    @Param({"8", "96", "1024", "8192"})
    int setSize;

    private HashSet<Baseline> baselineSet;
    private Baseline presentBaseline;
    private Baseline missingBaseline;

    private HashSet<Builder> builderSet;
    private Builder presentBuilder;
    private Builder missingBuilder;

    @Setup(Level.Trial)
    public void setup() {
        setupBaselineState();
        setupBuilderState();
    }

    private void setupBaselineState() {
        baselineSet = new HashSet<>();
        for (int i = 0; i < setSize; ++i) {
            while(!baselineSet.add(newBaselineInstance()));
        }
        presentBaseline = baselineSet.iterator().next();
        do {
            missingBaseline = newBaselineInstance();
        } while (baselineSet.contains(missingBaseline));
    }

    private void setupBuilderState() {
        builderSet = new HashSet<>();
        for (int i = 0; i < setSize; ++i) {
            while(!builderSet.add(newBuilderInstance()));
        }
        presentBuilder = builderSet.iterator().next();
        do {
            missingBuilder = newBuilderInstance();
        } while (builderSet.contains(missingBuilder));
    }

    @Benchmark
    public boolean findPresentBaselineInstance() {
        return baselineSet.contains(presentBaseline);
    }

    @Benchmark
    public boolean findMissingBaselineInstance() {
        return baselineSet.contains(missingBaseline);
    }

    @Benchmark
    public boolean findPresentBuilderInstance() {
        return builderSet.contains(presentBuilder);
    }

    @Benchmark
    public boolean findMissingBuilderInstance() {
        return builderSet.contains(missingBuilder);
    }

    private static Baseline newBaselineInstance() {
        ThreadLocalRandom random = ThreadLocalRandom.current();
        return new Baseline(UUID.randomUUID().toString(),
                random.nextInt(),
                UUID.randomUUID().toString(),
                Instant.ofEpochMilli(random.nextLong(0, Instant.now().toEpochMilli())));
    }

    private static Builder newBuilderInstance() {
        ThreadLocalRandom random = ThreadLocalRandom.current();
        return new Builder(UUID.randomUUID().toString(),
                random.nextInt(),
                UUID.randomUUID().toString(),
                Instant.ofEpochMilli(random.nextLong(0, Instant.now().toEpochMilli())));
    }
}

Running this benchmark with org.apache.commons:commons-lang3:3.7 yields the throughput results below. There is an occasional, slight throughput degradation with the builders, and the builder implementation is even faster in some of these measurements; either way, the difference isn’t large enough to worry about.

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit Param: setSize
findMissingBaselineInstance thrpt 1 10 153.878772 7.993036 ops/us 8
findMissingBaselineInstance thrpt 1 10 135.835085 6.031588 ops/us 96
findMissingBaselineInstance thrpt 1 10 137.716073 25.731562 ops/us 1024
findMissingBaselineInstance thrpt 1 10 123.896741 15.484023 ops/us 8192
findMissingBuilderInstance thrpt 1 10 142.051528 4.177352 ops/us 8
findMissingBuilderInstance thrpt 1 10 132.326909 1.017351 ops/us 96
findMissingBuilderInstance thrpt 1 10 127.220440 8.680761 ops/us 1024
findMissingBuilderInstance thrpt 1 10 140.029701 6.272960 ops/us 8192
findPresentBaselineInstance thrpt 1 10 139.714236 1.626873 ops/us 8
findPresentBaselineInstance thrpt 1 10 140.244864 1.777298 ops/us 96
findPresentBaselineInstance thrpt 1 10 135.273760 7.587937 ops/us 1024
findPresentBaselineInstance thrpt 1 10 133.158555 5.101069 ops/us 8192
findPresentBuilderInstance thrpt 1 10 111.196443 18.804103 ops/us 8
findPresentBuilderInstance thrpt 1 10 124.008441 5.182294 ops/us 96
findPresentBuilderInstance thrpt 1 10 126.729750 4.286002 ops/us 1024
findPresentBuilderInstance thrpt 1 10 124.159285 5.287920 ops/us 8192

Where did the builder go?

It’s worth taking a look at the compiler output to find out what happened. Considering the case where the object isn’t found in the set, we can see that the code gets inlined, and the hash code must be quite good. Enabling the args -XX:+PrintCompilation -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining makes it clear that the entire hash code generation gets inlined into the calling loop, and that equals is never executed (suggesting that there are no collisions to resolve here).

                              @ 8   java.util.HashSet::contains (9 bytes)   inline (hot)
                                @ 5   java.util.HashMap::containsKey (18 bytes)   inline (hot)
                                 \-> TypeProfile (17571/17571 counts) = java/util/HashMap
                                  @ 2   java.util.HashMap::hash (20 bytes)   inline (hot)
                                    @ 9   com.openkappa.simd.builder.Builder::hashCode (43 bytes)   inline (hot)
                                      @ 8   org.apache.commons.lang3.builder.HashCodeBuilder::<init> (60 bytes)   inline (hot)
                                        @ 1   java.lang.Object::<init> (1 bytes)   inline (hot)
                                        @ 26   org.apache.commons.lang3.Validate::isTrue (18 bytes)   unloaded signature classes
                                        @ 46   org.apache.commons.lang3.Validate::isTrue (18 bytes)   unloaded signature classes
                                      @ 15   org.apache.commons.lang3.builder.HashCodeBuilder::append (58 bytes)   inline (hot)
                                        @ 21   java.lang.Object::getClass (0 bytes)   (intrinsic)
                                        @ 24   java.lang.Class::isArray (0 bytes)   (intrinsic)
                                        @ 49   java.lang.String::hashCode (49 bytes)   inline (hot)
                                          @ 19   java.lang.String::isLatin1 (19 bytes)   inline (hot)
                                          @ 29   java.lang.StringLatin1::hashCode (42 bytes)   inline (hot)
                                      @ 22   org.apache.commons.lang3.builder.HashCodeBuilder::append (17 bytes)   inline (hot)
                                      @ 29   org.apache.commons.lang3.builder.HashCodeBuilder::append (58 bytes)   inline (hot)
                                        @ 21   java.lang.Object::getClass (0 bytes)   (intrinsic)
                                        @ 24   java.lang.Class::isArray (0 bytes)   (intrinsic)
                                        @ 49   java.lang.String::hashCode (49 bytes)   inline (hot)
                                          @ 19   java.lang.String::isLatin1 (19 bytes)   inline (hot)
                                          @ 29   java.lang.StringLatin1::hashCode (42 bytes)   inline (hot)
                                      @ 36   org.apache.commons.lang3.builder.HashCodeBuilder::append (58 bytes)   inline (hot)
                                        @ 21   java.lang.Object::getClass (0 bytes)   (intrinsic)
                                        @ 24   java.lang.Class::isArray (0 bytes)   (intrinsic)
                                        @ 49   java.time.Instant::hashCode (22 bytes)   inline (hot)
                                      @ 39   org.apache.commons.lang3.builder.HashCodeBuilder::toHashCode (5 bytes)   accessor
                                  @ 6   java.util.HashMap::getNode (148 bytes)   inline (hot)
                                    @ 59   com.openkappa.simd.builder.Builder::equals (84 bytes)   never executed
                                    @ 126   com.openkappa.simd.builder.Builder::equals (84 bytes)   never executed

So the code is clearly small enough to get inlined. But what about the builder itself – isn’t it allocated? Not if it doesn’t escape. On a debug JVM, it’s possible to observe the removal of the builder’s allocation with the JVM args -XX:+PrintEscapeAnalysis -XX:+PrintEliminateAllocations. On a production JVM, it can be observed indirectly by measuring the difference when escape analysis is disabled with -XX:-DoEscapeAnalysis.
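
For completeness, one hedged way to make that comparison repeatable is to add a forked variant of a benchmark method to the class above, using JMH’s per-fork JVM arguments; the method below is hypothetical and was not part of the original benchmark:

    @Fork(value = 1, jvmArgsAppend = {"-XX:-DoEscapeAnalysis"})
    @Benchmark
    public boolean findPresentBuilderInstanceNoEscapeAnalysis() {
        // same body as findPresentBuilderInstance, but this fork runs without escape analysis
        return builderSet.contains(presentBuilder);
    }

The throughput measured with escape analysis disabled is below.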

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit Param: setSize
findMissingBaselineInstance thrpt 1 10 150.972758 5.095163 ops/us 8
findMissingBaselineInstance thrpt 1 10 140.236057 2.222751 ops/us 96
findMissingBaselineInstance thrpt 1 10 132.680494 5.225503 ops/us 1024
findMissingBaselineInstance thrpt 1 10 115.220385 4.232488 ops/us 8192
findMissingBuilderInstance thrpt 1 10 68.869355 4.479944 ops/us 8
findMissingBuilderInstance thrpt 1 10 67.786351 5.980353 ops/us 96
findMissingBuilderInstance thrpt 1 10 71.113062 3.057181 ops/us 1024
findMissingBuilderInstance thrpt 1 10 75.088839 1.592294 ops/us 8192
findPresentBaselineInstance thrpt 1 10 141.425675 2.630898 ops/us 8
findPresentBaselineInstance thrpt 1 10 142.636359 2.854795 ops/us 96
findPresentBaselineInstance thrpt 1 10 143.939199 0.475918 ops/us 1024
findPresentBaselineInstance thrpt 1 10 142.006635 3.098352 ops/us 8192
findPresentBuilderInstance thrpt 1 10 71.546971 6.152584 ops/us 8
findPresentBuilderInstance thrpt 1 10 62.451705 11.896730 ops/us 96
findPresentBuilderInstance thrpt 1 10 68.903442 3.263955 ops/us 1024
findPresentBuilderInstance thrpt 1 10 69.576038 4.047581 ops/us 8192

Without escape analysis, the baseline code hardly slows down, whereas the change in the builder benchmarks is stark. The difference of differences here isolates the cost of allocating the HashCodeBuilder on the heap. Strictly speaking, when escape analysis succeeds the builder is not allocated on the stack either – it is scalar replaced, decomposed into its fields, which live in registers or on the stack. Escape analysis is enabled by default, and the key point is that even if the code looks like it should be slower, it might be just as good or better – it depends what the JIT does with it.

How much Algebra does C2 Know? Part 2: Distributivity

In part one of this series of posts, I looked at how important associativity and independence are for fast loops. C2 seems to utilise these properties to generate unrolled and pipelined machine code for loops, achieving higher throughput even in cases where the kernel of the loop is 3x slower according to vendor advertised instruction throughputs. C2 has a weird and wonderful relationship with distributivity, and hints from the programmer can both help and hinder the generation of good quality machine code.

Viability and Correctness

Distributivity is the simple notion of factoring out brackets. Is this, in general, a viable loop rewrite strategy? It can be used to transform the method Scale into FactoredScale, both of which perform floating point arithmetic:


    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    @Benchmark
    public double Scale(DoubleData state) {
        double value = 0D;
        double[] data = state.data1;
        for (int i = 0; i < data.length; ++i) {
            value += 3.14159 * data[i];
        }
        return value;
    }

    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    @Benchmark
    public double FactoredScale(DoubleData state) {
        double value = 0D;
        double[] data = state.data1;
        for (int i = 0; i < data.length; ++i) {
            value += data[i];
        }
        return 3.14159 * value;
    }

Running the project at github with the argument --include .*scale.* suggests there may be a performance gain to be had from this rewrite, but it isn’t clear cut:

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit Param: size
FactoredScale thrpt 1 10 7.011606 0.274742 ops/ms 100000
FactoredScale thrpt 1 10 0.621515 0.026853 ops/ms 1000000
Scale thrpt 1 10 6.962434 0.240180 ops/ms 100000
Scale thrpt 1 10 0.671042 0.011686 ops/ms 1000000

Over the real numbers the rewrite would be completely valid, but floating point arithmetic is not associative. Joseph Darcy explains why in this deep dive on floating point semantics. Broken associativity of addition entails broken distributivity of any operation over it, so the two loops are not equivalent, and they give different outputs (e.g. 15662.513298516365 vs 15662.51329851632 for one sample input). The rewrite changes the results for floating point data, so it isn’t an optimisation that could be applied in good faith, except in a very small number of cases. You have to rewrite the loop yourself and decide whether the small but inevitable differences are acceptable.
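
To make the non-associativity concrete, here is a minimal sketch (mine, not from the benchmark project) showing that merely reordering double additions changes the result:

public class FloatingPointAssociativity {
    public static void main(String[] args) {
        double a = 1e16, b = -1e16, c = 1.0;
        System.out.println((a + b) + c); // 1.0
        System.out.println(a + (b + c)); // 0.0 - the 1.0 is absorbed by -1e16 before a is added
    }
}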

Counterintuitive Performance

Integer multiplication is distributive over addition, and we can check if C2 does this rewrite by running the same code with 32 bit integer values, for now fixing a scale factor of 10 (which seems like an innocuous value, no?)


    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    @Benchmark
    public int Scale_Int(IntData state) {
        int value = 0;
        int[] data = state.data1;
        for (int i = 0; i < data.length; ++i) {
            value += 10 * data[i];
        }
        return value;
    }

    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    @Benchmark
    public int FactoredScale_Int(IntData state) {
        int value = 0;
        int[] data = state.data1;
        for (int i = 0; i < data.length; ++i) {
            value += data[i];
        }
        return 10 * value;
    }

The results are fascinating:

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit Param: size
FactoredScale_Int thrpt 1 10 28.339699 0.608075 ops/ms 100000
FactoredScale_Int thrpt 1 10 2.392579 0.506413 ops/ms 1000000
Scale_Int thrpt 1 10 33.335721 0.295334 ops/ms 100000
Scale_Int thrpt 1 10 2.838242 0.448213 ops/ms 1000000

The code is doing thousands more multiplications in less time when the multiplication is not factored out of the loop. So what the devil is going on? Inspecting the assembly for the faster loop is revealing:

  0x000001c89e499400: vmovdqu ymm8,ymmword ptr [rbp+r13*4+10h]
  0x000001c89e499407: movsxd  r10,r13d       
  0x000001c89e49940a: vmovdqu ymm9,ymmword ptr [rbp+r10*4+30h]
  0x000001c89e499411: vmovdqu ymm13,ymmword ptr [rbp+r10*4+0f0h]
  0x000001c89e49941b: vmovdqu ymm12,ymmword ptr [rbp+r10*4+50h]
  0x000001c89e499422: vmovdqu ymm4,ymmword ptr [rbp+r10*4+70h]
  0x000001c89e499429: vmovdqu ymm3,ymmword ptr [rbp+r10*4+90h]
  0x000001c89e499433: vmovdqu ymm2,ymmword ptr [rbp+r10*4+0b0h]
  0x000001c89e49943d: vmovdqu ymm0,ymmword ptr [rbp+r10*4+0d0h]
  0x000001c89e499447: vpslld  ymm11,ymm8,1h  
  0x000001c89e49944d: vpslld  ymm1,ymm0,1h   
  0x000001c89e499452: vpslld  ymm0,ymm0,3h   
  0x000001c89e499457: vpaddd  ymm5,ymm0,ymm1 
  0x000001c89e49945b: vpslld  ymm0,ymm2,3h   
  0x000001c89e499460: vpslld  ymm7,ymm3,3h   
  0x000001c89e499465: vpslld  ymm10,ymm4,3h 
  0x000001c89e49946a: vpslld  ymm15,ymm12,3h
  0x000001c89e499470: vpslld  ymm14,ymm13,3h
  0x000001c89e499476: vpslld  ymm1,ymm9,3h  
  0x000001c89e49947c: vpslld  ymm2,ymm2,1h  
  0x000001c89e499481: vpaddd  ymm6,ymm0,ymm2   
  0x000001c89e499485: vpslld  ymm0,ymm3,1h     
  0x000001c89e49948a: vpaddd  ymm7,ymm7,ymm0   
  0x000001c89e49948e: vpslld  ymm0,ymm4,1h     
  0x000001c89e499493: vpaddd  ymm10,ymm10,ymm0
  0x000001c89e499497: vpslld  ymm0,ymm12,1h   
  0x000001c89e49949d: vpaddd  ymm12,ymm15,ymm0
  0x000001c89e4994a1: vpslld  ymm0,ymm13,1h   
  0x000001c89e4994a7: vpaddd  ymm4,ymm14,ymm0 
  0x000001c89e4994ab: vpslld  ymm0,ymm9,1h    
  0x000001c89e4994b1: vpaddd  ymm2,ymm1,ymm0  
  0x000001c89e4994b5: vpslld  ymm0,ymm8,3h    
  0x000001c89e4994bb: vpaddd  ymm8,ymm0,ymm11 
  0x000001c89e4994c0: vphaddd ymm0,ymm8,ymm8  
  0x000001c89e4994c5: vphaddd ymm0,ymm0,ymm3  
  0x000001c89e4994ca: vextracti128 xmm3,ymm0,1h
  0x000001c89e4994d0: vpaddd  xmm0,xmm0,xmm3   
  0x000001c89e4994d4: vmovd   xmm3,ebx         
  0x000001c89e4994d8: vpaddd  xmm3,xmm3,xmm0   
  0x000001c89e4994dc: vmovd   r10d,xmm3        
  0x000001c89e4994e1: vphaddd ymm0,ymm2,ymm2   
  0x000001c89e4994e6: vphaddd ymm0,ymm0,ymm3   
  0x000001c89e4994eb: vextracti128 xmm3,ymm0,1h
  0x000001c89e4994f1: vpaddd  xmm0,xmm0,xmm3   
  0x000001c89e4994f5: vmovd   xmm3,r10d        
  0x000001c89e4994fa: vpaddd  xmm3,xmm3,xmm0   
  0x000001c89e4994fe: vmovd   r11d,xmm3        
  0x000001c89e499503: vphaddd ymm2,ymm12,ymm12  
  0x000001c89e499508: vphaddd ymm2,ymm2,ymm0    
  0x000001c89e49950d: vextracti128 xmm0,ymm2,1h 
  0x000001c89e499513: vpaddd  xmm2,xmm2,xmm0    
  0x000001c89e499517: vmovd   xmm0,r11d         
  0x000001c89e49951c: vpaddd  xmm0,xmm0,xmm2    
  0x000001c89e499520: vmovd   r10d,xmm0         
  0x000001c89e499525: vphaddd ymm0,ymm10,ymm10  
  0x000001c89e49952a: vphaddd ymm0,ymm0,ymm3   
  0x000001c89e49952f: vextracti128 xmm3,ymm0,1h
  0x000001c89e499535: vpaddd  xmm0,xmm0,xmm3
  0x000001c89e499539: vmovd   xmm3,r10d   
  0x000001c89e49953e: vpaddd  xmm3,xmm3,xmm0   
  0x000001c89e499542: vmovd   r11d,xmm3        
  0x000001c89e499547: vphaddd ymm2,ymm7,ymm7   
  0x000001c89e49954c: vphaddd ymm2,ymm2,ymm0   
  0x000001c89e499551: vextracti128 xmm0,ymm2,1h
  0x000001c89e499557: vpaddd  xmm2,xmm2,xmm0 
  0x000001c89e49955b: vmovd   xmm0,r11d      
  0x000001c89e499560: vpaddd  xmm0,xmm0,xmm2 
  0x000001c89e499564: vmovd   r10d,xmm0      
  0x000001c89e499569: vphaddd ymm0,ymm6,ymm6   
  0x000001c89e49956e: vphaddd ymm0,ymm0,ymm3   
  0x000001c89e499573: vextracti128 xmm3,ymm0,1h
  0x000001c89e499579: vpaddd  xmm0,xmm0,xmm3   
  0x000001c89e49957d: vmovd   xmm3,r10d        
  0x000001c89e499582: vpaddd  xmm3,xmm3,xmm0   
  0x000001c89e499586: vmovd   r11d,xmm3        
  0x000001c89e49958b: vphaddd ymm2,ymm5,ymm5   
  0x000001c89e499590: vphaddd ymm2,ymm2,ymm0   
  0x000001c89e499595: vextracti128 xmm0,ymm2,1h
  0x000001c89e49959b: vpaddd  xmm2,xmm2,xmm0
  0x000001c89e49959f: vmovd   xmm0,r11d     
  0x000001c89e4995a4: vpaddd  xmm0,xmm0,xmm2
  0x000001c89e4995a8: vmovd   r10d,xmm0
  0x000001c89e4995ad: vphaddd ymm2,ymm4,ymm4 
  0x000001c89e4995b2: vphaddd ymm2,ymm2,ymm1
  0x000001c89e4995b7: vextracti128 xmm1,ymm2,1h
  0x000001c89e4995bd: vpaddd  xmm2,xmm2,xmm1
  0x000001c89e4995c1: vmovd   xmm1,r10d  
  0x000001c89e4995c6: vpaddd  xmm1,xmm1,xmm2    
  0x000001c89e4995ca: vmovd   ebx,xmm1          

The loop is aggressively unrolled, pipelined, and vectorised. Moreover, the multiplication by ten results not in a multiplication but in two left shifts (see VPSLLD) and an addition. Note that (x << 1) + (x << 3) = x * 10 and C2 seems to know it; this rewrite can be applied because it can be proven statically that the factor is always 10. The “optimised” loop doesn’t vectorise at all (and I have no idea why not – isn’t this a bug? Yes it is.)
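
Incidentally, the identity C2 exploits here is easy to check for yourself; this is a throwaway snippet of mine, not part of the benchmark project:

public class ShiftDecomposition {
    public static void main(String[] args) {
        int x = 12345;
        // 2x + 8x == 10x
        System.out.println(((x << 1) + (x << 3)) == x * 10); // true
    }
}

Here is the assembly for the factored loop which, as noted above, doesn’t vectorise: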

  0x000002bbebeda3c8: add     ebx,dword ptr [rbp+r8*4+14h]
  0x000002bbebeda3cd: add     ebx,dword ptr [rbp+r8*4+18h]
  0x000002bbebeda3d2: add     ebx,dword ptr [rbp+r8*4+1ch]
  0x000002bbebeda3d7: add     ebx,dword ptr [rbp+r8*4+20h]
  0x000002bbebeda3dc: add     ebx,dword ptr [rbp+r8*4+24h]
  0x000002bbebeda3e1: add     ebx,dword ptr [rbp+r8*4+28h]
  0x000002bbebeda3e6: add     ebx,dword ptr [rbp+r8*4+2ch]
  0x000002bbebeda3eb: add     r13d,8h           
  0x000002bbebeda3ef: cmp     r13d,r11d         
  0x000002bbebeda3f2: jl      2bbebeda3c0h      
  

This is a special case: data is usually dynamic and variable, so the loop cannot always be proven to be equivalent to a linear combination of bit shifts. The routine is compiled for all possible parameters, not just statically contrived cases like the one above, so you may never see this assembly in the wild. However, even with random factors, the slow looking loop is aggressively optimised in a way the hand “optimised” code is not:


    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    @Benchmark
    public int Scale_Int_Dynamic(ScaleState state) {
        int value = 0;
        int[] data = state.data;
        int factor = state.randomFactor();
        for (int i = 0; i < data.length; ++i) {
            value += factor * data[i];
        }
        return value;
    }

    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    @Benchmark
    public int FactoredScale_Int_Dynamic(ScaleState state) {
        int value = 0;
        int[] data = state.data;
        int factor = state.randomFactor();
        for (int i = 0; i < data.length; ++i) {
            value += data[i];
        }
        return factor * value;
    }

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit Param: size
FactoredScale_Int_Dynamic thrpt 1 10 26.100439 0.340069 ops/ms 100000
FactoredScale_Int_Dynamic thrpt 1 10 1.918011 0.297925 ops/ms 1000000
Scale_Int_Dynamic thrpt 1 10 30.219809 2.977389 ops/ms 100000
Scale_Int_Dynamic thrpt 1 10 2.314159 0.378442 ops/ms 1000000

Far from seeking to exploit distributivity to reduce the number of multiplication instructions, C2 seems almost to embrace the extraneous operations as metadata to drive optimisations. The assembly for Scale_Int_Dynamic confirms this (it shows vectorised multiplication, not shifts, within the loop):


  0x000001f5ca2fa200: vmovdqu ymm0,ymmword ptr [r13+r14*4+10h]
  0x000001f5ca2fa207: vpmulld ymm11,ymm0,ymm2   
  0x000001f5ca2fa20c: movsxd  r10,r14d          
  0x000001f5ca2fa20f: vmovdqu ymm0,ymmword ptr [r13+r10*4+30h]
  0x000001f5ca2fa216: vmovdqu ymm1,ymmword ptr [r13+r10*4+0f0h]
  0x000001f5ca2fa220: vmovdqu ymm3,ymmword ptr [r13+r10*4+50h]
  0x000001f5ca2fa227: vmovdqu ymm7,ymmword ptr [r13+r10*4+70h]
  0x000001f5ca2fa22e: vmovdqu ymm6,ymmword ptr [r13+r10*4+90h]
  0x000001f5ca2fa238: vmovdqu ymm5,ymmword ptr [r13+r10*4+0b0h]
  0x000001f5ca2fa242: vmovdqu ymm4,ymmword ptr [r13+r10*4+0d0h]
  0x000001f5ca2fa24c: vpmulld ymm9,ymm0,ymm2    
  0x000001f5ca2fa251: vpmulld ymm4,ymm4,ymm2    
  0x000001f5ca2fa256: vpmulld ymm5,ymm5,ymm2    
  0x000001f5ca2fa25b: vpmulld ymm6,ymm6,ymm2    
  0x000001f5ca2fa260: vpmulld ymm8,ymm7,ymm2    
  0x000001f5ca2fa265: vpmulld ymm10,ymm3,ymm2   
  0x000001f5ca2fa26a: vpmulld ymm3,ymm1,ymm2    
  0x000001f5ca2fa26f: vphaddd ymm1,ymm11,ymm11  
  0x000001f5ca2fa274: vphaddd ymm1,ymm1,ymm0    
  0x000001f5ca2fa279: vextracti128 xmm0,ymm1,1h 
  0x000001f5ca2fa27f: vpaddd  xmm1,xmm1,xmm0    
  0x000001f5ca2fa283: vmovd   xmm0,ebx          
  0x000001f5ca2fa287: vpaddd  xmm0,xmm0,xmm1    
  0x000001f5ca2fa28b: vmovd   r10d,xmm0         
  0x000001f5ca2fa290: vphaddd ymm1,ymm9,ymm9    
  0x000001f5ca2fa295: vphaddd ymm1,ymm1,ymm0    
  0x000001f5ca2fa29a: vextracti128 xmm0,ymm1,1h 
  0x000001f5ca2fa2a0: vpaddd  xmm1,xmm1,xmm0    
  0x000001f5ca2fa2a4: vmovd   xmm0,r10d         
  0x000001f5ca2fa2a9: vpaddd  xmm0,xmm0,xmm1    
  0x000001f5ca2fa2ad: vmovd   r11d,xmm0         
  0x000001f5ca2fa2b2: vphaddd ymm0,ymm10,ymm10  
  0x000001f5ca2fa2b7: vphaddd ymm0,ymm0,ymm1    
  0x000001f5ca2fa2bc: vextracti128 xmm1,ymm0,1h 
  0x000001f5ca2fa2c2: vpaddd  xmm0,xmm0,xmm1    
  0x000001f5ca2fa2c6: vmovd   xmm1,r11d         
  0x000001f5ca2fa2cb: vpaddd  xmm1,xmm1,xmm0    
  0x000001f5ca2fa2cf: vmovd   r10d,xmm1         
  0x000001f5ca2fa2d4: vphaddd ymm1,ymm8,ymm8    
  0x000001f5ca2fa2d9: vphaddd ymm1,ymm1,ymm0    
  0x000001f5ca2fa2de: vextracti128 xmm0,ymm1,1h 
  0x000001f5ca2fa2e4: vpaddd  xmm1,xmm1,xmm0    
  0x000001f5ca2fa2e8: vmovd   xmm0,r10d         
  0x000001f5ca2fa2ed: vpaddd  xmm0,xmm0,xmm1    
  0x000001f5ca2fa2f1: vmovd   r11d,xmm0         
  0x000001f5ca2fa2f6: vphaddd ymm0,ymm6,ymm6    
  0x000001f5ca2fa2fb: vphaddd ymm0,ymm0,ymm1    
  0x000001f5ca2fa300: vextracti128 xmm1,ymm0,1h 
  0x000001f5ca2fa306: vpaddd  xmm0,xmm0,xmm1    
  0x000001f5ca2fa30a: vmovd   xmm1,r11d         
  0x000001f5ca2fa30f: vpaddd  xmm1,xmm1,xmm0    
  0x000001f5ca2fa313: vmovd   r10d,xmm1         
  0x000001f5ca2fa318: vphaddd ymm1,ymm5,ymm5    
  0x000001f5ca2fa31d: vphaddd ymm1,ymm1,ymm0    
  0x000001f5ca2fa322: vextracti128 xmm0,ymm1,1h 
  0x000001f5ca2fa328: vpaddd  xmm1,xmm1,xmm0    
  0x000001f5ca2fa32c: vmovd   xmm0,r10d         
  0x000001f5ca2fa331: vpaddd  xmm0,xmm0,xmm1    
  0x000001f5ca2fa335: vmovd   r11d,xmm0         
  0x000001f5ca2fa33a: vphaddd ymm0,ymm4,ymm4    
  0x000001f5ca2fa33f: vphaddd ymm0,ymm0,ymm1    
  0x000001f5ca2fa344: vextracti128 xmm1,ymm0,1h 
  0x000001f5ca2fa34a: vpaddd  xmm0,xmm0,xmm1    
  0x000001f5ca2fa34e: vmovd   xmm1,r11d         
  0x000001f5ca2fa353: vpaddd  xmm1,xmm1,xmm0    
  0x000001f5ca2fa357: vmovd   r10d,xmm1         
  0x000001f5ca2fa35c: vphaddd ymm1,ymm3,ymm3    
  0x000001f5ca2fa361: vphaddd ymm1,ymm1,ymm7    
  0x000001f5ca2fa366: vextracti128 xmm7,ymm1,1h 
  0x000001f5ca2fa36c: vpaddd  xmm1,xmm1,xmm7   
  0x000001f5ca2fa370: vmovd   xmm7,r10d        
  0x000001f5ca2fa375: vpaddd  xmm7,xmm7,xmm1   
  0x000001f5ca2fa379: vmovd   ebx,xmm7         

There are two lessons to be learnt here. The first is that what you see is not what you get. The second is about the relevance of asymptotic analysis. If hierarchical cache renders asymptotic analysis bullshit (linear time but cache friendly algorithms can, and do, outperform logarithmic algorithms that incur cache misses), optimising compilers render the field practically irrelevant.

Confusing Sets and Lists

I have often seen the roles of lists and sets confused. An application can be brought to its knees – that is, cease to deliver commercial value – if List.contains is called frequently enough on big enough lists. And then there is the workaround… When I moved over to Java from C++ several years ago, it seemed utterly crazy that there was even a Collection interface – exactly what Scott Meyers’ Effective STL said not to do. I still think it’s crazy. Sets and lists cannot be substituted for one another, and when you add up the marginal costs, as well as the costs of any compensatory workarounds, confusing them is responsible for a lot of performance bugs. As an application developer, it is part of your job to choose. Here are a few simple examples of when to use each collection type.

Contains

Is an element in the collection?

Never ever do this with a List. This operation is O(n): in the worst case, it touches every element in the list. You have a choice between a HashSet and a TreeSet, and both have costs and benefits.

If you choose a HashSet, your best case is O(1): you evaluate a hash code, take its modulus with respect to the size of an array, and look up a bucket containing only one element. The worst case occurs with a degenerate hash code which maps all elements to the same bucket. This is again O(n): you probe a linked list testing each element for equality. On average you get something between these two cases and it depends on the uniformity of your hash code implementation.

If you choose a TreeSet, the worst case is O(log n): this is effectively just a binary search through the nodes in a red-black tree. Performance is limited by the cost of the comparator, and suffers systematically from cache misses for large sets (as with any kind of pointer chasing, branch prediction and prefetching are difficult if not impossible).

If you’re working with numbers, and small to medium collections, a sorted primitive array may be the best approach, so long as it fits in cache. If you’re working with integers, you can do this in constant time in the worst case by using a BitSet.
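
For the integer case, a minimal sketch of constant time membership with a BitSet (assuming the values are non-negative and bounded) might look like this:

import java.util.BitSet;

public class BitSetMembership {
    public static void main(String[] args) {
        BitSet members = new BitSet(1 << 16); // one bit per possible value
        members.set(42);
        members.set(31337);
        System.out.println(members.get(42));   // true, a single word load and mask
        System.out.println(members.get(1234)); // false, same cost
    }
}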

Select

What is the value of the element at a given index with respect to a sort order?

This is an obvious use case for a List: it’s O(1) – this is just a lookup at an array offset.

You couldn’t even write the code with a HashSet without copying the data into an intermediate ordered structure, at which point you would probably think again. You see this sort of thing done in code written by inexpensive programmers at large outsourcing consultancies, who were perhaps just under unreasonable pressure to deliver to arbitrary deadlines.

SortedSet, and anything conceptually similar, is the wrong data structure for this operation. The only way to compute this is O(n): you iterate through the set incrementing a counter until you reach the index, and then return the element you’ve iterated to. If you reach the end of the set, you throw. If you do this a lot, you’ll notice.
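
To make the contrast concrete, here is a hedged sketch (the names are mine, not from the post) of select by index against a sorted list versus a SortedSet:

import java.util.List;
import java.util.SortedSet;

public class Select {

    static <T> T selectFromList(List<T> sorted, int index) {
        return sorted.get(index); // a constant time array offset lookup for an ArrayList
    }

    static <T> T selectFromSortedSet(SortedSet<T> set, int index) {
        int i = 0;
        for (T element : set) {
            if (i++ == index) {
                return element; // only reached after iterating past every predecessor
            }
        }
        throw new IndexOutOfBoundsException(String.valueOf(index));
    }
}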

Rank

How many predecessors does an element have with respect to a sort order?

Another classic operation for List, so long as you keep it sorted. Use Collections.binarySearch to find the index of the element in the collection with complexity O(log n). This is its rank. If you can get away with it, primitive arrays will be much better here, especially if they are small enough to fit in cache.
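
A minimal sketch of rank against a sorted list of distinct elements (my own illustration, not taken from the post):

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class Rank {

    static int rank(List<Integer> sorted, int value) {
        int index = Collections.binarySearch(sorted, value);
        // a negative result encodes the insertion point, which is also the number of predecessors
        return index >= 0 ? index : -(index + 1);
    }

    public static void main(String[] args) {
        List<Integer> sorted = Arrays.asList(10, 20, 30, 40);
        System.out.println(rank(sorted, 30)); // 2
        System.out.println(rank(sorted, 25)); // 2 - it would be inserted at index 2
    }
}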

Once again, there are creativity points available for the solution involving a HashSet, and they do exist in the wild, but a clean looking solution is at least possible with a SortedSet. However, it involves an iteration with another check against an incrementing counter. It’s O(n) and if you do it a lot, you’ll blow your performance profile, so use a sorted list instead.

What if you had the source code?

Is this fundamental or just a limitation of the Collections framework? Maybe if you had the source code you could just make these data structures optimal for every use case, without having to choose the right one? Not without creating a Frankenstein, and not without a lot of memory. Optimisation isn’t free.

How much Algebra does C2 Know? Part 1: Associativity

Making loops execute faster is firmly rooted in algebra, but how much algebra does C2 know or care about? When building a highly optimised query engine, a critical concern is the quality of the assembly code generated for loops. There is a lot more to JIT compilation than loop optimisation: inlining, class hierarchy analysis and escape analysis, to name but a few. Moreover, everything it does has to be fast, since it shares resources with the application itself; it can’t spend time on an optimisation unless it brings a net benefit. Being such a generalist, does C2, the JIT compiler used in server applications, know high school algebra?

Specific mathematical knowledge is not always worth building into a compiler, even when it could lead to big performance gains. As a motivating example, there is no way to refer directly to the natural numbers in any programming language I have ever used. For instance, the sum of the first n natural numbers is ((n+1) * n)/2, and most high school students know it. This expression is intuitively much faster to evaluate than the equivalent algorithm:


int sum(int n) {
    int total = 0;
    for (int i = 0; i <= n; ++i) {
        total += i;
    }
    return total;
}

But would this loop rewrite be a worthwhile optimisation? The expression takes about 3.5ns to compute the sum of the first million natural numbers, whereas the loop takes 350µs, so we can conclude that C2 does not know this formula and prefers brute force. I would be aghast if time had been spent on optimisations like this: unless your application spends a lot of time adding up contiguous ranges of natural numbers, the marginal benefit is negligible. If this is what your application does most, you should do it yourself. The possibility of an optimisation doesn’t imply its viability: there needs to be a benefit when considering engineering effort, speed improvement, reliability and ubiquity. While this optimisation fails miserably on the grounds of ubiquity, there’s useful schoolboy maths that C2 does seem to know.
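
For reference, the closed form is a one-liner; this is a sketch of mine, using long to avoid overflowing int for large n:

static long sumOfNaturals(long n) {
    return (n * (n + 1)) / 2;
}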

Associativity and Dependencies

Each x86 instruction has a throughput – usually quoted as a reciprocal throughput, the average number of cycles each instruction occupies when many independent copies execute – and a latency – the number of cycles before its result is available to the next instruction in a dependency chain. These numbers are produced by processor vendors, but there are independent measurements, like these from Agner Fog, which also include more detailed definitions of terms like latency. At first, the latency number feels a bit like a scam: what use is an advertised throughput if we can’t use the result immediately? This is where pipelining comes in: independent instructions can be interleaved. If a loop operation is associative and there are no dependencies between iterations, then it can be unrolled, which enables pipelining. If a loop operation is also commutative, then out of order execution is permitted. Evidence of an unrolled loop suggests that the compiler has realised that an operation is at least associative.

To see this in action it’s necessary to find an associative loop reduction that the compiler can’t vectorise. I took an example from the RoaringBitmap library – computing the cardinality of a bitmap container – which captures this behaviour perfectly, because bit counts cannot be vectorised in Java.


  /**
   * Recomputes the cardinality of the bitmap.
   */
  protected void computeCardinality() {
    this.cardinality = 0;
    for (int k = 0; k < this.bitmap.length; k++) {
      this.cardinality += Long.bitCount(this.bitmap[k]);
    }
  }

We can see evidence of loop unrolling and out of order execution in the assembly code emitted. The popcnt instructions are executed on the array out of order, and do not wait for the addition to the accumulator.

popcnt  r9,qword ptr [rbx+r13*8+10h]
movsxd  r8,r13d
popcnt  r10,qword ptr [rbx+r8*8+28h]
popcnt  r11,qword ptr [rbx+r8*8+18h]
popcnt  rdx,qword ptr [rbx+r8*8+20h]
movsxd  r8,r9d
add     r8,rbp
movsxd  r9,edx
To generate this assembly code you can run the project at github with the arguments

--include .*popcnt.* 
--print-assembly

The compiler does a very good job in this case: you can try unrolling the loop yourself, but you can only match performance if you guess the loop stride correctly. It’s impossible to prove a negative proposition, but it’s likely you’ll only make it worse if you try. C2 graduates with flying colours here: it definitely understands associativity and dependence.
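
For the curious, a manual unrolling attempt might look something like the sketch below (my own code, not from RoaringBitmap); the four independent accumulators allow the bit counts to overlap in the pipeline, but matching C2 still depends on guessing the stride it would have chosen:

    int cardinality(long[] bitmap) {
        int c0 = 0, c1 = 0, c2 = 0, c3 = 0;
        int i = 0;
        for (; i + 3 < bitmap.length; i += 4) {
            // four independent dependency chains, so the popcnts can be pipelined
            c0 += Long.bitCount(bitmap[i]);
            c1 += Long.bitCount(bitmap[i + 1]);
            c2 += Long.bitCount(bitmap[i + 2]);
            c3 += Long.bitCount(bitmap[i + 3]);
        }
        for (; i < bitmap.length; ++i) { // handle the tail
            c0 += Long.bitCount(bitmap[i]);
        }
        return c0 + c1 + c2 + c3;
    }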

The catch with pipelining is that an instruction must always wait for its operands. Though the addition is associative, the loop below carries a dependency from one iteration to the next, so there is no way to reorder it.


    private int[] prefixSum(int[] data) {
        int[] result = new int[data.length];
        for (int i = 1; i < result.length; ++i) {
            result[i] = result[i - 1] + data[i];
        }
        return result;
    }

What happens with a prefix sum? There’s no unrolling: you can see the loop control statements have not been removed (look for instructions like cmp ebx, inc ebx). The loop is also executed in order because it is sequentially dependent.


  0x000001c21215bbc8: mov     r9d,dword ptr [r8+0ch]  
  0x000001c21215bbcc: mov     ebp,dword ptr [r13+rbx*4+0ch]
  0x000001c21215bbd1: cmp     ebx,r9d           
  0x000001c21215bbd4: jnb     1c21215bce1h      
  0x000001c21215bbda: add     ebp,dword ptr [r8+rbx*4+10h]
  0x000001c21215bbdf: cmp     ebx,edi          

Does this harm performance? add has a reciprocal throughput of 0.33 cycles, whereas popcnt completes one instruction per cycle. Shouldn’t a prefix sum be faster to calculate than a population count, over the same length of array and the same width of integer? They can be compared head to head (implementing the prefix sum for long[] to keep the word width constant):

--include .*prefix.PrefixSum.PrefixSumLong|.*popcnt.PopCount.PopCount$

Far from having 3x throughput, the prefix sum is much worse. This is entirely because there is no loop unrolling and no pipelining. When possible, C2 applies aggressive unrolling optimisations unavailable to the programmer. For vectorisable operations (requiring linear independence and countability), loop unrolling further marks the loop as a candidate for auto-vectorisation.

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit Param: size
PopCount thrpt 1 10 9.174499 0.394487 ops/ms 100000
PopCount thrpt 1 10 1.217521 0.513734 ops/ms 1000000
PrefixSumLong thrpt 1 10 6.807279 0.925282 ops/ms 100000
PrefixSumLong thrpt 1 10 0.443974 0.053544 ops/ms 1000000

If a dependency needs to fetch data from RAM, the latency can be much higher than loading from registers or from prefetched cache. Even in that worst case, independent instructions can complete during the delay, unless they have a false dependency.

Targeting SIMD in Java

Vectorised instruction execution can be targeted directly in C++, but with Java there are extra layers of abstraction to go through. Folklore aside, when does vectorisation or SIMD execution actually happen in Java? Skeptical of old wives’ tales, I investigate when SIMD instructions are actually used in Java 9, and how to prevent their use by programming badly.

Building a Benchmark Harness

I use JMH to write benchmarks. For the uninitiated, it is the standard Java micro-benchmarking framework and handles various pitfalls, the most salient being that it ensures your code actually executes (rather than being eliminated as dead code), and that measurement is performed against a monotonic measure of time. Benchmarks produced without JMH are unlikely to be correct and should arouse suspicion, especially if used during a sales process.

Averages always lie, so minimisation of 99.9th percentile execution time is a better objective to have in mind. As a general performance indicator I measure throughput in operations per time unit, which is useful to correlate with JVM compilation events.

To tune performance, before even worrying about achieving SIMD execution, we need to be aware of recompilation and failures to inline. These diagnostics are enabled by the arguments below, and my simple harness has a specific mode to support them:

-XX:+PrintCompilation 
-XX:+UnlockDiagnosticVMOptions 
-XX:+PrintInlining

The proof that code is actually being vectorised is to observe the emission of AVX instructions. To see whether this has happened (and if it does, it will be correlated with astonishing performance statistics), we need the assembly code, so I run the benchmark in a mode that prints the generated assembly via the arguments:

-XX:+UnlockDiagnosticVMOptions
-XX:+PrintAssembly
-XX:PrintAssemblyOptions=hsdis-print-bytes 
-XX:CompileCommand=print

However, SIMD execution only happens when SuperWord parallelism is enabled (and by default, it is) so we won’t even need to look at the assembly unless we see a clear difference when running the benchmark without the UseSuperWord option:

-XX:-UseSuperWord

What Gains Can Be Expected?

Is vectorised code execution a panacea? Assuming we can force the JVM to use SIMD, how much performance can be expected? It turns out that java.util.Arrays.fill can be vectorised, so we can get a taste of the difference it makes. We can observe its impact by benchmarking throughput with and without SuperWord instruction level parallelism.


    private void benchmark(String... jvmArgs) throws RunnerException {
            Options opt = new OptionsBuilder()
                    .include(this.getClass().getName() + ".*")
                    .mode(Mode.Throughput)
                    .timeUnit(TimeUnit.MILLISECONDS)
                    .warmupIterations(5)
                    .measurementIterations(5)
                    .forks(1)
                    .shouldFailOnError(true)
                    .shouldDoGC(true)
                    .operationsPerInvocation(1_000_000)
                    .jvmArgs(jvmArgs)
                    .build();
            new Runner(opt).run();
    }

    @Benchmark
    public void fillArray(Blackhole bh)  {
        double[] array = new double[1 << 10];
        for(int i = 0; i < 1_000_000; ++i) {
            Arrays.fill(array, i);
            bh.consume(array);
        }
    }

# VM version: JDK 9-ea, VM 9-ea+166
...
# VM options: -XX:-UseSuperWord
...

Benchmark                 Mode  Cnt    Score     Error   Units
TestBenchmark.fillArray  thrpt    5  966.947 ± 596.705  ops/ms

# VM version: JDK 9-ea, VM 9-ea+166
...
# VM options: -XX:+UseSuperWord
...

Benchmark                 Mode  Cnt     Score      Error   Units
TestBenchmark.fillArray  thrpt    5  4230.802 ± 5311.001  ops/ms

The difference in throughput is palpable and astonishing.

Intrinsics

Various intrinsics, such as Arrays.fill above, compile down to vectorised instructions. When SuperWord parallelism is enabled, they will always run faster. The JIT compiler will also target simple hand-written code, but intrinsics are a safer bet.

Adding Arrays Together

Addition of arrays of primitives can be vectorised automatically.


    public double[] add(double[] left, double[] right) {
        double[] result = new double[left.length];
        for(int i = 0; i < left.length && i < right.length; ++i) {
            result[i] = left[i] + right[i];
        }
        return result;
    }

The add method has a small code size of 43 bytes, so it gets inlined when benchmarked. Throughput is doubled and we see evidence of dynamic optimisation.

# Run progress: 25.00% complete, ETA 00:02:42
# Fork: 1 of 1
# Warmup Iteration   1: 0.372 ops/us
# Warmup Iteration   2: 0.340 ops/us
# Warmup Iteration   3: 0.432 ops/us
# Warmup Iteration   4: 0.497 ops/us
# Warmup Iteration   5: 0.499 ops/us
Iteration   1: 0.398 ops/us
Iteration   2: 0.364 ops/us
Iteration   3: 0.408 ops/us
Iteration   4: 0.544 ops/us
Iteration   5: 0.401 ops/us

This code is twice as fast, and our three nines is better, just by virtue of keeping the code simple.

TestBenchmark.add:add·p0.999             sample       2.164          us/op

But are we getting SIMD execution? Possibly – disabling SuperWord yields noticeably worse results.

# Run progress: 25.00% complete, ETA 00:02:22
# Fork: 1 of 1
# Warmup Iteration   1: 0.261 ops/us
# Warmup Iteration   2: 0.343 ops/us
# Warmup Iteration   3: 0.294 ops/us
# Warmup Iteration   4: 0.320 ops/us
# Warmup Iteration   5: 0.316 ops/us
Iteration   1: 0.293 ops/us
Iteration   2: 0.276 ops/us
Iteration   3: 0.304 ops/us
Iteration   4: 0.291 ops/us
Iteration   5: 0.279 ops/us

It’s worth inspecting the assembly to see if we can observe the emission of AVX instructions once we reenable UseSuperWord. Assembly is easier to read if inlining is disabled; this can be controlled in JMH with the annotation @CompilerControl(CompilerControl.Mode.DONT_INLINE). Applying the annotation, the assembly is printed using the appropriate JVM args:
-XX:+UnlockDiagnosticVMOptions 
-XX:+PrintAssembly
-XX:PrintAssemblyOptions=hsdis-print-bytes 
-XX:CompileCommand=print

0x00000154edb691e0: vmovdqu ymm0,ymmword ptr [rbp+r10*8+10h]
0x00000154edb691e7: vaddpd  ymm0,ymm0,ymmword ptr [rdx+r10*8+10h]
0x00000154edb691ee: vmovdqu ymmword ptr [r8+r10*8+10h],ymm0
0x00000154edb691f5: add     r10d,4h
0x00000154edb691f9: cmp     r10d,ebx
0x00000154edb691fc: jl      154edb691e0h

It’s true – this code is indeed vectorised – see the AVX instructions (vmovdqu, vaddpd) operating on ymm registers above.

Ruining It

Having witnessed vectorisation when adding arrays together, let’s try and break it. There are a few patterns we can apply to break our vectorised code:

  • Putting an OR condition as the loop condition
  • Putting a non-inlined method inside the loop
  • Putting an arbitrary method as the loop condition
  • Manually unrolling the loop
  • Using a long as the loop variable (illustrated in the sketch after this list)
  • Multiple exit points
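
The long loop variable pattern, for instance, can be illustrated with a hedged variation of the add method (my code, not the post’s); in my experience the long induction variable and the casts it forces are enough to defeat auto-vectorisation, though the exact behaviour depends on the JVM version:

    public double[] addWithLongCounter(double[] left, double[] right) {
        double[] result = new double[left.length];
        for (long i = 0; i < left.length && i < right.length; ++i) {
            // no longer a simple int-counted loop, so it is unlikely to vectorise
            result[(int) i] = left[(int) i] + right[(int) i];
        }
        return result;
    }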

The list goes on but now let’s really fuck it up. We were using double[] so let’s see what happens if we use a DirectByteBuffer as the backing for a homegrown vector construct instead. Instead of returning a new heap allocated array, we will write our doubles into a byte buffer, and use an offset and length to delineate the arrays. For instance, the code below will write the sum of two arrays stored in a byte buffer back into the same byte buffer. We can abstract vectors by creating small on-heap objects for each array which know the offset and length of each array in the buffer.


public int add(ByteBuffer byteBuffer,
               int leftOffset, int leftLength,
               int rightOffset, int rightLength,
               int resultOffset) {
    int resultIndex = resultOffset;
    for (int l = leftOffset, r = rightOffset;
         l < leftOffset + leftLength && r < rightOffset + rightLength;
         l += 8, r += 8, resultIndex += 8) {
        byteBuffer.putDouble(resultIndex, byteBuffer.getDouble(l) + byteBuffer.getDouble(r));
    }
    return resultIndex;
}

Is this clever? We’ve beaten the garbage collector. It certainly feels like a clever engineering story. No, this is actually a huge regression.

# Run progress: 0.00% complete, ETA 00:00:40
# Fork: 1 of 1
# Warmup Iteration   1: 0.156 ops/us
# Warmup Iteration   2: 0.160 ops/us
# Warmup Iteration   3: 0.198 ops/us
# Warmup Iteration   4: 0.190 ops/us
# Warmup Iteration   5: 0.272 ops/us
Iteration   1: 0.220 ops/us
Iteration   2: 0.242 ops/us
Iteration   3: 0.216 ops/us
Iteration   4: 0.248 ops/us
Iteration   5: 0.351 ops/us

TestBenchmark.byteBuffer:byteBuffer·p0.999     sample       6.552           us/op

The obvious indicator that this is not being vectorised is that performance does not degrade when setting
-XX:-UseSuperWord

And to be sure we can inspect the assembly code emitted (without disabling UseSuperWord!).
  0x000001869216cb67: mov     edx,ecx
  0x000001869216cb69: add     edx,r9d
  0x000001869216cb6c: cmp     ecx,edx
  0x000001869216cb6e: jnl     1869216cb1dh
  0x000001869216cb70: mov     r13d,dword ptr [r12+r11*8+8h]
  0x000001869216cb75: cmp     r13d,0f8007ed8h
  0x000001869216cb7c: jne     1869216d015h
  0x000001869216cb82: lea     r13,[r12+r11*8]
  0x000001869216cb86: movzx   eax,byte ptr [r13+29h]
  0x000001869216cb8b: test    eax,eax
  0x000001869216cb8d: je      1869216d015h

The reality is that arrays are heavily optimised, whereas ByteBuffers tend to be dreadfully slow. The bottom line is that if you want to exploit instruction level parallelism, you should be prepared to write code even a beginner could understand instantly. Bizarre off-heap data structures and vectorised execution? Unlikely.