Limiting Factors in a Dot Product Calculation

The dot product is ubiquitous in computing. The simple calculation is used heavily in neural networks for perceptron activation and again in gradient descent algorithms for error backpropagation. It is one of the building blocks of linear regression. Cosine similarity in document search is yet another dot product.
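
For example, cosine similarity reduces to three dot products (a sketch only; dot stands for a routine like the vanilla implementation below):

  static float cosineSimilarity(float[] left, float[] right) {
    // cos(left, right) = dot(left, right) / (|left| * |right|), and each norm is
    // itself the square root of a dot product
    return dot(left, right) / (float) Math.sqrt((double) dot(left, left) * dot(right, right));
  }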

These use cases are more often implemented in C/C++ than in JVM languages, for reasons of efficiency, but what are the constraints on its computational performance? The combination of the computational simplicity and its streaming nature means the limiting factor in efficient code should be memory bandwidth. This is a good opportunity to look at the raw performance that will be made available with the vector API when it’s released.

Since Java 9, the code to calculate a dot product is easy to write in Java, using the Math.fma intrinsic.

  public float vanilla() {
    float sum = 0f;
    for (int i = 0; i < size; ++i) {
      sum = Math.fma(left[i], right[i], sum);
    }
    return sum;
  }

Despite its simplicity, this code is incredibly inefficient precisely because it’s written in Java. Java is a language which prizes portability, and this sometimes comes at the cost of performance. The only way to make this routine produce the same number given the same input, no matter what operating system or instruction sets are available, is to do the operations in the same order, which means no unrolling or vectorisation. For a web application this is a good trade off, but for data analytics it is not.

An estimate of intensity, assuming a constant processor frequency and two floating point operations (flops) per FMA, shows that the intensity is constant but very low, at 0.67 flops/cycle. That the intensity is constant as a function of array size is interesting, because it indicates that the performance is insensitive to the cache hierarchy: the limit is the CPU. Daniel Lemire recently made the same observation with a benchmark written in C, with fastmath compiler optimisations disabled.
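
For concreteness, the intensity figures in this post can be derived from a JMH throughput score with a back-of-envelope calculation like the sketch below (clockHz stands in for the assumed constant clock frequency):

  // flops/cycle = (invocations per second) * (flops per invocation) / (cycles per second)
  static double intensity(double opsPerSecond, int size, double clockHz) {
    double flopsPerInvocation = 2.0 * size; // one FMA per element, two flops per FMA
    return opsPerSecond * flopsPerInvocation / clockHz;
  }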

The JLS’s view on floating point arithmetic is the true limiting factor here. Assuming you really care about dot product performance, the best you can do to opt out is to unroll the loop and get slightly higher throughput.

  public float unrolled() {
    float s0 = 0f;
    float s1 = 0f;
    float s2 = 0f;
    float s3 = 0f;
    float s4 = 0f;
    float s5 = 0f;
    float s6 = 0f;
    float s7 = 0f;
    for (int i = 0; i < size; i += 8) {
      s0 = Math.fma(left[i + 0],  right[i + 0], s0);
      s1 = Math.fma(left[i + 1],  right[i + 1], s1);
      s2 = Math.fma(left[i + 2],  right[i + 2], s2);
      s3 = Math.fma(left[i + 3],  right[i + 3], s3);
      s4 = Math.fma(left[i + 4],  right[i + 4], s4);
      s5 = Math.fma(left[i + 5],  right[i + 5], s5);
      s6 = Math.fma(left[i + 6],  right[i + 6], s6);
      s7 = Math.fma(left[i + 7],  right[i + 7], s7);
    }
    return s0 + s1 + s2 + s3 + s4 + s5 + s6 + s7;
  }

The intensity is about 4x better, but still constant. My Intel Skylake processor is capable of 32 flops/cycle, so this code is clearly still not very efficient, but it’s actually the best you can do with any released version of OpenJDK at the time of writing.

The Vector API

I have been keeping an eye on the Vector API incubating in Project Panama for some time, and have only recently got round to kicking the tires. I wrote some benchmarks earlier in the year but ran into, as one should expect of a project in active development, bugs in FMA and vector box elimination, which limited the value of writing about the benchmarks at the time. These bugs have been fixed for a long time now, and you can start to see how good this API is going to be.

Here’s a simple implementation which wouldn’t be legal for C2 (or Graal for that matter) to generate from the dot product loop. It relies on an accumulator vector: at each step of the loop, the next eight elements of each array are multiplied lane-wise and added into the accumulator with a single FMA.

  public float vector() {
    var sum = YMM_FLOAT.zero(); // YMM_FLOAT: a 256-bit (eight lane) float species from the incubating API
    for (int i = 0; i < size; i += YMM_FLOAT.length()) {
      var l = YMM_FLOAT.fromArray(left, i);
      var r = YMM_FLOAT.fromArray(right, i);
      sum = l.fma(r, sum);
    }
    return sum.addAll();
  }

This loop can be unrolled, but it seems that this must be done manually for the sake of stability. The unroll below uses four accumulators and results in a huge boost in throughput.

  private float vectorUnrolled() {
    var sum1 = YMM_FLOAT.zero();
    var sum2 = YMM_FLOAT.zero();
    var sum3 = YMM_FLOAT.zero();
    var sum4 = YMM_FLOAT.zero();
    int width = YMM_FLOAT.length();
    for (int i = 0; i < size; i += width * 4) {
      sum1 = YMM_FLOAT.fromArray(left, i).fma(YMM_FLOAT.fromArray(right, i), sum1);
      sum2 = YMM_FLOAT.fromArray(left, i + width).fma(YMM_FLOAT.fromArray(right, i + width), sum2);
      sum3 = YMM_FLOAT.fromArray(left, i + width * 2).fma(YMM_FLOAT.fromArray(right, i + width * 2), sum3);
      sum4 = YMM_FLOAT.fromArray(left, i + width * 3).fma(YMM_FLOAT.fromArray(right, i + width * 3), sum4);
    }
    return sum1.addAll() + sum2.addAll() + sum3.addAll() + sum4.addAll();
  }

This plot doesn’t quite do justice to how large the difference is, and you could be forgiven for thinking the performance converges. In fact, presenting the data like this is a great way to mislead people! The absolute difference narrows, but the relative performance is more or less constant. Looking at intensity gives a much better picture, and intensity is invariant to size (until memory bandwidth is saturated).

The first thing to notice is that the intensity gets nowhere near 32 flops/cycle, and that’s because my chip can’t load data fast enough to keep the two FMA ports busy. Skylake chips can do two loads per cycle, which is enough for one FMA between two vectors and the accumulator. Since the arrays are effectively streamed, there is no chance to reuse any loads, so the absolute maximum intensity is 50% capacity, or just 16 flops/cycle.

In the unrolled vector code, the intensity hits 12 flops/cycle just before 4096 elements. 4096 is a special number because 2 * 4096 * 4 = 32kB is the capacity of L1 cache. The peak followed by a rapid decrease suggests that the code is fast enough to be hitting memory bandwidth: if L1 were larger or L2 were faster, the intensity could be sustained. This is great, and the performance counters available with -prof perfnorm corroborate this.

In the vanilla loop and the scalar unrolled loop, the cycles per instruction (CPI) reaches a maximum long before the arrays breach L1 cache. The latency of an instruction depends on where its operands come from, increasing the further from L1 cache the data has to travel, so if CPI is the same for arrays either side of the magical 4096 element threshold, then memory cannot be the limiting factor. The unrolled vector loop shows a very sharp increase, suggesting a strong dependency on load speed. Similarly, L1-dcache-load-misses predictably increase sharply once the arrays are no longer L1 resident, but this is correlated with a drop in intensity only in the unrolled vector implementation. It’s short lived, but the unrolled vector code, albeit with bounds checks disabled, is efficient enough for the CPU not to be the bottleneck.

Take a look at my benchmarks and raw data. The JDK used was built from the vectorIntrinsics branch of the Project Panama OpenJDK fork, run with JMH 1.20 on Ubuntu 16.04 LTS, on a 4 core i7-6700HQ processor.

Stages

The conventional wisdom is that the stages of a task should be pipelined, so you don’t need to wait for the completion of one stage before the next is started. It surprises me that you can sometimes do better by performing each stage of a pipeline in short batches. Useful optimisation opportunities can arise from this phenomenon, with only minor code changes. I recently applied this principle to implement fast batch iterators in RoaringBitmap.

I came across a discussion about shuffling arrays on Twitter, stimulated by a blog post by Daniel Lemire. Imagine you want to randomly shuffle the contents of an array. One approach is to iterate over the array in reverse and, at each index i, generate a random index j smaller than i and swap the elements at i and j. Here’s some benchmark code to measure how long this takes for an assortment of swapping strategies, including one where the swaps are just precomputed and looked up in an array.


  @Benchmark
  public void shuffle(Blackhole bh) {
    for (int i = data.length; i > 1; i--)
      swap(data, i - 1, op.applyAsInt(i));
    bh.consume(data);
  }

  // XOR swap: no temporary variable needed, though it would zero the element if i == j
  private static void swap(int[] arr, int i, int j) {
    arr[i] ^= arr[j];
    arr[j] ^= arr[i];
    arr[i] ^= arr[j];
  }

There is a large difference between the version where the random swap is precomputed and when the swap is computed on the fly with ThreadLocalRandom.nextInt.
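
For context, op is presumably an IntUnaryOperator (it is applied with applyAsInt) chosen by the benchmark’s mode parameter; the two strategies look roughly like the sketch below, which is not the exact benchmark setup (java.util.concurrent.ThreadLocalRandom and java.util.function.IntUnaryOperator are assumed):

  // THREAD_LOCAL_RANDOM: compute each swap target on the fly.
  IntUnaryOperator threadLocalRandom = bound -> ThreadLocalRandom.current().nextInt(bound);

  // PRECOMPUTED: generate all of the random indices up front and merely look them up.
  int[] precomputed = new int[data.length + 1];
  for (int i = 2; i <= data.length; ++i) {
    precomputed[i] = ThreadLocalRandom.current().nextInt(i);
  }
  IntUnaryOperator lookup = bound -> precomputed[bound];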

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit Param: mode Param: size
shuffle thrpt 1 10 2198.459182 274.965189 ops/s THREAD_LOCAL_RANDOM 65536
shuffle thrpt 1 10 1015.796005 16.225480 ops/s THREAD_LOCAL_RANDOM 131072
shuffle thrpt 1 10 7300.732038 46.788234 ops/s PRECOMPUTED 65536
shuffle thrpt 1 10 3828.021096 450.874537 ops/s PRECOMPUTED 131072

The difference is large, but a lot more work is being done when the random indices are computed on the fly. A good measure of efficiency per unit of work is cycles per instruction (CPI). Running the benchmark with -prof perfnorm shows that these benchmarks are at parity for cycles per instruction: if throughput is lower when the random numbers are generated on the fly, it’s because there are more instructions to execute.

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit Param: mode Param: size
shuffle:CPI thrpt 1 1 0.427028 NaN #/op THREAD_LOCAL_RANDOM 65536
shuffle:CPI thrpt 1 1 0.447793 NaN #/op THREAD_LOCAL_RANDOM 131072
shuffle:CPI thrpt 1 1 0.477202 NaN #/op PRECOMPUTED 65536
shuffle:CPI thrpt 1 1 0.565153 NaN #/op PRECOMPUTED 131072

Nevertheless, instruction profiling with -prof perfasm shows that the code is qualitatively different when computing the next swap index is cheap. When there is random number generation to do, most of the time is attributed to the mov instructions in the swap, or to the instructions just after them (probably because of profiling skid). For example, with the smaller array:


  0.04%    0.00%  │   ││  0x00007fa009c0a8f9: xor    0x10(%rsi,%rdx,4),%r10d  
 15.31%   13.18%  │   ││  0x00007fa009c0a8fe: mov    %r10d,0xc(%rsi,%rcx,4)  
  3.43%    3.05%  │   ││  0x00007fa009c0a903: xor    0x10(%rsi,%rdx,4),%r10d  
  5.37%    5.92%  │   ││  0x00007fa009c0a908: mov    %r10d,0x10(%rsi,%rdx,4)  
  4.15%    4.22%  │   ││  0x00007fa009c0a90d: xor    %r10d,0xc(%rsi,%rcx,4)  
 10.80%    8.80%  │   ││  0x00007fa009c0a912: cmp    $0x1,%r9d ; probably skid

The key difference in the precomputed case is that the loop is unrolled with several isomorphic chains of instructions. None of the loads seem to be quite so expensive according to the sampled frequencies.


  0.08%    0.16%  │      0x00007fda2dc0dfb2: cmp    %r10d,%r9d
                  │      0x00007fda2dc0dfb5: jae    0x00007fda2dc0e264
  0.00%    0.00%  │      0x00007fda2dc0dfbb: xor    0x10(%rdx,%r9,4),%edi
  2.90%    2.89%  │      0x00007fda2dc0dfc0: mov    %edi,0xc(%rdx,%r11,4)
  0.48%    0.33%  │      0x00007fda2dc0dfc5: xor    0x10(%rdx,%r9,4),%edi
  0.45%    0.48%  │      0x00007fda2dc0dfca: mov    %edi,0x10(%rdx,%r9,4)
  0.56%    0.46%  │      0x00007fda2dc0dfcf: xor    %edi,0xc(%rdx,%r11,4)
  4.29%    3.88%  │      0x00007fda2dc0dfd4: mov    0x8(%rdx,%r11,4),%edi
  0.03%    0.01%  │      0x00007fda2dc0dfd9: mov    0x8(%rsi,%r11,4),%r9d
  1.38%    1.46%  │      0x00007fda2dc0dfde: mov    %r11d,%ebx
  0.02%    0.01%  │      0x00007fda2dc0dfe1: add    $0xfffffffe,%ebx   

  0.63%    0.61%  │      0x00007fda2dc0dfe4: cmp    %r10d,%r9d
                  │      0x00007fda2dc0dfe7: jae    0x00007fda2dc0e26f
  0.00%    0.01%  │      0x00007fda2dc0dfed: xor    0x10(%rdx,%r9,4),%edi
  2.60%    2.38%  │      0x00007fda2dc0dff2: mov    %edi,0x8(%rdx,%r11,4)
  0.58%    0.51%  │      0x00007fda2dc0dff7: xor    0x10(%rdx,%r9,4),%edi
  0.90%    0.96%  │      0x00007fda2dc0dffc: mov    %edi,0x10(%rdx,%r9,4)
  0.68%    0.66%  │      0x00007fda2dc0e001: xor    %edi,0x8(%rdx,%r11,4)
  4.85%    4.17%  │      0x00007fda2dc0e006: mov    0x4(%rdx,%r11,4),%edi
  0.01%    0.02%  │      0x00007fda2dc0e00b: mov    0x4(%rsi,%r11,4),%r9d
  1.12%    0.95%  │      0x00007fda2dc0e010: mov    %r11d,%ecx
  0.01%    0.00%  │      0x00007fda2dc0e013: add    $0xfffffffd,%ecx  

  1.02%    1.02%  │      0x00007fda2dc0e016: cmp    %r10d,%r9d
                  │      0x00007fda2dc0e019: jae    0x00007fda2dc0e267
  0.01%    0.01%  │      0x00007fda2dc0e01f: xor    0x10(%rdx,%r9,4),%edi
  2.47%    2.10%  │      0x00007fda2dc0e024: mov    %edi,0x4(%rdx,%r11,4)
  0.69%    0.57%  │      0x00007fda2dc0e029: xor    0x10(%rdx,%r9,4),%edi
  1.37%    1.50%  │      0x00007fda2dc0e02e: mov    %edi,0x10(%rdx,%r9,4)
  0.77%    0.83%  │      0x00007fda2dc0e033: xor    %edi,0x4(%rdx,%r11,4)
  4.28%    3.85%  │      0x00007fda2dc0e038: mov    (%rdx,%r11,4),%edi
  0.03%    0.02%  │      0x00007fda2dc0e03c: mov    (%rsi,%r11,4),%r9d
  1.14%    0.97%  │      0x00007fda2dc0e040: mov    %r11d,%ebx
  0.01%    0.00%  │      0x00007fda2dc0e043: add    $0xfffffffc,%ebx  

With unrolling, some of each chain can take place concurrently, and if there is a cache miss in one chain, it won’t stall the progress of the other chains. Without this capacity for parallelism, a cache miss during the swap will stop all work from progressing. As the probability of a cache miss increases, the cost of the load bottleneck in the swap should increase: this can be stressed by increasing the size of the array. With a large (100M) array, there’s a good chance of a cache miss virtually all the time. CPI increases in both cases, markedly so with the precomputed swaps, but throughput converges: access to main memory has become the bottleneck.

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit Param: mode Param: size
shuffle:CPI thrpt 1 1 1.354325 NaN #/op THREAD_LOCAL_RANDOM 100000000
shuffle:CPI thrpt 1 1 3.854150 NaN #/op PRECOMPUTED 100000000

The perfasm output points to the first load in the generated swap as the bottleneck: notice the large cost on the mov instruction.


  0.15%    0.24%  │      ││  0x00007f8405c0a264: cmp    %r9d,%edx
                  │      ││  0x00007f8405c0a267: jae    0x00007f8405c0a350
  0.10%    0.11%  │      ││  0x00007f8405c0a26d: xor    0x10(%r11,%rdx,4),%eax  
 73.97%   63.58%  │      ││  0x00007f8405c0a272: mov    %eax,0xc(%r11,%rcx,4)  
  2.46%    1.87%  │      ││  0x00007f8405c0a277: xor    0x10(%r11,%rdx,4),%eax 
  1.42%    0.67%  │      ││  0x00007f8405c0a27c: mov    %eax,0x10(%r11,%rdx,4) 
  2.19%    1.44%  │      ││  0x00007f8405c0a281: xor    %eax,0xc(%r11,%rcx,4) 
  2.16%    1.37%  │      ││  0x00007f8405c0a286: cmp    $0x1,%edi

With precomputed swaps, there is no single bottleneck, and my intuition is that there is some concurrency, despite the higher CPI. This is a long way from being proven.


 10.33%   11.23%   ││  0x00007fdb35c09250: mov    %r9d,0xc(%rsi,%r10,4)  
  0.41%    0.45%   ││  0x00007fdb35c09255: xor    0x10(%rsi,%r11,4),%r9d  
  0.36%    0.25%   ││  0x00007fdb35c0925a: mov    %r9d,0x10(%rsi,%r11,4)  
  0.42%    0.42%   ││  0x00007fdb35c0925f: xor    %r9d,0xc(%rsi,%r10,4)  
  0.51%    0.66%   ││  0x00007fdb35c09264: mov    0x8(%rsi,%r10,4),%r9d  
  0.03%    0.09%   ││  0x00007fdb35c09269: mov    0x8(%r13,%r10,4),%r11d 
  0.25%    0.20%   ││  0x00007fdb35c0926e: mov    %r10d,%r8d
  0.03%    0.15%   ││  0x00007fdb35c09271: add    $0xfffffffe,%r8d  
  0.19%    0.17%   ││  0x00007fdb35c09275: cmp    %ebx,%r11d
                   ││  0x00007fdb35c09278: jae    0x00007fdb35c09440
  0.02%    0.06%   ││  0x00007fdb35c0927e: xor    0x10(%rsi,%r11,4),%r9d  
 10.40%   10.66%   ││  0x00007fdb35c09283: mov    %r9d,0x8(%rsi,%r10,4) 
  0.41%    0.35%   ││  0x00007fdb35c09288: xor    0x10(%rsi,%r11,4),%r9d 
  0.41%    0.30%   ││  0x00007fdb35c0928d: mov    %r9d,0x10(%rsi,%r11,4) 
  0.45%    0.39%   ││  0x00007fdb35c09292: xor    %r9d,0x8(%rsi,%r10,4)  
  0.48%    0.60%   ││  0x00007fdb35c09297: mov    0x4(%rsi,%r10,4),%r9d  
  0.03%    0.06%   ││  0x00007fdb35c0929c: mov    0x4(%r13,%r10,4),%r11d 
  0.06%    0.11%   ││  0x00007fdb35c092a1: mov    %r10d,%edi
  0.02%    0.16%   ││  0x00007fdb35c092a4: add    $0xfffffffd,%edi   
  0.25%    0.20%   ││  0x00007fdb35c092a7: cmp    %ebx,%r11d

This can be exploited with a hybrid approach, so that the random numbers can be generated on the fly without creating a single bottleneck. The random swaps are generated on the fly and written into a small buffer; once the buffer is full, the swaps are applied. This should “decouple” the random number generation from the swapping code, and should allow some of the swaps to be performed independently. Concretely:


  @Benchmark
  public void shuffleBuffered(Blackhole bh) {
    for (int i = data.length; i - unroll > 1; i -= unroll) {
      for (int j = 0; j < buffer.length; ++j) {
        buffer[j] = op.applyAsInt(i - j);
      }
      for (int j = 0; j < buffer.length; ++j) {
        swap(data, i - j - 1, buffer[j]);
      }
    }
    bh.consume(data);
  }

There’s not much to be gained (or lost) from this until the array gets quite large, but it’s a relatively interesting outcome. CPI is on the whole improved, and throughput improves as a function of buffer size, so long as the buffer is small.

Mode Benchmark 16 32 64 128 256 (columns are buffer sizes)
PRECOMPUTED shuffle 0.30639 0.296566 0.309829 0.312449 0.311183
PRECOMPUTED shuffle:CPI 3.004183 3.126903 2.989748 2.987508 3.000369
THREAD_LOCAL_RANDOM shuffle 0.271536 0.266418 0.271858 0.265593 0.264507
THREAD_LOCAL_RANDOM shuffle:CPI 1.303454 1.328127 1.300731 1.32857 1.377559
THREAD_LOCAL_RANDOM shuffleBuffered 0.296098 0.324416 0.346934 0.353246 0.35277
THREAD_LOCAL_RANDOM shuffleBuffered:CPI 0.96738 0.937101 0.893673 0.87786 0.874607

Frankly, I can’t think of anything less interesting than scrambling an array, but this observation made me think there may be something in the idea of decoupling stages of work in certain scenarios. After going through the otherwise pointless exercise above, and confident in the potential advantages of batching, I decided it might be worthwhile to port the batch iteration feature implemented in the Go version of RoaringBitmap to Java. This turned out to be hugely profitable, speeding up iteration by between 2x and 10x. If you use RoaringBitmap, it might be worth switching to these new batch iterators.
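
Usage of the batch iterators looks something like the sketch below (assuming the getBatchIterator method available in recent RoaringBitmap releases; the buffer size is arbitrary and consume stands in for whatever work you do per bit):

  RoaringBitmap bitmap = RoaringBitmap.bitmapOf(1, 2, 3, 1 << 20);
  int[] buffer = new int[256];
  BatchIterator iterator = bitmap.getBatchIterator();
  while (iterator.hasNext()) {
    // nextBatch fills the buffer with set bits and returns how many it wrote
    int count = iterator.nextBatch(buffer);
    for (int i = 0; i < count; ++i) {
      consume(buffer[i]);
    }
  }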

This topic is explored in more depth by Daniel Lemire in Fast Random Integer Generation in an Interval.

Collecting Rocks and Benchmarks

For as long as I can remember, I have been interested in rocks; I have hundreds of them in storage. Rocks are interesting because they hold little clues about processes nobody has ever seen happen. For instance, one of the first rocks I ever took an interest in was a smooth granite pebble, which I collected on a walk when I was six. Let’s be honest, as a six year old it was the pattern on the surface of the rock that caught my eye, but this truly was a fascinating rock because it shouldn’t have been smooth and it shouldn’t have been in England. It probably came from Norway, and while it’s possible that a Norwegian brought the rock to England, it’s highly unlikely.

The best possible explanation of the rock’s existence at that point in space and time was that there was once a glacier covering Norway, the North Sea, and Northern England, moving in that direction. The glacier carried the pebble, and many others like it, and dumped it as glacial outwash in the English Midlands. Nobody alive saw this happen, but if this happened, Scotland (relieved of the weight of an ice sheet) should still be rising in altitude, and there should be clay in the Midlands but not in the South. It’s likely that there was an ice age, not just because I found a granite pebble, but because Scotland is rising, and the English Midlands are covered in clay. You may lack the tools, funds, or time to do so, but you can apply this process to virtually anything to figure out how something happened or works.

If you’re an application developer, as I am, it’s highly unlikely that you wrote the platform you use, so you probably don’t really understand it. The vast majority of the platform you use was written by other people over what might as well be geological timescales. Benchmarks are a lot like rocks in that they can reveal implementation details you aren’t otherwise party to, and, with the right trade offs, you may be able to harness these in your applications. Collecting rocks doesn’t make a geologist, and I think you need to be quite inquisitive for benchmarking to work out, because you need to seek corroborating evidence: just as the presence of a pebble is scant support for an ice age, a single benchmark score doesn’t say much about anything.

If you’re interested in the JVM, I think instruction profiling is essential because it gives so much away. For instance, you may not appreciate the significance of the choice of garbage collector, but you’ll see lots of strange instructions in some benchmarks if you profile them, and you may have the curiosity to ask what they do and what put them there. If you don’t profile, you won’t really know the boundaries of validity of whatever your observation is. You could find that changing your garbage collector invalidates your findings.

Partly because I want the information on this site to be basically correct, but also to illustrate how conclusions can be jumped to because a benchmark score confirms a personal bias, I’ll look again at a couple of very superficial benchmarks I wrote about last year, to do with polynomial hash codes. The measurements were probably fine, but I didn’t really get to the bottom of the issue and I could easily have found a faster implementation if I had looked a bit harder.

Polynomial Hash Codes

The Java String.hashCode and Arrays.hashCode methods are implemented as the dot product of the data and a vector of powers of 31.

    // Arrays.java
    public static int hashCode(int a[]) {
        if (a == null)
            return 0;

        int result = 1;
        for (int element : a)
            result = 31 * result + element;

        return result;
    }

Since String caches its hash code, you’d be hard pressed to make use of the following optimisations, but I found I learned something about what C2 does and doesn’t do by attempting to optimise it. First of all, what does the built in hash code actually do? You can only really find out with instruction profiling, which you can enable in JMH with -prof perfasm on Linux or -prof xperfasm on Windows.
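
The same profilers can be attached when running JMH programmatically, roughly as in the sketch below (the include pattern is hypothetical; the classes come from org.openjdk.jmh.profile, org.openjdk.jmh.runner and org.openjdk.jmh.runner.options):

    public static void main(String... args) throws RunnerException {
        // equivalent to passing -prof perfasm on the command line
        Options options = new OptionsBuilder()
                .include("HashCode.*")                   // hypothetical benchmark name pattern
                .addProfiler(LinuxPerfAsmProfiler.class) // or LinuxPerfNormProfiler for -prof perfnorm
                .build();
        new Runner(options).run();
    }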

The bulk of the sampled instructions are in this block below:


           0x000002720f204af2: add     edi,dword ptr [rax+rbx*4+10h]
  2.37%    0x000002720f204af6: mov     r10d,edi
  1.72%    0x000002720f204af9: shl     r10d,5h
  2.06%    0x000002720f204afd: sub     r10d,edi
  3.77%    0x000002720f204b00: add     r10d,dword ptr [rax+rbx*4+14h]
  3.93%    0x000002720f204b05: mov     r8d,r10d
  0.01%    0x000002720f204b08: shl     r8d,5h
  3.80%    0x000002720f204b0c: sub     r8d,r10d
  3.98%    0x000002720f204b0f: add     r8d,dword ptr [rax+rbx*4+18h]
  4.12%    0x000002720f204b14: mov     r10d,r8d
           0x000002720f204b17: shl     r10d,5h
  3.78%    0x000002720f204b1b: sub     r10d,r8d
  3.75%    0x000002720f204b1e: add     r10d,dword ptr [rax+rbx*4+1ch]
  3.81%    0x000002720f204b23: mov     r8d,r10d
           0x000002720f204b26: shl     r8d,5h
  4.04%    0x000002720f204b2a: sub     r8d,r10d
  4.15%    0x000002720f204b2d: add     r8d,dword ptr [rax+rbx*4+20h]
  3.98%    0x000002720f204b32: mov     r10d,r8d
           0x000002720f204b35: shl     r10d,5h
  4.27%    0x000002720f204b39: sub     r10d,r8d
  3.95%    0x000002720f204b3c: add     r10d,dword ptr [rax+rbx*4+24h]
  3.77%    0x000002720f204b41: mov     r8d,r10d
           0x000002720f204b44: shl     r8d,5h
  4.01%    0x000002720f204b48: sub     r8d,r10d
  4.02%    0x000002720f204b4b: add     r8d,dword ptr [rax+rbx*4+28h]
  4.11%    0x000002720f204b50: mov     ecx,r8d
           0x000002720f204b53: shl     ecx,5h
  4.08%    0x000002720f204b56: sub     ecx,r8d
  4.31%    0x000002720f204b59: add     ecx,dword ptr [rax+rbx*4+2ch]

The first thing to ask is: where is the multiplication? There is none; it has been replaced by a left shift and a subtraction.


    public int StrengthReduction() {
        int result = 1;
        for (int i = 0; i < data.length; ++i) {
            result = (result << 5) - result + data[i];
        }
        return result;
    }

This is the compiler trying to be helpful, because shifts and subtractions are cheaper than multiplications, and since 31 is 32 - 1 this transformation is possible. The code is one long chain of instructions: notice the register dependencies in the assembly below:


  4.15%    0x000002720f204b2d: add     r8d,dword ptr [rax+rbx*4+20h]
  3.98%    0x000002720f204b32: mov     r10d,r8d
           0x000002720f204b35: shl     r10d,5h
  4.27%    0x000002720f204b39: sub     r10d,r8d

The addition must complete before the contents of r8d are available for the move, the left shift waits for the move, and the subtraction waits for the shift. No two elements of the array are ever processed simultaneously. The dependency can be broken by manually unrolling the loop, a trick first suggested by Peter Levart, which I came across on Daniel Lemire’s blog:


    @Benchmark
    public int Unrolled() {
        if (data == null)
            return 0;

        int result = 1;
        int i = 0;
        for (; i + 7 < data.length; i += 8) {
            result = 31 * 31 * 31 * 31 * 31 * 31 * 31 * 31 * result
                   + 31 * 31 * 31 * 31 * 31 * 31 * 31 * data[i]
                   + 31 * 31 * 31 * 31 * 31 * 31 * data[i + 1]
                   + 31 * 31 * 31 * 31 * 31 * data[i + 2]
                   + 31 * 31 * 31 * 31 * data[i + 3]
                   + 31 * 31 * 31 * data[i + 4]
                   + 31 * 31 * data[i + 5]
                   + 31 * data[i + 6]
                   + data[i + 7]
                    ;
        }
        for (; i < data.length; i++) {
            result = 31 * result + data[i];
        }
        return result;
    }

Weirdly, this implementation does very well (this isn’t new: there has been a ticket for it for several years). Without even bothering with a throughput score (the money shot comes at the end), the profiling output shows that this must be much better. The assembly is quite hard to read because it’s full of tricks I didn’t know existed, but look out for the hexadecimal constants and convince yourself that several are simply powers of 31. The multiplication by 31 is strength reduced to a left shift and a subtraction again.


  0.26%    0x000001d67bdd3c8e: mov     r8d,94446f01h
  0.01%    0x000001d67bdd3c94: jmp     1d67bdd3cb1h
           0x000001d67bdd3c96: nop     word ptr [rax+rax+0h]
           0x000001d67bdd3ca0: mov     edi,r11d
  0.03%    0x000001d67bdd3ca3: vmovq   r14,xmm0
  0.42%    0x000001d67bdd3ca8: mov     ebp,dword ptr [rsp+70h]
  7.14%    0x000001d67bdd3cac: vmovq   rbx,xmm1          
  0.01%    0x000001d67bdd3cb1: cmp     edi,r9d
           0x000001d67bdd3cb4: jnb     1d67bdd3d6ch
  0.04%    0x000001d67bdd3cba: imul    r10d,dword ptr [rcx+rdi*4+10h],67e12cdfh ; Another strength reduction trick
  7.74%    0x000001d67bdd3cc3: add     r10d,r8d                                 ; 1742810335 * x + 2487512833
  2.69%    0x000001d67bdd3cc6: mov     r11d,edi                                 
  0.09%    0x000001d67bdd3cc9: add     r11d,7h                                  
  0.46%    0x000001d67bdd3ccd: cmp     r11d,r9d                             
           0x000001d67bdd3cd0: jnb     1d67bdd3dbah
  6.82%    0x000001d67bdd3cd6: vmovq   xmm1,rbx
  0.61%    0x000001d67bdd3cdb: mov     dword ptr [rsp+70h],ebp
  0.06%    0x000001d67bdd3cdf: vmovq   xmm0,r14          
  0.46%    0x000001d67bdd3ce4: mov     r14,qword ptr [r15+70h]
  6.60%    0x000001d67bdd3ce8: mov     r11d,edi
  0.67%    0x000001d67bdd3ceb: add     r11d,8h 
  0.04%    0x000001d67bdd3cef: movsxd  rax,edi
  0.41%    0x000001d67bdd3cf2: add     edi,0fh
  6.87%    0x000001d67bdd3cf5: mov     edx,dword ptr [rcx+rax*4+28h]
  0.68%    0x000001d67bdd3cf9: imul    r8d,dword ptr [rcx+rax*4+14h],34e63b41h ; multiply by 887503681
  0.67%    0x000001d67bdd3d02: add     r8d,r10d   ; --------------------------
  7.30%    0x000001d67bdd3d05: mov     r10d,edx   ; Multiply by 31
  0.63%    0x000001d67bdd3d08: shl     r10d,5h    ;
  0.08%    0x000001d67bdd3d0c: sub     r10d,edx   ; --------------------------
  0.73%    0x000001d67bdd3d0f: imul    edx,dword ptr [rcx+rax*4+24h],3c1h     ; multiply by 961
  7.47%    0x000001d67bdd3d17: imul    ebp,dword ptr [rcx+rax*4+20h],745fh    ; multiply by 29791 
  0.56%    0x000001d67bdd3d1f: imul    esi,dword ptr [rcx+rax*4+1ch],0e1781h  ; multiply by 923521 
  7.02%    0x000001d67bdd3d27: imul    ebx,dword ptr [rcx+rax*4+18h],1b4d89fh ; multiply by 28629151 
  0.57%    0x000001d67bdd3d2f: add     r8d,ebx
  6.99%    0x000001d67bdd3d32: add     r8d,esi
  0.66%    0x000001d67bdd3d35: add     r8d,ebp
  0.14%    0x000001d67bdd3d38: add     r8d,edx
  0.91%    0x000001d67bdd3d3b: add     r8d,r10d
  7.04%    0x000001d67bdd3d3e: add     r8d,dword ptr [rcx+rax*4+2ch] ; add the data value at offset 7
  1.73%    0x000001d67bdd3d43: test    dword ptr [r14],eax  
  0.06%    0x000001d67bdd3d46: cmp     edi,r9d
           0x000001d67bdd3d49: jnl     1d67bdd3c15h      
  0.45%    0x000001d67bdd3d4f: imul    r8d,r8d,94446f01h ; multiply by 2487512833 (coprime to 31, follow r8d backwards)
 11.90%    0x000001d67bdd3d56: cmp     r11d,r9d

It’s probably not worth deciphering all the tricks in the code above, but notice that there is a lot of parallelism: the chains of signed multiplications target different registers and are independent. This code is much faster.
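
If you want to check the constants rather than take my word for it, a quick loop prints the powers of 31 as they wrap around a 32-bit int; the output matches the values annotated above (0x3c1, 0x745f, 0xe1781, 0x1b4d89f, 0x34e63b41, 0x67e12cdf, 0x94446f01):

    int power = 1;
    for (int i = 1; i <= 8; ++i) {
        power *= 31; // wraps modulo 2^32, just like the compiled code
        System.out.println("31^" + i + " = 0x" + Integer.toHexString(power));
    }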

I wrote the code below in July last year to try to parallelise this calculation. At the expense of precomputing the powers of 31 up to some fixed length, such as the maximum length of strings in your database, it’s quite fast.


    private final int[] coefficients;

    public FixedLengthHashCode(int maxLength) {
        this.coefficients = new int[maxLength + 1];
        coefficients[0] = 1;
        for (int i = 1; i <= maxLength; ++i) {
            coefficients[i] = 31 * coefficients[i - 1];
        }
    }

    public int hashCode(int[] value) {
        final int max = value.length;
        int result = coefficients[max];
        for (int i = 0; i < value.length && i < coefficients.length - 1; ++i) {
            result += coefficients[max - i - 1] * value[i];
        }
        return result;
    }

I was non-committal in the original post, but I sort of claimed this code was vectorised without even bothering to look at the disassembly. It’s scalar, but it’s much more parallel than the unrolled version, and all the clever strength reductions and dependencies are gone.


  0.15%    0x0000022d8e6825e0: movsxd  rbx,ecx
  0.07%    0x0000022d8e6825e3: mov     rdx,rsi
  3.57%    0x0000022d8e6825e6: sub     rdx,rbx
  0.08%    0x0000022d8e6825e9: mov     r10d,dword ptr [r9+rbx*4+10h]
  0.18%    0x0000022d8e6825ee: imul    r10d,dword ptr [rdi+rdx*4+0ch]
  0.15%    0x0000022d8e6825f4: add     r10d,r8d
  4.25%    0x0000022d8e6825f7: mov     r11d,dword ptr [r9+rbx*4+14h]
  0.14%    0x0000022d8e6825fc: imul    r11d,dword ptr [rdi+rdx*4+8h]
  0.19%    0x0000022d8e682602: add     r11d,r10d
  1.31%    0x0000022d8e682605: mov     r10d,dword ptr [r9+rbx*4+18h]
  3.93%    0x0000022d8e68260a: imul    r10d,dword ptr [rdi+rdx*4+4h]
  0.22%    0x0000022d8e682610: add     r10d,r11d
  0.94%    0x0000022d8e682613: mov     r8d,dword ptr [r9+rbx*4+1ch]
  0.05%    0x0000022d8e682618: imul    r8d,dword ptr [rdi+rdx*4]
  3.68%    0x0000022d8e68261d: add     r8d,r10d
  1.02%    0x0000022d8e682620: mov     r10d,dword ptr [r9+rbx*4+20h]
  0.19%    0x0000022d8e682625: imul    r10d,dword ptr [rdi+rdx*4+0fffffffffffffffch]
  0.61%    0x0000022d8e68262b: add     r10d,r8d
  4.71%    0x0000022d8e68262e: mov     r11d,dword ptr [r9+rbx*4+24h]
  0.08%    0x0000022d8e682633: imul    r11d,dword ptr [rdi+rdx*4+0fffffffffffffff8h]
  0.82%    0x0000022d8e682639: add     r11d,r10d
  5.57%    0x0000022d8e68263c: mov     r10d,dword ptr [r9+rbx*4+28h]
  0.04%    0x0000022d8e682641: imul    r10d,dword ptr [rdi+rdx*4+0fffffffffffffff4h]
  0.68%    0x0000022d8e682647: add     r10d,r11d
  4.67%    0x0000022d8e68264a: mov     r11d,dword ptr [r9+rbx*4+2ch]
  0.08%    0x0000022d8e68264f: imul    r11d,dword ptr [rdi+rdx*4+0fffffffffffffff0h]
  0.45%    0x0000022d8e682655: add     r11d,r10d
  4.50%    0x0000022d8e682658: mov     r10d,dword ptr [r9+rbx*4+30h]
  0.20%    0x0000022d8e68265d: imul    r10d,dword ptr [rdi+rdx*4+0ffffffffffffffech]
  0.37%    0x0000022d8e682663: add     r10d,r11d
  3.82%    0x0000022d8e682666: mov     r8d,dword ptr [r9+rbx*4+34h]
  0.05%    0x0000022d8e68266b: imul    r8d,dword ptr [rdi+rdx*4+0ffffffffffffffe8h]
  0.42%    0x0000022d8e682671: add     r8d,r10d
  4.18%    0x0000022d8e682674: mov     r10d,dword ptr [r9+rbx*4+38h]
  0.02%    0x0000022d8e682679: imul    r10d,dword ptr [rdi+rdx*4+0ffffffffffffffe4h]
  0.25%    0x0000022d8e68267f: add     r10d,r8d
  5.11%    0x0000022d8e682682: mov     r11d,dword ptr [r9+rbx*4+3ch]
  0.03%    0x0000022d8e682687: imul    r11d,dword ptr [rdi+rdx*4+0ffffffffffffffe0h]
  0.28%    0x0000022d8e68268d: add     r11d,r10d
  4.88%    0x0000022d8e682690: mov     r10d,dword ptr [r9+rbx*4+40h]
  0.09%    0x0000022d8e682695: imul    r10d,dword ptr [rdi+rdx*4+0ffffffffffffffdch]
  0.21%    0x0000022d8e68269b: add     r10d,r11d
  4.56%    0x0000022d8e68269e: mov     r8d,dword ptr [r9+rbx*4+44h]
  0.02%    0x0000022d8e6826a3: imul    r8d,dword ptr [rdi+rdx*4+0ffffffffffffffd8h]
  0.18%    0x0000022d8e6826a9: add     r8d,r10d
  4.73%    0x0000022d8e6826ac: mov     r10d,dword ptr [r9+rbx*4+48h]
  0.06%    0x0000022d8e6826b1: imul    r10d,dword ptr [rdi+rdx*4+0ffffffffffffffd4h]
  0.10%    0x0000022d8e6826b7: add     r10d,r8d
  4.12%    0x0000022d8e6826ba: mov     r8d,dword ptr [r9+rbx*4+4ch]

That blog post really was lazy. There’s a bit of a problem with the access pattern because the coefficients are accessed in reverse order, and at an offset: it’s too complicated for the optimiser. The code below is just a dot product and it should come as no surprise that it’s faster.


    private int[] coefficients;
    private int seed;

    void init(int size) {
        coefficients = new int[size]; 
        coefficients[size - 1] = 1;
        for (int i = size - 2; i >= 0; --i) {
            coefficients[i] = 31 * coefficients[i + 1];
        }
        seed = 31 * coefficients[0];
    }

    @Benchmark
    public int Vectorised() {
        int result = seed;
        for (int i = 0; i < data.length && i < coefficients.length; ++i) {
            result += coefficients[i] * data[i];
        }
        return result;
    }

The code vectorises, but the reduction isn’t as good as it could be with handwritten code.


  0.22%    0x000001d9c2e0f320: vmovdqu ymm0,ymmword ptr [rdi+rsi*4+70h]
  2.31%    0x000001d9c2e0f326: vpmulld ymm0,ymm0,ymmword ptr [r11+rsi*4+70h]
  0.61%    0x000001d9c2e0f32d: vmovdqu ymm1,ymmword ptr [rdi+rsi*4+50h]
  2.61%    0x000001d9c2e0f333: vpmulld ymm9,ymm1,ymmword ptr [r11+rsi*4+50h]
  0.53%    0x000001d9c2e0f33a: vmovdqu ymm1,ymmword ptr [rdi+rsi*4+30h]
  2.07%    0x000001d9c2e0f340: vpmulld ymm10,ymm1,ymmword ptr [r11+rsi*4+30h]
  0.60%    0x000001d9c2e0f347: vmovdqu ymm1,ymmword ptr [rdi+rsi*4+10h]
  2.33%    0x000001d9c2e0f34d: vpmulld ymm11,ymm1,ymmword ptr [r11+rsi*4+10h]
  0.61%    0x000001d9c2e0f354: vphaddd ymm7,ymm11,ymm11
  3.04%    0x000001d9c2e0f359: vphaddd ymm7,ymm7,ymm8
  3.56%    0x000001d9c2e0f35e: vextracti128 xmm8,ymm7,1h
  0.53%    0x000001d9c2e0f364: vpaddd  xmm7,xmm7,xmm8
  1.56%    0x000001d9c2e0f369: vmovd   xmm8,r8d
  1.77%    0x000001d9c2e0f36e: vpaddd  xmm8,xmm8,xmm7
  0.93%    0x000001d9c2e0f372: vmovd   edx,xmm8
  0.27%    0x000001d9c2e0f376: vphaddd ymm2,ymm10,ymm10
  2.75%    0x000001d9c2e0f37b: vphaddd ymm2,ymm2,ymm6
  2.32%    0x000001d9c2e0f380: vextracti128 xmm6,ymm2,1h
  1.95%    0x000001d9c2e0f386: vpaddd  xmm2,xmm2,xmm6
  0.63%    0x000001d9c2e0f38a: vmovd   xmm6,edx
  0.50%    0x000001d9c2e0f38e: vpaddd  xmm6,xmm6,xmm2
  7.76%    0x000001d9c2e0f392: vmovd   edx,xmm6
  0.22%    0x000001d9c2e0f396: vphaddd ymm5,ymm9,ymm9
  2.68%    0x000001d9c2e0f39b: vphaddd ymm5,ymm5,ymm1
  0.34%    0x000001d9c2e0f3a0: vextracti128 xmm1,ymm5,1h
  6.27%    0x000001d9c2e0f3a6: vpaddd  xmm5,xmm5,xmm1
  0.88%    0x000001d9c2e0f3aa: vmovd   xmm1,edx
  0.92%    0x000001d9c2e0f3ae: vpaddd  xmm1,xmm1,xmm5
  7.85%    0x000001d9c2e0f3b2: vmovd   edx,xmm1
  0.43%    0x000001d9c2e0f3b6: vphaddd ymm4,ymm0,ymm0
  2.59%    0x000001d9c2e0f3bb: vphaddd ymm4,ymm4,ymm3
  0.34%    0x000001d9c2e0f3c0: vextracti128 xmm3,ymm4,1h
  5.72%    0x000001d9c2e0f3c6: vpaddd  xmm4,xmm4,xmm3
  0.80%    0x000001d9c2e0f3ca: vmovd   xmm3,edx
  0.58%    0x000001d9c2e0f3ce: vpaddd  xmm3,xmm3,xmm4
  8.09%    0x000001d9c2e0f3d2: vmovd   r8d,xmm3

Using JDK9, this results in a 3x throughput gain over the built-in Arrays.hashCode, and that includes the cost of doubling the number of bytes to process and a suboptimal reduction phase. This is going to be a prime candidate for the Vector API, where a vector of powers of 31 could be multiplied by 31^8 on each iteration, before multiplying by a vector of the next 8 data elements.
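
To sketch the idea, and only the idea: YMM_INT is a hypothetical integer species with the same fromArray, mul, add and addAll operations used with YMM_FLOAT earlier, a scalar mul is assumed to broadcast, the length is assumed to be a multiple of eight, and 31^length is assumed to be precomputed as in FixedLengthHashCode. Here the accumulator is scaled by 31^8 rather than the coefficient vector, which is equivalent.

    private static final int[] POWERS_OF_31 = // {31^7, 31^6, ..., 31, 1}
        {0x67e12cdf, 0x34e63b41, 0x1b4d89f, 0xe1781, 0x745f, 0x3c1, 0x1f, 0x1};
    private static final int POW_31_8 = 0x94446f01; // 31^8 modulo 2^32, the constant in the JIT output above

    public int vectorPolynomialHashCode(int[] data, int pow31n) {
        var coefficients = YMM_INT.fromArray(POWERS_OF_31, 0);
        var acc = YMM_INT.zero();
        for (int i = 0; i < data.length; i += YMM_INT.length()) {
            // shift everything accumulated so far up by 31^8, then add the weighted next block
            acc = acc.mul(POW_31_8).add(YMM_INT.fromArray(data, i).mul(coefficients));
        }
        return pow31n + acc.addAll(); // pow31n accounts for the leading 31^length term
    }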

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit Param: size
BuiltIn thrpt 1 10 3.439980 0.024231 ops/us 256
BuiltIn thrpt 1 10 0.842664 0.009019 ops/us 1024
BuiltIn thrpt 1 10 0.103638 0.003070 ops/us 8192
FixedLength thrpt 1 10 7.421648 0.169558 ops/us 256
FixedLength thrpt 1 10 1.966116 0.044398 ops/us 1024
FixedLength thrpt 1 10 0.249666 0.009994 ops/us 8192
StrengthReduction thrpt 1 10 3.417248 0.045733 ops/us 256
StrengthReduction thrpt 1 10 0.840830 0.011091 ops/us 1024
StrengthReduction thrpt 1 10 0.104265 0.001537 ops/us 8192
Unrolled thrpt 1 10 6.353271 0.042330 ops/us 256
Unrolled thrpt 1 10 1.672845 0.035389 ops/us 1024
Unrolled thrpt 1 10 0.205575 0.009375 ops/us 8192
Vectorised thrpt 1 10 10.958270 0.109993 ops/us 256
Vectorised thrpt 1 10 2.948918 0.081830 ops/us 1024
Vectorised thrpt 1 10 0.358819 0.027316 ops/us 8192

Floating Point: Manual Unrolling or Autovectorisation?

Java is very strict about floating point arithmetic. There’s even a keyword, strictfp, which allows you to make it stricter, ensuring you’ll get a potentially less precise but identical result wherever you run your program. There’s actually a JEP to make this the only behaviour. JLS 15.18.2 states clearly that floating point addition is not associative in Java, which means that JIT compilers have to respect the order of operations on double[]s and float[]s when compiling code, even if it turns out the order is actually arbitrary to the application. This means they can’t vectorise or even pipeline loops containing interdependent floating point additions, can’t distribute multiplications over additions, and can’t telescopically collapse multiplications: the list goes on. If you want this code to go faster you must work around the JIT compiler somehow. In C++, it’s possible to choose to treat floating point numbers as if they had the algebraic properties of the reals, for instance with -ffast-math. Using this option is often maligned, perhaps because it assumes much more than associativity, and because it applies to entire compilation units. A proposal for fastfp semantics at class and method scope was withdrawn a long time ago.
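
A two-line demonstration of why the order is observable:

  double leftToRight = (0.1 + 0.2) + 0.3;  // 0.6000000000000001
  double reassociated = 0.1 + (0.2 + 0.3); // 0.6
  System.out.println(leftToRight == reassociated); // prints false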

Prior to the arrival of the Vector API, I’m interested in which vectorisation transformations are possible automatically. This is because I’ve found that many bottlenecks in applications are related to low single threaded performance, and others come from the premature use of threads to solve these performance problems. Imagine how powerful multithreading could be if it were only used after saturating the intensity available on each core.

There are certain things you just can’t do in Java because of the language specification, and one of these is getting C2 to pipeline two or more dependent vaddpd instructions, so the maximum achievable floating point intensity is quite low. In fact, you can be better off giving up on autovectorisation and unrolling the loop yourself, allowing the pipelining of scalar additions. This depends intimately on your microarchitecture.

Double Precision Sum Product

C2 can automatically vectorise a sum product between two double[]s. It does this by issuing eight vmovdqu loads at once, then four vmulpd multiplications at once, and then a long in-order scalar reduction. It does this with the simplest possible code, potentially rewarding simplicity with decent performance:


  @Benchmark
  public double vectorisedDoubleSumProduct() {
    double sp = 0D;
    for (int i = 0; i < xd.length && i < yd.length; ++i) {
      sp += xd[i] * yd[i];
    }
    return sp;
  }

Perfasm shows the problematic scalar reduction quite clearly:


....[Hottest Region 1]..............................................................................
c2, com.openkappa.simd.sumproduct.generated.SumProduct_vectorisedDoubleSumProduct_jmhTest::vectorisedDoubleSumProduct_thrpt_jmhStub, version 164 (238 bytes) 

           0x00000144f724f8e9: mov     r8d,r9d
           0x00000144f724f8ec: add     r8d,0fffffff1h
           0x00000144f724f8f0: cmp     r9d,r8d
           0x00000144f724f8f3: mov     edx,80000000h
           0x00000144f724f8f8: cmovl   r8d,edx
           0x00000144f724f8fc: cmp     r11d,r8d
           0x00000144f724f8ff: jnl     144f724f7dfh
           0x00000144f724f905: nop     word ptr [rax+rax+0h]  ;*iload_3 {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::vectorisedDoubleSumProduct@13 (line 43)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_vectorisedDoubleSumProduct_jmhTest::vectorisedDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  0.00%    0x00000144f724f910: vmovdqu ymm1,ymmword ptr [rsi+r11*8+70h]
  1.07%    0x00000144f724f917: vmovdqu ymm2,ymmword ptr [rax+r11*8+70h]
  2.49%    0x00000144f724f91e: vmovdqu ymm3,ymmword ptr [rsi+r11*8+50h]
  0.67%    0x00000144f724f925: vmovdqu ymm4,ymmword ptr [rax+r11*8+50h]
  0.75%    0x00000144f724f92c: vmovdqu ymm5,ymmword ptr [rsi+r11*8+30h]
  0.01%    0x00000144f724f933: vmovdqu ymm6,ymmword ptr [rax+r11*8+30h]
  1.52%    0x00000144f724f93a: vmovdqu ymm7,ymmword ptr [rsi+r11*8+10h]
                                                         ;*daload {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::vectorisedDoubleSumProduct@34 (line 44)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_vectorisedDoubleSumProduct_jmhTest::vectorisedDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  0.01%    0x00000144f724f941: vmovdqu ymm8,ymmword ptr [rax+r11*8+10h]
           0x00000144f724f948: vmulpd  ymm9,ymm2,ymm1
  0.02%    0x00000144f724f94c: vmulpd  ymm7,ymm8,ymm7
  1.50%    0x00000144f724f950: vmulpd  ymm8,ymm4,ymm3
  0.02%    0x00000144f724f954: vmulpd  ymm10,ymm6,ymm5
  0.00%    0x00000144f724f958: vaddsd  xmm0,xmm0,xmm7
  1.51%    0x00000144f724f95c: vpshufd xmm1,xmm7,0eh
  0.00%    0x00000144f724f961: vaddsd  xmm0,xmm0,xmm1
  5.91%    0x00000144f724f965: vextractf128 xmm4,ymm7,1h
  0.01%    0x00000144f724f96b: vaddsd  xmm0,xmm0,xmm4
  6.56%    0x00000144f724f96f: vpshufd xmm1,xmm4,0eh
  0.01%    0x00000144f724f974: vaddsd  xmm0,xmm0,xmm1
  5.75%    0x00000144f724f978: vaddsd  xmm0,xmm0,xmm10
  5.71%    0x00000144f724f97d: vpshufd xmm4,xmm10,0eh
  0.00%    0x00000144f724f983: vaddsd  xmm0,xmm0,xmm4
  5.89%    0x00000144f724f987: vextractf128 xmm6,ymm10,1h
  0.01%    0x00000144f724f98d: vaddsd  xmm0,xmm0,xmm6
  6.01%    0x00000144f724f991: vpshufd xmm4,xmm6,0eh
  0.01%    0x00000144f724f996: vaddsd  xmm0,xmm0,xmm4
  5.95%    0x00000144f724f99a: vaddsd  xmm0,xmm0,xmm8
  5.98%    0x00000144f724f99f: vpshufd xmm1,xmm8,0eh
  0.01%    0x00000144f724f9a5: vaddsd  xmm0,xmm0,xmm1
  5.99%    0x00000144f724f9a9: vextractf128 xmm5,ymm8,1h
  0.00%    0x00000144f724f9af: vaddsd  xmm0,xmm0,xmm5
  5.93%    0x00000144f724f9b3: vpshufd xmm1,xmm5,0eh
  0.00%    0x00000144f724f9b8: vaddsd  xmm0,xmm0,xmm1
  6.05%    0x00000144f724f9bc: vaddsd  xmm0,xmm0,xmm9
  5.92%    0x00000144f724f9c1: vpshufd xmm3,xmm9,0eh
  0.00%    0x00000144f724f9c7: vaddsd  xmm0,xmm0,xmm3
  6.05%    0x00000144f724f9cb: vextractf128 xmm2,ymm9,1h
  0.00%    0x00000144f724f9d1: vaddsd  xmm0,xmm0,xmm2
  6.05%    0x00000144f724f9d5: vpshufd xmm3,xmm2,0eh
  0.00%    0x00000144f724f9da: vaddsd  xmm0,xmm0,xmm3    ;*dadd {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::vectorisedDoubleSumProduct@36 (line 44)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_vectorisedDoubleSumProduct_jmhTest::vectorisedDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  6.05%    0x00000144f724f9de: add     r11d,10h          ;*iinc {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::vectorisedDoubleSumProduct@38 (line 43)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_vectorisedDoubleSumProduct_jmhTest::vectorisedDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  0.00%    0x00000144f724f9e2: cmp     r11d,r8d
           0x00000144f724f9e5: jl      144f724f910h      ;*if_icmpge {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::vectorisedDoubleSumProduct@10 (line 43)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_vectorisedDoubleSumProduct_jmhTest::vectorisedDoubleSumProduct_thrpt_jmhStub@17 (line 119)
           0x00000144f724f9eb: mov     edx,r9d
           0x00000144f724f9ee: add     edx,0fffffffdh
           0x00000144f724f9f1: cmp     r9d,edx
           0x00000144f724f9f4: mov     r9d,80000000h
           0x00000144f724f9fa: cmovl   edx,r9d
  0.00%    0x00000144f724f9fe: cmp     r11d,edx
           0x00000144f724fa01: jl      144f724f7a4h
           0x00000144f724fa07: jmp     144f724f7dfh
           0x00000144f724fa0c: vxorpd  xmm0,xmm0,xmm0
           0x00000144f724fa10: jmp     144f724f807h
           0x00000144f724fa15: mov     edx,0ffffff86h
           0x00000144f724fa1a: mov     qword ptr [rsp+70h],rcx
           0x00000144f724fa1f: push    qword ptr [rsp+80h]
           0x00000144f724fa27: pop     qword ptr [rsp+78h]
           0x00000144f724fa2c: push    qword ptr [rsp+30h]
           0x00000144f724fa31: pop     qword ptr [rsp+28h]
....................................................................................................
 99.44%  <total for region 1>

Unrolling this will disable autovectorisation, but my Skylake chip can have eight scalar floating point additions in flight at once (two ports, each with four cycle latency).


  @Benchmark
  public double unrolledDoubleSumProduct() {
    double sp1 = 0D;
    double sp2 = 0D;
    double sp3 = 0D;
    double sp4 = 0D;
    for (int i = 0; i < xd.length && i < yd.length; i += 4) {
      sp1 += xd[i] * yd[i];
      sp2 += xd[i + 1] * yd[i + 1];
      sp3 += xd[i + 2] * yd[i + 2];
      sp4 += xd[i + 3] * yd[i + 3];
    }
    return sp1 + sp2 + sp3 + sp4;
  }

Looking at the perfasm output, you can see this code is scalar but the additions are interleaved without interdependencies. This is enough to take down a crippled target.


....[Hottest Region 1]..............................................................................
c2, com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub, version 162 (79 bytes) 

                                                         ;   {section_word}
           0x0000024719311ddb: lea     r9,[r12+rcx*8]
           0x0000024719311ddf: lea     rcx,[r12+rbx*8]
           0x0000024719311de3: cmp     r11d,4h
           0x0000024719311de7: jle     24719311ee1h
           0x0000024719311ded: mov     r8d,4h
           0x0000024719311df3: nop     word ptr [rax+rax+0h]
           0x0000024719311dfc: nop                       ;*iload {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@23 (line 55)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  4.20%    0x0000024719311e00: vmovsd  xmm0,qword ptr [rcx+r8*8+28h]
                                                         ;*daload {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@116 (line 59)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  8.93%    0x0000024719311e07: vmulsd  xmm0,xmm0,mmword ptr [r9+r8*8+28h]
 15.10%    0x0000024719311e0e: vmovsd  xmm1,qword ptr [rcx+r8*8+18h]
                                                         ;*daload {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@69 (line 57)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  3.95%    0x0000024719311e15: vmulsd  xmm1,xmm1,mmword ptr [r9+r8*8+18h]
  9.91%    0x0000024719311e1c: vmovsd  xmm2,qword ptr [rcx+r8*8+20h]
                                                         ;*daload {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@92 (line 58)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  3.97%    0x0000024719311e23: vmulsd  xmm2,xmm2,mmword ptr [r9+r8*8+20h]
 11.04%    0x0000024719311e2a: vmovsd  xmm3,qword ptr [rcx+r8*8+10h]
                                                         ;*daload {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@47 (line 56)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  3.86%    0x0000024719311e31: vmulsd  xmm3,xmm3,mmword ptr [r9+r8*8+10h]
  9.04%    0x0000024719311e38: vaddsd  xmm4,xmm4,xmm0    ;*dadd {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@118 (line 59)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  7.11%    0x0000024719311e3c: vaddsd  xmm7,xmm7,xmm3    ;*dadd {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@49 (line 56)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  7.03%    0x0000024719311e40: vaddsd  xmm5,xmm5,xmm2    ;*dadd {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@94 (line 58)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  7.29%    0x0000024719311e44: vaddsd  xmm6,xmm6,xmm1    ;*dadd {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@71 (line 57)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  3.58%    0x0000024719311e48: add     r8d,4h            ;*iinc {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@121 (line 55)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  4.39%    0x0000024719311e4c: cmp     r8d,r11d
  0.00%    0x0000024719311e4f: jl      24719311e00h      ;*if_icmpge {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@20 (line 55)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
           0x0000024719311e51: cmp     r8d,edi
           0x0000024719311e54: jnl     24719311c92h
           0x0000024719311e5a: nop                       ;*iload {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@23 (line 55)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
           0x0000024719311e5c: cmp     r8d,r10d
           0x0000024719311e5f: jnl     24719311f15h      ;*if_icmpge {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@30 (line 55)
....................................................................................................
 99.39%  <total for region 1>

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit Param: size
unrolledDoubleSumProduct thrpt 1 10 1912.140438 48.445308 ops/ms 1024
unrolledDoubleSumProduct thrpt 1 10 24.677122 0.510459 ops/ms 65536
vectorisedDoubleSumProduct thrpt 1 10 647.848021 10.508824 ops/ms 1024
vectorisedDoubleSumProduct thrpt 1 10 9.474097 0.479281 ops/ms 65536

So, if you realise that in your application the order of operations over your array is irrelevant, you can write a tiny bit of extra code and get multiplicatively better performance. These results were produced with JDK9. When I tried with JDK10, the sum product was not vectorised, presumably because it has been noticed that doing so is unprofitable. This benchmark can be seen in full context on GitHub.

Vertical Sum

I was motivated to write this post after Ioannis Tsakpinis shared a gist of a benchmark, having read a recent post about coaxing vectorisation into action for a simple floating point sum. That post was intended to be a prelude to a post about the wonders of paginated arrays. With a paginated array, autovectorisation pays off and is preferable to a manual unroll. Of course, the additions are still reordered in spite of their non-associativity, but I am working on the premise that this virtually never matters. I revisited this benchmark, with a paginated array this time.
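
By a paginated array I mean a double[][] of fixed-size pages, produced by something like the helper below (an illustrative sketch which assumes the length divides evenly by the page size):

  // Chop a flat array into fixed-size pages, so each inner loop runs over a
  // plain double[] starting at index zero, with no offset arithmetic.
  static double[][] paginate(double[] data, int pageSize) {
    double[][] pages = new double[data.length / pageSize][];
    for (int i = 0; i < pages.length; ++i) {
      pages[i] = Arrays.copyOfRange(data, i * pageSize, (i + 1) * pageSize);
    }
    return pages;
  }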


  @Benchmark // inspired by Ioannis' code
  public double reduceUnrolledPaginated() {
    double a0 = 0.0;
    double a1 = 0.0;
    double a2 = 0.0;
    double a3 = 0.0;
    for (int i = 0; i < paginated.length; ++i) {
      double[] page = paginated[i];
      for (int j = 0; j < paginated[0].length; j += 4) {
        a0 += page[j + 0];
        a1 += page[j + 1];
        a2 += page[j + 2];
        a3 += page[j + 3];
      }
    }
    return a0 + a1 + a2 + a3;
  }

  @Benchmark
  public double reducePaginated() {
    double[] buffer = Arrays.copyOf(paginated[0], paginated[0].length);
    for (int i = 1; i < paginated.length; ++i) {
      double[] page = paginated[i];
      for (int j = 0; j < page.length && j < buffer.length; ++j) {
        buffer[j] += page[j];
      }
    }
    return reduceUnrolled(buffer);
  }

Because the array is paginated, the inner loop needs no offset calculations, which is the perfect case for a vectorised loop. This is one reason why Java arrays should be paginated in application code.
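To be concrete about what pagination means here: the benchmark's setup code isn't reproduced in this post, but it amounts to something like the sketch below, where flat, size and pageSize are names chosen for illustration, and size is assumed to be a multiple of pageSize.


  // a sketch of the setup, not the benchmark's actual code
  int pageSize = 1024;  // a power of two, so random access needs only a shift and a mask
  int pageShift = Integer.numberOfTrailingZeros(pageSize);
  double[][] paginated = new double[size >>> pageShift][pageSize];
  for (int i = 0; i < size; ++i) {
    paginated[i >>> pageShift][i & (pageSize - 1)] = flat[i];
  }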

Benchmark                 Mode   Threads  Samples  Score        Score Error (99.9%)  Unit    Param: size
reducePaginated           thrpt  1        10       597.046492   23.080803            ops/ms  1024
reducePaginated           thrpt  1        10       42.801021    0.831318             ops/ms  65536
reducePaginated           thrpt  1        10       1.503510     0.187167             ops/ms  1048576
reduceUnrolledPaginated   thrpt  1        10       1311.433592  9.063721             ops/ms  1024
reduceUnrolledPaginated   thrpt  1        10       19.448202    0.503753             ops/ms  65536
reduceUnrolledPaginated   thrpt  1        10       1.052183     0.086555             ops/ms  1048576

Nevertheless, manual loop unrolling can be a significant boon for floating point arithmetic in Java. It feels dirty to me: it is one of those things people did by hand before my time, and compilers know how to do it, if they are allowed to. As long as there is no fastfp-style opt-out in Java, I imagine the practice is here to stay.

Iterating Over a Bitset in Java

How fast can you iterate over a bitset? Daniel Lemire published a benchmark recently in support of a strategy using the number of trailing zeroes to skip over empty bits. I have used the same technique in Java several times in my hobby project SplitMap and this is something I am keen to optimise. I think that the best strategy depends on what you want to do with the set bits, and how sparse and uniformly distributed they are. I argue that the cost of iteration is less important than the constraints your API imposes on the caller, and whether the caller is free to exploit patterns in the data.

C2 Generates Good Code

If you think C++ is much faster than Java, you either don’t know much about Java or do lots of floating point arithmetic. This isn’t about benchmarking C++ against Java, but comparing the compilation output of a C++ implementation with that of a Java implementation shows there won’t be much difference once your Java method gets hot. Only the time to reach peak performance differs, and that is amortised over the lifetime of an application. The trailing zeroes technique is probably the fastest in Java as well as in C++, but applying it too literally forgoes optimisations that could otherwise be applied to the callback.

Consider this C++ function, which invokes a callback for each set bit in a bitmap, compiled with GCC:


template <typename CALLBACK>
static void for_each(const long* bitmap, const int size, const CALLBACK& callback) {
    for (size_t k = 0; k < size; ++k) {
        long bitset = bitmap[k];
        while (bitset != 0) {
            // index of the lowest set bit, offset by the word's position in the bitmap
            callback((k * 64) + __builtin_ctzl(bitset));
            // clear the lowest set bit: bitset & -bitset isolates it
            bitset ^= (bitset & -bitset);
        }
    }
}

In the snippet of assembly below, taken from the loop kernel, the instruction tzcntl computes the position of the next set bit and blsr switches it off.


.L99:
  movq  %rdi, %rcx
  blsr  %ebx, %ebx
  call  _ZNSo3putEc
  movq  %rax, %rcx
  call  _ZNSo5flushEv
  testl  %ebx, %ebx
  je  .L96
.L100:
  xorl  %edx, %edx
  movq  %r12, %rcx
  tzcntl  %ebx, %edx
  addl  %ebp, %edx
  call  _ZNSolsEi
  movq  %rax, %rdi
  movq  (%rax), %rax
  movq  -24(%rax), %rax
  movq  240(%rdi,%rax), %rsi
  testq  %rsi, %rsi
  je  .L108
  cmpb  $0, 56(%rsi)
  jne  .L109
  movq  %rsi, %rcx
  call  _ZNKSt5ctypeIcE13_M_widen_initEv
  movq  (%rsi), %rax
  movl  $10, %edx
  movq  48(%rax), %rax
  cmpq  %r14, %rax
  je  .L99
  movq  %rsi, %rcx
  call  *%rax
  movsbl  %al, %edx
  jmp  .L99
  .p2align 4,,10

In Java, almost identical code is generated.


  public void forEach(long[] bitmap, IntConsumer consumer) {
    for (int i = 0; i < bitmap.length; ++i) {
      long word = bitmap[i];
      while (word != 0) {
        consumer.accept(Long.SIZE * i + Long.numberOfTrailingZeros(word));
        word ^= Long.lowestOneBit(word);
      }
    }
  }

The key difference is that the xor and blsi haven’t been fused into blsr, so the C++ code is probably slightly faster. In the perfasm sample below, a lambda which accumulates the contents of an array has been inlined into this loop: the add comes from the inlined lambda, but notice how little time is spent adding compared to computing the bit to switch off.


   .83%    0x000002d79d366a19: tzcnt   r9,rcx
  8.53%    0x000002d79d366a1e: add     r9d,ebx
  0.42%    0x000002d79d366a21: cmp     r9d,r8d
  0.00%    0x000002d79d366a24: jnb     2d79d366a4dh
  0.62%    0x000002d79d366a26: add     r10d,dword ptr [rdi+r9*4+10h]
 16.22%    0x000002d79d366a2b: vmovq   r11,xmm4
  6.68%    0x000002d79d366a30: mov     dword ptr [r11+10h],r10d
 27.92%    0x000002d79d366a34: blsi    r10,rcx
  0.55%    0x000002d79d366a39: xor     rcx,r10         
  0.10%    0x000002d79d366a3c: mov     r11,qword ptr [r15+70h]  
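
As an aside, the lowest set bit can be cleared in one step by and-ing with word - 1, instead of isolating it and xor-ing it out. I haven't verified what C2 emits for this exact loop, but on BMI capable hardware this idiom is the natural candidate for blsr; treat the variant below as a sketch rather than a measured improvement.


      // drop-in replacement for the inner loop of forEach above
      while (word != 0) {
        consumer.accept(Long.SIZE * i + Long.numberOfTrailingZeros(word));
        word &= word - 1; // clears the lowest set bit in a single operation
      }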

It is this Java code, and the impact the iteration strategy has on which optimisations can be applied to the IntConsumer, that this post focuses on. Different principles apply in C++, particularly around inlining and vectorisation opportunities, but this blog is about Java. Depending on what your callback does, you get different benchmark results and should make different choices about how to iterate: the cost of iteration just can’t be assessed in isolation.

Special Casing -1

Imagine you have an int[] containing data, and you are iterating over a mask or materialised predicate over that data. For each set bit, you want to add the corresponding entry in the array to a sum. In Java, that looks like this (you’ve already seen the generated assembly above):


  @Benchmark
  public int reduce() {
    // a single element array: a mutable accumulator the lambda is allowed to capture
    int[] result = new int[1];
    forEach(bitmap, i -> result[0] += data[i]);
    return result[0];
  }
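
The scenarios referred to in the results below are just differently populated bitmaps. The real benchmark parameterises this setup; the construction here is my guess at what the scenario names mean, purely for illustration.


  // not the benchmark's actual setup: a guess at what the scenario names mean
  long[] full = new long[1 << 10];
  Arrays.fill(full, -1L);            // FULL: every bit of every word is set

  long[] oneBitPerWord = new long[1 << 10];
  Arrays.fill(oneBitPerWord, 1L);    // ONE_BIT_PER_WORD: only the lowest bit of each word is set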

How fast can this get? That obviously depends on how full the bitset is: the worst case is a completely full bitset, and it couldn’t get much better than having only one bit set per word. The difference is noticeable, but the throughput scales by a factor smaller than the 64x difference in the number of set bits:

Benchmark  Mode   Threads  Samples  Score       Score Error (99.9%)  Unit    Param: scenario
reduce     thrpt  1        10       7.435909    0.017491             ops/ms  FULL
reduce     thrpt  1        10       260.305307  6.081961             ops/ms  ONE_BIT_PER_WORD

But the important code here, the callback itself, is barely optimised: there is no unrolling, no vectorisation, and the adds can’t be pipelined because each one depends on the blsi and xor that produced its index. We can do much better in some cases, and not much worse in others, just by treating -1 (a full word) as a special case, profiting from the optimisations that can then be applied inside the callback. Passing a second callback which consumes whole words costs a branch, but it’s often worth it. Here’s the iterator now:


  interface WordConsumer {
    void acceptWord(int wordIndex, long word);
  }

  public void forEach(long[] bitmap, IntConsumer intConsumer, WordConsumer wordConsumer) {
    for (int i = 0; i < bitmap.length; ++i) {
      long word = bitmap[i];
      if (word == -1L) {
        wordConsumer.acceptWord(i, word);
      } else {
        while (word != 0) {
          intConsumer.accept(Long.SIZE * i + Long.numberOfTrailingZeros(word));
          word ^= Long.lowestOneBit(word);
        }
      }
    }
  }

  @Benchmark
  public int reduceWithWordConsumer() {
    int[] result = new int[1];
    forEach(bitmap, i -> result[0] += data[i], (index, word) -> {
      if (word != -1L) {
        throw new IllegalStateException();
      }
      int sum = 0;
      for (int i = index * Long.SIZE; i < (index + 1) * Long.SIZE; ++i) {
        sum += data[i];
      }
      result[0] += sum;
    });
    return result[0];
  }

This really pays off when the bitset is full: that case improves 6x. However, the extra branch does seem to cost something in the one bit per word case, even though it is never taken there.

Benchmark               Mode   Threads  Samples  Score       Score Error (99.9%)  Unit    Param: scenario
reduce                  thrpt  1        10       7.401202    0.118648             ops/ms  FULL
reduce                  thrpt  1        10       261.682016  4.155856             ops/ms  ONE_BIT_PER_WORD
reduceWithWordConsumer  thrpt  1        10       43.972759   0.993264             ops/ms  FULL
reduceWithWordConsumer  thrpt  1        10       222.824868  4.877147             ops/ms  ONE_BIT_PER_WORD

We still don’t actually know the cost of the branch when it is taken every now and then. To estimate it, we need a new scenario (or new scenarios) which mixes full and sparse words. As you might expect, having the WordConsumer is great when one word in every few is full: the fast path is so much faster that it practically skips the word.

Benchmark               Mode   Threads  Samples  Score       Score Error (99.9%)  Unit    Param: scenario
reduce                  thrpt  1        10       157.358633  4.538679             ops/ms  SPARSE_16_FULL_WORDS
reduceWithWordConsumer  thrpt  1        10       257.041035  7.446404             ops/ms  SPARSE_16_FULL_WORDS

So in this scenario the branch has paid for itself. How? For the full words, the data dependency on blsi and xor has been replaced by a countable loop. Here’s the perfasm output. Notice two things: long runs of add instructions, and the vastly reduced percentage against blsi. The time is now spent adding numbers up, not switching off least significant bits. This feels like progress.


  0.05%    0x000001dd5b35af03: add     ebx,dword ptr [rdi+r9*4+10h]
  0.31%    0x000001dd5b35af08: add     ebx,dword ptr [rdi+r11*4+14h]
  0.32%    0x000001dd5b35af0d: add     ebx,dword ptr [rdi+r11*4+18h]
  0.33%    0x000001dd5b35af12: add     ebx,dword ptr [rdi+r11*4+1ch]
  0.37%    0x000001dd5b35af17: add     ebx,dword ptr [rdi+r11*4+20h]
  0.34%    0x000001dd5b35af1c: add     ebx,dword ptr [rdi+r11*4+24h]
  0.39%    0x000001dd5b35af21: add     ebx,dword ptr [rdi+r11*4+28h]
  0.36%    0x000001dd5b35af26: add     ebx,dword ptr [rdi+r11*4+2ch]
  0.34%    0x000001dd5b35af2b: add     ebx,dword ptr [rdi+r11*4+30h]
  0.35%    0x000001dd5b35af30: add     ebx,dword ptr [rdi+r11*4+34h]
  0.38%    0x000001dd5b35af35: add     ebx,dword ptr [rdi+r11*4+38h]
  0.36%    0x000001dd5b35af3a: add     ebx,dword ptr [rdi+r11*4+3ch]
  0.49%    0x000001dd5b35af3f: add     ebx,dword ptr [rdi+r11*4+40h]
  0.39%    0x000001dd5b35af44: add     ebx,dword ptr [rdi+r11*4+44h]
  0.42%    0x000001dd5b35af49: add     ebx,dword ptr [rdi+r11*4+48h]
  0.39%    0x000001dd5b35af4e: add     ebx,dword ptr [rdi+r11*4+4ch]
...
  2.39%    0x000001dd5b35afe9: tzcnt   r11,rbx
  2.65%    0x000001dd5b35afee: add     r11d,r10d         
  2.15%    0x000001dd5b35aff1: cmp     r11d,r9d
  0.00%    0x000001dd5b35aff4: jnb     1dd5b35b04dh
  2.29%    0x000001dd5b35aff6: add     r8d,dword ptr [rdi+r11*4+10h]
 11.03%    0x000001dd5b35affb: vmovq   r11,xmm0
  2.45%    0x000001dd5b35b000: mov     dword ptr [r11+10h],r8d  
  3.14%    0x000001dd5b35b004: mov     r11,qword ptr [r15+70h]
  2.18%    0x000001dd5b35b008: blsi    r8,rbx
  2.23%    0x000001dd5b35b00d: xor     rbx,r8

Heroically ploughing through the full words bit by bit, without the special case, tells a different story: blsi is up at 11%, which indicates that more time is spent iterating than evaluating the callback.


  6.98%    0x0000019f106c6799: tzcnt   r9,rdi
  3.47%    0x0000019f106c679e: add     r9d,ebx           
  1.65%    0x0000019f106c67a1: cmp     r9d,r10d
           0x0000019f106c67a4: jnb     19f106c67cdh
  1.67%    0x0000019f106c67a6: add     r11d,dword ptr [r8+r9*4+10h]
 11.45%    0x0000019f106c67ab: vmovq   r9,xmm2
  3.20%    0x0000019f106c67b0: mov     dword ptr [r9+10h],r11d  
 11.31%    0x0000019f106c67b4: blsi    r11,rdi
  1.71%    0x0000019f106c67b9: xor     rdi,r11           

This shows the cost of a data dependency in a loop. The operation we want to perform is associative, so we could even vectorise this. In C++ that might happen automatically, or could be ensured with intrinsics, but C2 has various heuristics: it won’t vectorise a simple reduction like this, and a 64 element loop would probably be on the short side for the cases it would vectorise.
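
Short of vectorisation, the dependency on a single accumulator inside the word callback can still be broken with the same multiple accumulator trick used for the floating point sums earlier. This is a sketch I haven't benchmarked, the method name is invented, and the sanity check on the word is omitted for brevity.


  @Benchmark
  public int reduceWithUnrolledWordConsumer() {
    int[] result = new int[1];
    forEach(bitmap, i -> result[0] += data[i], (index, word) -> {
      // four independent accumulators allow the adds to be pipelined
      int s0 = 0, s1 = 0, s2 = 0, s3 = 0;
      for (int i = index * Long.SIZE; i < (index + 1) * Long.SIZE; i += 4) {
        s0 += data[i];
        s1 += data[i + 1];
        s2 += data[i + 2];
        s3 += data[i + 3];
      }
      result[0] += s0 + s1 + s2 + s3;
    });
    return result[0];
  }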

Acknowledging Runs

You might be tempted to transfer even more control to the callback, by accumulating runs and then calling the callback once per run. It simplifies the code to exclude incomplete start and end words from the run.


  private interface RunConsumer {
    void acceptRun(int start, int end);
  }

  public void forEach(long[] bitmap, IntConsumer intConsumer, RunConsumer runConsumer) {
    int runStart = -1;
    for (int i = 0; i < bitmap.length; ++i) {
      long word = bitmap[i];
      if (word == -1L) {
        if (runStart == -1) {
          runStart = i;
        }
      } else {
        if (runStart != -1) {
          runConsumer.acceptRun(runStart * Long.SIZE, i * Long.SIZE);
          runStart = -1;
        }
        while (word != 0) {
          intConsumer.accept(Long.SIZE * i + Long.numberOfTrailingZeros(word));
          word ^= Long.lowestOneBit(word);
        }
      }
    }
    if (runStart != -1) {
      runConsumer.acceptRun(runStart * Long.SIZE, bitmap.length * Long.SIZE);
    }
  }

For a simple reduction, the extra complexity isn’t justified: you’re better off with the WordConsumer.

Benchmark               Mode   Threads  Samples  Score       Score Error (99.9%)  Unit    Param: scenario
reduce                  thrpt  1        10       160.502749  2.960568             ops/ms  SPARSE_16_FULL_WORDS
reduce                  thrpt  1        10       7.294747    0.186678             ops/ms  FULL
reduce                  thrpt  1        10       258.064511  8.902233             ops/ms  ONE_BIT_PER_WORD
reduce                  thrpt  1        10       159.613877  3.424432             ops/ms  SPARSE_1_16_WORD_RUN
reduceWithRunConsumer   thrpt  1        10       251.683131  6.799639             ops/ms  SPARSE_16_FULL_WORDS
reduceWithRunConsumer   thrpt  1        10       37.809154   0.723198             ops/ms  FULL
reduceWithRunConsumer   thrpt  1        10       218.133560  13.756779            ops/ms  ONE_BIT_PER_WORD
reduceWithRunConsumer   thrpt  1        10       140.896826  8.495777             ops/ms  SPARSE_1_16_WORD_RUN
reduceWithWordConsumer  thrpt  1        10       257.961783  5.892072             ops/ms  SPARSE_16_FULL_WORDS
reduceWithWordConsumer  thrpt  1        10       43.909471   0.601319             ops/ms  FULL
reduceWithWordConsumer  thrpt  1        10       213.731758  20.398077            ops/ms  ONE_BIT_PER_WORD
reduceWithWordConsumer  thrpt  1        10       258.280428  11.316647            ops/ms  SPARSE_1_16_WORD_RUN

It would be simplistic to measure this and conclude that the RunConsumer is a bad approach, though. There are several other dimensions to the problem:

  1. Vectorised callbacks
  2. Inlining failures preventing optimisations
  3. The number of runs and their lengths (i.e. your data and how you structure it)

Vectorisable Callbacks

There are real benefits to batching up callbacks if the workload in the callback can be vectorised. The code doesn’t need to get much more complicated to start benefiting from larger iteration batches. Mapping each set bit to the scaled square of the corresponding entry in the data array, and storing it into an output array, illustrates this.


  @Benchmark
  public void map(Blackhole bh) {
    forEach(bitmap, i -> output[i] = data[i] * data[i] * factor);
    bh.consume(output);
  }

  @Benchmark
  public void mapWithWordConsumer(Blackhole bh) {
    forEach(bitmap, i -> output[i] = data[i] * data[i] * factor, (WordConsumer) (index, word) -> {
      if (word != -1L) {
        throw new IllegalStateException();
      }
      for (int i = index * Long.SIZE; i < (index + 1) * Long.SIZE; ++i) {
        output[i] = data[i] * data[i] * factor;
      }
    });
    bh.consume(output);
  }

  @Benchmark
  public void mapWithRunConsumer(Blackhole bh) {
    forEach(bitmap, i -> output[i] = data[i] * data[i] * factor, (RunConsumer) (start, end) -> {
      for (int i = start; i < end; ++i) {
        output[i] = data[i] * data[i] * factor;
      }
    });
    bh.consume(output);
  }

The RunConsumer does much better in the full case, is never much worse than the WordConsumer, and is always better than the basic strategy, even when there is only one run in the entire bitset or when there are a few full words in an otherwise sparse bitset.

Benchmark            Mode   Threads  Samples  Score       Score Error (99.9%)  Unit    Param: scenario
map                  thrpt  1        10       127.876662  3.411741             ops/ms  SPARSE_16_FULL_WORDS
map                  thrpt  1        10       10.598974   0.022404             ops/ms  FULL
map                  thrpt  1        10       126.434666  18.608547            ops/ms  ONE_BIT_PER_WORD
map                  thrpt  1        10       115.977840  20.449258            ops/ms  SPARSE_1_16_WORD_RUN
mapWithRunConsumer   thrpt  1        10       199.186167  8.138446             ops/ms  SPARSE_16_FULL_WORDS
mapWithRunConsumer   thrpt  1        10       64.230868   2.871434             ops/ms  FULL
mapWithRunConsumer   thrpt  1        10       219.963063  4.257561             ops/ms  ONE_BIT_PER_WORD
mapWithRunConsumer   thrpt  1        10       203.403804  6.907366             ops/ms  SPARSE_1_16_WORD_RUN
mapWithWordConsumer  thrpt  1        10       229.822235  5.276084             ops/ms  SPARSE_16_FULL_WORDS
mapWithWordConsumer  thrpt  1        10       48.381990   3.845642             ops/ms  FULL
mapWithWordConsumer  thrpt  1        10       218.907803  5.331011             ops/ms  ONE_BIT_PER_WORD
mapWithWordConsumer  thrpt  1        10       240.795280  10.204818            ops/ms  SPARSE_1_16_WORD_RUN

This is simply because the callback was vectorised, and the style of the RunConsumer API allows this to be exploited. This can be seen with perfasm: both the WordConsumer and RunConsumer callbacks are in fact vectorised, but notice that there are two hot regions in the WordConsumer benchmark, the iteration and the callback, and the boundary between them is crossed often. The RunConsumer implementation, on the other hand, spends most of its time in the callback.

WordConsumer


....[Hottest Region 1]..............................................................................
c2, com.openkappa.simd.iterate.generated.BitSetIterator_mapWithWordConsumer_jmhTest::mapWithWordConsumer_thrpt_jmhStub, version 172 (227 bytes) 
...
  1.55%    0x000001c2aa13c790: vmovdqu ymm1,ymmword ptr [r9+r10*4+10h]
  0.15%    0x000001c2aa13c797: vpmulld ymm1,ymm1,ymm1
  3.72%    0x000001c2aa13c79c: vpmulld ymm1,ymm1,ymm2
 16.02%    0x000001c2aa13c7a1: vmovdqu ymmword ptr [rdx+r10*4+10h],ymm1
  1.69%    0x000001c2aa13c7a8: movsxd  r8,r10d
  1.55%    0x000001c2aa13c7ab: vmovdqu ymm1,ymmword ptr [r9+r8*4+30h]
  1.46%    0x000001c2aa13c7b2: vpmulld ymm1,ymm1,ymm1
  1.71%    0x000001c2aa13c7b7: vpmulld ymm1,ymm1,ymm2
  3.20%    0x000001c2aa13c7bc: vmovdqu ymmword ptr [rdx+r8*4+30h],ymm1
  0.07%    0x000001c2aa13c7c3: add     r10d,10h          
  1.70%    0x000001c2aa13c7c7: cmp     r10d,r11d
           0x000001c2aa13c7ca: jl      1c2aa13c790h      
  0.02%    0x000001c2aa13c7cc: mov     r8,qword ptr [r15+70h]  
  1.50%    0x000001c2aa13c7d0: test    dword ptr [r8],eax  
  0.04%    0x000001c2aa13c7d3: cmp     r10d,r11d
           0x000001c2aa13c7d6: jl      1c2aa13c78ah
  0.05%    0x000001c2aa13c7d8: mov     r11d,dword ptr [rsp+5ch]
  0.02%    0x000001c2aa13c7dd: add     r11d,39h
  1.57%    0x000001c2aa13c7e1: mov     r8d,ecx
  0.02%    0x000001c2aa13c7e4: cmp     r8d,r11d
  0.06%    0x000001c2aa13c7e7: mov     ecx,80000000h
  0.02%    0x000001c2aa13c7ec: cmovl   r11d,ecx
  1.50%    0x000001c2aa13c7f0: cmp     r10d,r11d
           0x000001c2aa13c7f3: jnl     1c2aa13c819h
  0.02%    0x000001c2aa13c7f5: nop                       
  0.06%    0x000001c2aa13c7f8: vmovdqu ymm1,ymmword ptr [r9+r10*4+10h]
  0.21%    0x000001c2aa13c7ff: vpmulld ymm1,ymm1,ymm1
  2.16%    0x000001c2aa13c804: vpmulld ymm1,ymm1,ymm2
  1.80%    0x000001c2aa13c809: vmovdqu ymmword ptr [rdx+r10*4+10h],ymm1
...
 53.26%  <total for region 1>

RunConsumer


....[Hottest Region 1]..............................................................................
c2, com.openkappa.simd.iterate.BitSetIterator$$Lambda$44.1209658195::acceptRun, version 166 (816 bytes) 
...
  0.92%    0x0000016658954860: vmovdqu ymm0,ymmword ptr [rdx+r8*4+10h]
  1.31%    0x0000016658954867: vpmulld ymm0,ymm0,ymm0
  1.74%    0x000001665895486c: vpmulld ymm0,ymm0,ymm1
  4.55%    0x0000016658954871: vmovdqu ymmword ptr [rdi+r8*4+10h],ymm0
  0.69%    0x0000016658954878: movsxd  rcx,r8d
  0.01%    0x000001665895487b: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+30h]
  0.41%    0x0000016658954881: vpmulld ymm0,ymm0,ymm0
  0.78%    0x0000016658954886: vpmulld ymm0,ymm0,ymm1
  0.83%    0x000001665895488b: vmovdqu ymmword ptr [rdi+rcx*4+30h],ymm0
  0.25%    0x0000016658954891: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+50h]
  1.29%    0x0000016658954897: vpmulld ymm0,ymm0,ymm0
  1.51%    0x000001665895489c: vpmulld ymm0,ymm0,ymm1
  3.65%    0x00000166589548a1: vmovdqu ymmword ptr [rdi+rcx*4+50h],ymm0
  0.54%    0x00000166589548a7: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+70h]
  0.31%    0x00000166589548ad: vpmulld ymm0,ymm0,ymm0
  0.47%    0x00000166589548b2: vpmulld ymm0,ymm0,ymm1
  1.11%    0x00000166589548b7: vmovdqu ymmword ptr [rdi+rcx*4+70h],ymm0
  0.28%    0x00000166589548bd: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+90h]
  1.17%    0x00000166589548c6: vpmulld ymm0,ymm0,ymm0
  1.89%    0x00000166589548cb: vpmulld ymm0,ymm0,ymm1
  3.56%    0x00000166589548d0: vmovdqu ymmword ptr [rdi+rcx*4+90h],ymm0
  0.73%    0x00000166589548d9: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+0b0h]
  0.21%    0x00000166589548e2: vpmulld ymm0,ymm0,ymm0
  0.34%    0x00000166589548e7: vpmulld ymm0,ymm0,ymm1
  1.29%    0x00000166589548ec: vmovdqu ymmword ptr [rdi+rcx*4+0b0h],ymm0
  0.33%    0x00000166589548f5: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+0d0h]
  0.97%    0x00000166589548fe: vpmulld ymm0,ymm0,ymm0
  1.90%    0x0000016658954903: vpmulld ymm0,ymm0,ymm1
  3.59%    0x0000016658954908: vmovdqu ymmword ptr [rdi+rcx*4+0d0h],ymm0
  0.82%    0x0000016658954911: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+0f0h]
  0.18%    0x000001665895491a: vpmulld ymm0,ymm0,ymm0
  0.29%    0x000001665895491f: vpmulld ymm0,ymm0,ymm1
  1.25%    0x0000016658954924: vmovdqu ymmword ptr [rdi+rcx*4+0f0h],ymm0
  0.33%    0x000001665895492d: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+110h]
  1.10%    0x0000016658954936: vpmulld ymm0,ymm0,ymm0
  2.11%    0x000001665895493b: vpmulld ymm0,ymm0,ymm1
  3.67%    0x0000016658954940: vmovdqu ymmword ptr [rdi+rcx*4+110h],ymm0
  0.93%    0x0000016658954949: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+130h]
  0.13%    0x0000016658954952: vpmulld ymm0,ymm0,ymm0
  0.25%    0x0000016658954957: vpmulld ymm0,ymm0,ymm1
  1.35%    0x000001665895495c: vmovdqu ymmword ptr [rdi+rcx*4+130h],ymm0
  0.32%    0x0000016658954965: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+150h]
  0.93%    0x000001665895496e: vpmulld ymm0,ymm0,ymm0
  2.16%    0x0000016658954973: vpmulld ymm0,ymm0,ymm1
  3.73%    0x0000016658954978: vmovdqu ymmword ptr [rdi+rcx*4+150h],ymm0
  0.95%    0x0000016658954981: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+170h]
  0.14%    0x000001665895498a: vpmulld ymm0,ymm0,ymm0
  0.21%    0x000001665895498f: vpmulld ymm0,ymm0,ymm1
  1.39%    0x0000016658954994: vmovdqu ymmword ptr [rdi+rcx*4+170h],ymm0
  0.29%    0x000001665895499d: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+190h]
  1.42%    0x00000166589549a6: vpmulld ymm0,ymm0,ymm0
  2.61%    0x00000166589549ab: vpmulld ymm0,ymm0,ymm1
  4.42%    0x00000166589549b0: vmovdqu ymmword ptr [rdi+rcx*4+190h],ymm0
  1.01%    0x00000166589549b9: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+1b0h]
  0.10%    0x00000166589549c2: vpmulld ymm0,ymm0,ymm0
  0.17%    0x00000166589549c7: vpmulld ymm0,ymm0,ymm1
  1.46%    0x00000166589549cc: vmovdqu ymmword ptr [rdi+rcx*4+1b0h],ymm0
  0.27%    0x00000166589549d5: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+1d0h]
 13.60%    0x00000166589549de: vpmulld ymm0,ymm0,ymm0
  3.51%    0x00000166589549e3: vpmulld ymm0,ymm0,ymm1
  4.69%    0x00000166589549e8: vmovdqu ymmword ptr [rdi+rcx*4+1d0h],ymm0
  1.00%    0x00000166589549f1: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+1f0h]
  0.11%    0x00000166589549fa: vpmulld ymm0,ymm0,ymm0
  0.15%    0x00000166589549ff: vpmulld ymm0,ymm0,ymm1
  1.46%    0x0000016658954a04: vmovdqu ymmword ptr [rdi+rcx*4+1f0h],ymm0
                                                         
  0.26%    0x0000016658954a0d: add     r8d,80h           
  0.01%    0x0000016658954a14: cmp     r8d,r10d
           0x0000016658954a17: jl      16658954860h      
  0.00%    0x0000016658954a1d: mov     r14,qword ptr [r15+70h]  
  0.06%    0x0000016658954a21: test    dword ptr [r14],eax  
  0.17%    0x0000016658954a24: cmp     r8d,r10d
           0x0000016658954a27: jl      16658954860h
           0x0000016658954a2d: mov     r10d,r9d
           0x0000016658954a30: add     r10d,0fffffff9h
           0x0000016658954a34: cmp     r9d,r10d
  0.00%    0x0000016658954a37: cmovl   r10d,ebx
           0x0000016658954a3b: cmp     r8d,r10d
           0x0000016658954a3e: jnl     16658954a61h      
           0x0000016658954a40: vmovdqu ymm0,ymmword ptr [rdx+r8*4+10h]
  0.14%    0x0000016658954a47: vpmulld ymm0,ymm0,ymm0
  0.05%    0x0000016658954a4c: vpmulld ymm0,ymm0,ymm1
  0.03%    0x0000016658954a51: vmovdqu ymmword ptr [rdi+r8*4+10h],ymm0
...
 96.10%  <total for region 1>

Inlining

So far, everything has been inlined. Java optimistically assumes a call site only has one implementation and aggressively inlines it, deoptimising and adding a guarded branch when it sees a second implementation, and deoptimising again to a virtual call when it sees a third. This usually doesn’t matter much, but here the cost of a virtual call not only dwarfs any savings from a better iteration strategy; it also prevents the optimisations that can only be applied once the callback is inlined. Once again, passing a batch of work into the callback largely mitigates this, because even if the call itself is virtual, the callback can still be hot and aggressively optimised in its own right. I haven’t benchmarked this because I think the point is self-evident to anyone who has read this far.
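
To make this concrete, here is an illustrative sketch, not something I have benchmarked, of how the call site inside forEach goes megamorphic. Each distinct lambda expression is a distinct IntConsumer implementation, so once a third implementation has been observed the call site falls back to a virtual call, which is not inlined.


  forEach(bitmap, i -> result[0] += data[i]);                    // monomorphic: inlined
  forEach(bitmap, i -> result[0] -= data[i]);                    // bimorphic: inlined behind a type check
  forEach(bitmap, i -> output[i] = data[i] * data[i] * factor);  // megamorphic: virtual call, no inlining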

Number of Runs

It’s clear from the benchmark results that the best choice of iteration strategy is sensitive not only to what you want to do with the data, but also to how the data is arranged. It is well documented in database literature that real data sets tend to contain runs. If you build a bitmap index on some attribute of your data and sort your data by that attribute, you will have as many bitmaps as there are attribute values, and each attribute value’s bitmap will contain a single run. This is almost true for any attribute correlated with the attribute chosen for the sort order, and completely untrue for uncorrelated attributes. There is a range of iteration strategies to choose from, and the best strategy for one index may not be the best for another.
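
As a toy illustration of why sorting produces runs, consider building one bitmap per attribute value over a column that has already been sorted by that attribute. The code below is mine, not part of the benchmarks, and assumes java.util.Map and java.util.HashMap.


  // rows are sorted by colour, so each value's bitmap contains a single contiguous run
  String[] colour = {"blue", "blue", "blue", "green", "green", "red", "red", "red"};
  Map<String, long[]> index = new HashMap<>();
  for (int row = 0; row < colour.length; ++row) {
    long[] bitmap = index.computeIfAbsent(colour[row], k -> new long[(colour.length + 63) / 64]);
    bitmap[row >>> 6] |= 1L << row;  // the shift is implicitly mod 64
  }
  // index.get("green") now has exactly bits 3 and 4 set: one run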

My benchmarks are available at github.