Population Count in Java

How do you count the bits set in a 32-bit integer? Since this can be done in a single instruction, popcnt, which is exposed by an intrinsic method in Java and several other languages, this is a completely academic question. Nevertheless, however futile, deriving an efficient expression is instructive.

A naive approach would be to check each of the 32 bits in sequence. This can be written in Java as follows:


  public static int populationCountCheckEachBit(int value) {
    int count = 0;
    for (int i = 0; i < Integer.SIZE; ++i) {
      if ((value & (1 << i)) != 0) {
        ++count;
      }
    }
    return count;
  }

This has a constant and high execution time, even when most of the bits are unset: there will always be 32 left shifts and 32 intersections. There is no loop-carried data dependency above, so it can probably be unrolled and pipelined, but even so it’s just too long to be practically useful. A less naive approach is to skip over the unset bits, which will actually be quite fast when the data is sparse.


  public static int populationCountSkipUnsetBits(int value) {
    int count = 0;
    while (value != 0) {
      value ^= value & -value;
      ++count;
    }
    return count;
  }

The code above isolates the lowest set bit and unsets it until no bits are left. In other languages, resetting the bit could compile to the blsr instruction, but C2 emits the blsi instruction and an xor here. This will do well for sparse data, but there is a loop-carried data dependency and the performance will be absolutely terrible for dense data (such as small negative numbers).
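To see why this works, trace one iteration on an example byte: two’s complement negation flips every bit above the lowest set bit, so the intersection isolates it, and the xor then clears it.


  //   value          = 0b01101000
  //  -value          = 0b10011000
  //   value & -value = 0b00001000  <- lowest set bit isolated
  //   value ^= ...   = 0b01100000  <- lowest set bit cleared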

Since an integer’s population count is the sum of the population counts of its constituent bytes, and a byte can only take 256 values, why not precompute a small lookup table containing the population count of each possible byte? Then, with four masks, three right shifts, four moves and three additions, the population count can be calculated.


   private static int[] LOOKUP = {
           0, 1, 1, 2, 1, 2, 2, 3,
           1, 2, 2, 3, 2, 3, 3, 4,
           1, 2, 2, 3, 2, 3, 3, 4,
           2, 3, 3, 4, 3, 4, 4, 5,
           1, 2, 2, 3, 2, 3, 3, 4,
           2, 3, 3, 4, 3, 4, 4, 5,
           2, 3, 3, 4, 3, 4, 4, 5,
           3, 4, 4, 5, 4, 5, 5, 6,
           1, 2, 2, 3, 2, 3, 3, 4,
           2, 3, 3, 4, 3, 4, 4, 5,
           2, 3, 3, 4, 3, 4, 4, 5,
           3, 4, 4, 5, 4, 5, 5, 6,
           2, 3, 3, 4, 3, 4, 4, 5,
           3, 4, 4, 5, 4, 5, 5, 6,
           3, 4, 4, 5, 4, 5, 5, 6,
           4, 5, 5, 6, 5, 6, 6, 7,
           1, 2, 2, 3, 2, 3, 3, 4,
           2, 3, 3, 4, 3, 4, 4, 5,
           2, 3, 3, 4, 3, 4, 4, 5,
           3, 4, 4, 5, 4, 5, 5, 6,
           2, 3, 3, 4, 3, 4, 4, 5,
           3, 4, 4, 5, 4, 5, 5, 6,
           3, 4, 4, 5, 4, 5, 5, 6,
           4, 5, 5, 6, 5, 6, 6, 7,
           2, 3, 3, 4, 3, 4, 4, 5,
           3, 4, 4, 5, 4, 5, 5, 6,
           3, 4, 4, 5, 4, 5, 5, 6,
           4, 5, 5, 6, 5, 6, 6, 7,
           3, 4, 4, 5, 4, 5, 5, 6,
           4, 5, 5, 6, 5, 6, 6, 7,
           4, 5, 5, 6, 5, 6, 6, 7,
           5, 6, 6, 7, 6, 7, 7, 8
   };

  public static int populationCountWithLookupTable(int value) {
    return LOOKUP[value & 0xFF]
         + LOOKUP[(value & 0xFF00) >>> 8]
         + LOOKUP[(value & 0xFF0000) >>> 16]
         + LOOKUP[(value & 0xFF000000) >>> 24];
  }
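Typing the table out by hand is error-prone; it could equally be generated when the class loads, exploiting the recurrence that a byte’s population count is that of itself shifted right once, plus its lowest bit. A sketch:


  private static final int[] LOOKUP = new int[256];

  static {
    for (int i = 1; i < LOOKUP.length; ++i) {
      // each byte has the count of its upper seven bits plus its lowest bit
      LOOKUP[i] = LOOKUP[i >>> 1] + (i & 1);
    }
  }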

This isn’t as stupid as it looks. The number of instructions is low and they can be pipelined easily. C2 obviously can’t autovectorise this, but I imagine it could end up being quite fast (if used in a loop) once the Vector API becomes a reality. Lemire and Muła devised a fast vectorised population count algorithm based on a lookup table of precalculated population counts for each nibble. Their algorithm is used by clang to calculate the population count of an array, but it is far beyond both the scope of this post and the capabilities of Java.

We can avoid storing the table, while using very few instructions, with a divide and conquer approach that writes the result in place. The first thing to notice is that the population count of N bits can be expressed in at most N bits. So, interpreting the integer as 16 fields of two bits each, we can calculate each 2-bit population count and store it in the same 2-bit field.

The masks 0x55555555 and 0xAAAAAAAA each have alternating bits and are logical complements. Remember that the population count is the sum of the population counts of the even bits and the odd bits. The code below calculates the number of set bits in each 2-bit field and stores the result in the same field. It works because each sum is at most two (0b10), which still fits in two bits, so no addition can carry into the neighbouring field.


     int output = (value & 0x55555555) // mask the even bits
                + ((value & 0xAAAAAAAA) >>> 1); // mask the odd bits and shift right so they line up with the even bits

By way of example, consider the input value 0b11001010101101010101010101010011. The population count is 17, and the output takes the value 0b10000101011001010101010101010010. Notice that no 2-bit field takes the value 0b11 – we have 16 values of either zero, one or two: 2 + 0 + 1 + 1 + 1 + 2 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 0 + 2 = 17. It’s not necessary to have two separate constants: (value & 0xAAAAAAAA) >>> 1 is equivalent to (value >>> 1) & 0x55555555, which saves a register.

We now have a smaller problem: adding up the 16 2-bit counts. The mask 0x33333333 covers the even 2-bit fields, and the mask 0xCCCCCCCC covers the odd 2-bit fields. Shifting the odd fields right and adding them to the even ones leaves eight 4-bit fields, each holding the population count of the corresponding four input bits:


     value = (value & 0x55555555) + ((value >>> 1) & 0x55555555); 
     value = (value & 0x33333333) + ((value >>> 2) & 0x33333333); 

As before, the expression (value & 0xCCCCCCCC) >>> 2 has been replaced by (value >>> 2) & 0x33333333 to save a constant. Now there are eight 4-bit counts to combine into four bytes, then two shorts, and finally a single integer. The complete method ends up as follows:


  public static int populationCountWithMasks(int value) {
    value = (value & 0x55555555) + ((value >>> 1) & 0x55555555);
    value = (value & 0x33333333) + ((value >>> 2) & 0x33333333);
    value = (value & 0x0F0F0F0F) + ((value >>> 4) & 0x0F0F0F0F);
    value = (value & 0x00FF00FF) + ((value >>> 8) & 0x00FF00FF);
    value = (value & 0x0000FFFF) + ((value >>> 16) & 0x0000FFFF);
    return value;
  }

You can almost see it already, but if you write the hexadecimal constants above in binary you will realise that this is quite an elegant solution: the masks look like a tree:

01010101010101010101010101010101
00110011001100110011001100110011
00001111000011110000111100001111
00000000111111110000000011111111
00000000000000001111111111111111

This elegance comes at a small cost. There are various profitable transformations, the simplest of which is the elision of the redundant final mask. The others are more involved and are covered in depth in chapter 5 of Hacker’s Delight. The end result can be seen in the Integer class.


    @HotSpotIntrinsicCandidate
    public static int bitCount(int i) {
        // HD, Figure 5-2
        i = i - ((i >>> 1) & 0x55555555);
        i = (i & 0x33333333) + ((i >>> 2) & 0x33333333);
        i = (i + (i >>> 4)) & 0x0f0f0f0f;
        i = i + (i >>> 8);
        i = i + (i >>> 16);
        return i & 0x3f;
    }

The method above is intrinsified by C2 to the popcnt instruction, and this method is the only way to access that instruction from Java. If it’s not already obvious, the power of having this access can be shown with a comparative benchmark.
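The shape of the harness is the usual JMH pattern; a sketch of what the intrinsic benchmark might look like (with a hypothetical state class, not the exact benchmark code):


  @State(Scope.Benchmark)
  public static class PopCountState {
    int value = ThreadLocalRandom.current().nextInt(); // any input: popcnt is data-independent
  }

  @Benchmark
  public int intrinsic(PopCountState state) {
    return Integer.bitCount(state.value);
  }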

Benchmark      Mode   Threads  Samples  Score       Score Error (99.9%)  Unit
intrinsic      thrpt  1        10       341.572057  1.983535             ops/us
lookupTable    thrpt  1        10       205.373131  0.557472             ops/us
masks          thrpt  1        10       191.744272  1.942700             ops/us
naive          thrpt  1        10       26.651332   0.101285             ops/us
skipUnsetBits  thrpt  1        10       94.125249   0.559893             ops/us

Despite its power, since no vectorisation of this operation is possible prior to the AVX-512 VPOPCNTD/VPOPCNTQ extension (available virtually nowhere), loops containing popcnt can quickly become bottlenecks, so it is worth looking beneath the surface of the intrinsic. I’m sure that with explicit vectorisation the lookup approach could be powerful.

Floating Point: Manual Unrolling or Autovectorisation?

Java is very strict about floating point arithmetic. There’s even a keyword, strictfp, which makes it stricter still, ensuring you get a potentially less precise but identical result wherever you run your program; there’s actually a JEP to make this the only behaviour. JLS 15.18.2 states clearly that floating point addition is not associative in Java, which means that JIT compilers have to respect the order of operations on double[]s and float[]s, even if that order turns out to be arbitrary to the application. So they can’t vectorise or even pipeline loops containing interdependent floating point additions, can’t distribute multiplications over additions, can’t telescopically collapse multiplications: the list goes on. If you want such code to go faster, you must work around the JIT compiler somehow. In C++, it’s possible to choose (with -ffast-math) to treat floating point numbers as if they had the algebraic properties of the reals. The option is often maligned, perhaps because it assumes much more than associativity and applies to entire compilation units. A proposal for fastfp semantics at class and method scope was withdrawn a long time ago.
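The non-associativity is easy to demonstrate:


  double left = (0.1 + 0.2) + 0.3;   // 0.6000000000000001
  double right = 0.1 + (0.2 + 0.3);  // 0.6
  System.out.println(left == right); // false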

Prior to the arrival of the Vector API, I’m interested in which vectorising transformations are possible automatically. This is because I’ve found that many bottlenecks in applications are related to low single threaded performance, while others come from the premature use of threads to solve those performance problems. Imagine how powerful multithreading could be if it were only used after saturating the arithmetic capacity of each core.

There are certain things you just can’t do in Java because of the language specification, and one of them is getting C2 to pipeline two or more dependent vaddpd instructions, so the maximum achievable floating point intensity is quite low. In fact, you can be better off giving up on autovectorisation and unrolling the loop yourself, allowing scalar additions to be pipelined. Whether this pays off depends intimately on your microarchitecture.

Double Precision Sum Product

C2 can automatically vectorise a sum product over two double[]s. It does so by issuing eight vmovdqu loads at once, then four vmulpd multiplications at once, and then a long in-order scalar reduction. It does this for the simplest possible code, potentially rewarding simplicity with decent performance:


  @Benchmark
  public double vectorisedDoubleSumProduct() {
    double sp = 0D;
    for (int i = 0; i < xd.length && i < yd.length; ++i) {
      sp += xd[i] * yd[i];
    }
    return sp;
  }

Perfasm shows the problematic scalar reduction quite clearly:


....[Hottest Region 1]..............................................................................
c2, com.openkappa.simd.sumproduct.generated.SumProduct_vectorisedDoubleSumProduct_jmhTest::vectorisedDoubleSumProduct_thrpt_jmhStub, version 164 (238 bytes) 

           0x00000144f724f8e9: mov     r8d,r9d
           0x00000144f724f8ec: add     r8d,0fffffff1h
           0x00000144f724f8f0: cmp     r9d,r8d
           0x00000144f724f8f3: mov     edx,80000000h
           0x00000144f724f8f8: cmovl   r8d,edx
           0x00000144f724f8fc: cmp     r11d,r8d
           0x00000144f724f8ff: jnl     144f724f7dfh
           0x00000144f724f905: nop     word ptr [rax+rax+0h]  ;*iload_3 {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::vectorisedDoubleSumProduct@13 (line 43)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_vectorisedDoubleSumProduct_jmhTest::vectorisedDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  0.00%    0x00000144f724f910: vmovdqu ymm1,ymmword ptr [rsi+r11*8+70h]
  1.07%    0x00000144f724f917: vmovdqu ymm2,ymmword ptr [rax+r11*8+70h]
  2.49%    0x00000144f724f91e: vmovdqu ymm3,ymmword ptr [rsi+r11*8+50h]
  0.67%    0x00000144f724f925: vmovdqu ymm4,ymmword ptr [rax+r11*8+50h]
  0.75%    0x00000144f724f92c: vmovdqu ymm5,ymmword ptr [rsi+r11*8+30h]
  0.01%    0x00000144f724f933: vmovdqu ymm6,ymmword ptr [rax+r11*8+30h]
  1.52%    0x00000144f724f93a: vmovdqu ymm7,ymmword ptr [rsi+r11*8+10h]
                                                         ;*daload {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::vectorisedDoubleSumProduct@34 (line 44)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_vectorisedDoubleSumProduct_jmhTest::vectorisedDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  0.01%    0x00000144f724f941: vmovdqu ymm8,ymmword ptr [rax+r11*8+10h]
           0x00000144f724f948: vmulpd  ymm9,ymm2,ymm1
  0.02%    0x00000144f724f94c: vmulpd  ymm7,ymm8,ymm7
  1.50%    0x00000144f724f950: vmulpd  ymm8,ymm4,ymm3
  0.02%    0x00000144f724f954: vmulpd  ymm10,ymm6,ymm5
  0.00%    0x00000144f724f958: vaddsd  xmm0,xmm0,xmm7
  1.51%    0x00000144f724f95c: vpshufd xmm1,xmm7,0eh
  0.00%    0x00000144f724f961: vaddsd  xmm0,xmm0,xmm1
  5.91%    0x00000144f724f965: vextractf128 xmm4,ymm7,1h
  0.01%    0x00000144f724f96b: vaddsd  xmm0,xmm0,xmm4
  6.56%    0x00000144f724f96f: vpshufd xmm1,xmm4,0eh
  0.01%    0x00000144f724f974: vaddsd  xmm0,xmm0,xmm1
  5.75%    0x00000144f724f978: vaddsd  xmm0,xmm0,xmm10
  5.71%    0x00000144f724f97d: vpshufd xmm4,xmm10,0eh
  0.00%    0x00000144f724f983: vaddsd  xmm0,xmm0,xmm4
  5.89%    0x00000144f724f987: vextractf128 xmm6,ymm10,1h
  0.01%    0x00000144f724f98d: vaddsd  xmm0,xmm0,xmm6
  6.01%    0x00000144f724f991: vpshufd xmm4,xmm6,0eh
  0.01%    0x00000144f724f996: vaddsd  xmm0,xmm0,xmm4
  5.95%    0x00000144f724f99a: vaddsd  xmm0,xmm0,xmm8
  5.98%    0x00000144f724f99f: vpshufd xmm1,xmm8,0eh
  0.01%    0x00000144f724f9a5: vaddsd  xmm0,xmm0,xmm1
  5.99%    0x00000144f724f9a9: vextractf128 xmm5,ymm8,1h
  0.00%    0x00000144f724f9af: vaddsd  xmm0,xmm0,xmm5
  5.93%    0x00000144f724f9b3: vpshufd xmm1,xmm5,0eh
  0.00%    0x00000144f724f9b8: vaddsd  xmm0,xmm0,xmm1
  6.05%    0x00000144f724f9bc: vaddsd  xmm0,xmm0,xmm9
  5.92%    0x00000144f724f9c1: vpshufd xmm3,xmm9,0eh
  0.00%    0x00000144f724f9c7: vaddsd  xmm0,xmm0,xmm3
  6.05%    0x00000144f724f9cb: vextractf128 xmm2,ymm9,1h
  0.00%    0x00000144f724f9d1: vaddsd  xmm0,xmm0,xmm2
  6.05%    0x00000144f724f9d5: vpshufd xmm3,xmm2,0eh
  0.00%    0x00000144f724f9da: vaddsd  xmm0,xmm0,xmm3    ;*dadd {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::vectorisedDoubleSumProduct@36 (line 44)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_vectorisedDoubleSumProduct_jmhTest::vectorisedDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  6.05%    0x00000144f724f9de: add     r11d,10h          ;*iinc {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::vectorisedDoubleSumProduct@38 (line 43)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_vectorisedDoubleSumProduct_jmhTest::vectorisedDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  0.00%    0x00000144f724f9e2: cmp     r11d,r8d
           0x00000144f724f9e5: jl      144f724f910h      ;*if_icmpge {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::vectorisedDoubleSumProduct@10 (line 43)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_vectorisedDoubleSumProduct_jmhTest::vectorisedDoubleSumProduct_thrpt_jmhStub@17 (line 119)
           0x00000144f724f9eb: mov     edx,r9d
           0x00000144f724f9ee: add     edx,0fffffffdh
           0x00000144f724f9f1: cmp     r9d,edx
           0x00000144f724f9f4: mov     r9d,80000000h
           0x00000144f724f9fa: cmovl   edx,r9d
  0.00%    0x00000144f724f9fe: cmp     r11d,edx
           0x00000144f724fa01: jl      144f724f7a4h
           0x00000144f724fa07: jmp     144f724f7dfh
           0x00000144f724fa0c: vxorpd  xmm0,xmm0,xmm0
           0x00000144f724fa10: jmp     144f724f807h
           0x00000144f724fa15: mov     edx,0ffffff86h
           0x00000144f724fa1a: mov     qword ptr [rsp+70h],rcx
           0x00000144f724fa1f: push    qword ptr [rsp+80h]
           0x00000144f724fa27: pop     qword ptr [rsp+78h]
           0x00000144f724fa2c: push    qword ptr [rsp+30h]
           0x00000144f724fa31: pop     qword ptr [rsp+28h]
....................................................................................................
 99.44%  <total for region 1>

Unrolling this will disable autovectorisation, but my Skylake chip can keep eight scalar floating point additions in flight at once.


  @Benchmark
  public double unrolledDoubleSumProduct() {
    double sp1 = 0D;
    double sp2 = 0D;
    double sp3 = 0D;
    double sp4 = 0D;
    for (int i = 0; i < xd.length && i < yd.length; i += 4) {
      sp1 += xd[i] * yd[i];
      sp2 += xd[i + 1] * yd[i + 1];
      sp3 += xd[i + 2] * yd[i + 2];
      sp4 += xd[i + 3] * yd[i + 3];
    }
    return sp1 + sp2 + sp3 + sp4;
  }

Looking at the perfasm output, you can see this code is scalar, but the additions are interleaved without interdependencies. This is enough to take down a crippled target.


....[Hottest Region 1]..............................................................................
c2, com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub, version 162 (79 bytes) 

                                                         ;   {section_word}
           0x0000024719311ddb: lea     r9,[r12+rcx*8]
           0x0000024719311ddf: lea     rcx,[r12+rbx*8]
           0x0000024719311de3: cmp     r11d,4h
           0x0000024719311de7: jle     24719311ee1h
           0x0000024719311ded: mov     r8d,4h
           0x0000024719311df3: nop     word ptr [rax+rax+0h]
           0x0000024719311dfc: nop                       ;*iload {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@23 (line 55)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  4.20%    0x0000024719311e00: vmovsd  xmm0,qword ptr [rcx+r8*8+28h]
                                                         ;*daload {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@116 (line 59)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  8.93%    0x0000024719311e07: vmulsd  xmm0,xmm0,mmword ptr [r9+r8*8+28h]
 15.10%    0x0000024719311e0e: vmovsd  xmm1,qword ptr [rcx+r8*8+18h]
                                                         ;*daload {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@69 (line 57)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  3.95%    0x0000024719311e15: vmulsd  xmm1,xmm1,mmword ptr [r9+r8*8+18h]
  9.91%    0x0000024719311e1c: vmovsd  xmm2,qword ptr [rcx+r8*8+20h]
                                                         ;*daload {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@92 (line 58)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  3.97%    0x0000024719311e23: vmulsd  xmm2,xmm2,mmword ptr [r9+r8*8+20h]
 11.04%    0x0000024719311e2a: vmovsd  xmm3,qword ptr [rcx+r8*8+10h]
                                                         ;*daload {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@47 (line 56)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  3.86%    0x0000024719311e31: vmulsd  xmm3,xmm3,mmword ptr [r9+r8*8+10h]
  9.04%    0x0000024719311e38: vaddsd  xmm4,xmm4,xmm0    ;*dadd {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@118 (line 59)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  7.11%    0x0000024719311e3c: vaddsd  xmm7,xmm7,xmm3    ;*dadd {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@49 (line 56)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  7.03%    0x0000024719311e40: vaddsd  xmm5,xmm5,xmm2    ;*dadd {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@94 (line 58)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  7.29%    0x0000024719311e44: vaddsd  xmm6,xmm6,xmm1    ;*dadd {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@71 (line 57)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  3.58%    0x0000024719311e48: add     r8d,4h            ;*iinc {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@121 (line 55)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
  4.39%    0x0000024719311e4c: cmp     r8d,r11d
  0.00%    0x0000024719311e4f: jl      24719311e00h      ;*if_icmpge {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@20 (line 55)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
           0x0000024719311e51: cmp     r8d,edi
           0x0000024719311e54: jnl     24719311c92h
           0x0000024719311e5a: nop                       ;*iload {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@23 (line 55)
                                                         ; - com.openkappa.simd.sumproduct.generated.SumProduct_unrolledDoubleSumProduct_jmhTest::unrolledDoubleSumProduct_thrpt_jmhStub@17 (line 119)
           0x0000024719311e5c: cmp     r8d,r10d
           0x0000024719311e5f: jnl     24719311f15h      ;*if_icmpge {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.sumproduct.SumProduct::unrolledDoubleSumProduct@30 (line 55)
....................................................................................................
 99.39%  <total for region 1>

Benchmark                   Mode   Threads  Samples  Score        Score Error (99.9%)  Unit    Param: size
unrolledDoubleSumProduct    thrpt  1        10       1912.140438  48.445308            ops/ms  1024
unrolledDoubleSumProduct    thrpt  1        10       24.677122    0.510459             ops/ms  65536
vectorisedDoubleSumProduct  thrpt  1        10       647.848021   10.508824            ops/ms  1024
vectorisedDoubleSumProduct  thrpt  1        10       9.474097     0.479281             ops/ms  65536

So, if you realise that the order of your array is irrelevant to your application, you can write a tiny bit of extra code and get multiplicatively better performance. These results were produced with JDK 9. When I tried JDK 10, the sum product was not vectorised, presumably because the transformation was found to be unprofitable. This benchmark can be seen in full context at github.

Vertical Sum

I was motivated to write this post when Ioannis Tsakpinis shared a gist of a benchmark after reading a recent post about coaxing vectorisation into action for a simple floating point sum. That post was intended as a prelude to a post about the wonders of paginated arrays. With a paginated array, autovectorisation pays off and is preferable to a manual unroll. The specified order of operations is of course still violated, but I am working on the premise that this virtually never matters. I revisited the benchmark, with a paginated array this time.
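Here, paginated is just a double[][]. The setup isn’t shown; a sketch, assuming the size parameter from the results below and a hypothetical fixed page size of 1024:


  private double[][] paginated;

  @Setup(Level.Trial)
  public void setup() {
    paginated = new double[size / 1024][1024]; // assumes size is a multiple of the page size
    for (double[] page : paginated) {
      for (int i = 0; i < page.length; ++i) {
        page[i] = ThreadLocalRandom.current().nextDouble();
      }
    }
  }

The two reduction benchmarks below consume this structure.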


  @Benchmark // inspired by Ioannis' code
  public double reduceUnrolledPaginated() {
    double a0 = 0.0;
    double a1 = 0.0;
    double a2 = 0.0;
    double a3 = 0.0;
    for (int i = 0; i < paginated.length; ++i) {
      double[] page = paginated[i];
      for (int j = 0; j < paginated[0].length; j += 4) {
        a0 += page[j + 0];
        a1 += page[j + 1];
        a2 += page[j + 2];
        a3 += page[j + 3];
      }
    }
    return a0 + a1 + a2 + a3;
  }

  @Benchmark
  public double reducePaginated() {
    double[] buffer = Arrays.copyOf(paginated[0], paginated[0].length);
    for (int i = 1; i < paginated.length; ++i) {
      double[] page = paginated[i];
      for (int j = 0; j < page.length && j < buffer.length; ++j) {
        buffer[j] += page[j];
      }
    }
    return reduceUnrolled(buffer);
  }
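reduceUnrolled isn’t shown here; presumably it is the single-array analogue of reduceUnrolledPaginated, something like this sketch (assuming, as elsewhere in these benchmarks, a length divisible by four):


  private double reduceUnrolled(double[] data) {
    double a0 = 0.0;
    double a1 = 0.0;
    double a2 = 0.0;
    double a3 = 0.0;
    for (int i = 0; i < data.length; i += 4) {
      a0 += data[i];
      a1 += data[i + 1];
      a2 += data[i + 2];
      a3 += data[i + 3];
    }
    return a0 + a1 + a2 + a3;
  }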

A paginated array, requiring no offset calculations, is the perfect case for a vectorised loop, which is one reason why large Java arrays should be paginated in application code.

Benchmark                Mode   Threads  Samples  Score        Score Error (99.9%)  Unit    Param: size
reducePaginated          thrpt  1        10       597.046492   23.080803            ops/ms  1024
reducePaginated          thrpt  1        10       42.801021    0.831318             ops/ms  65536
reducePaginated          thrpt  1        10       1.503510     0.187167             ops/ms  1048576
reduceUnrolledPaginated  thrpt  1        10       1311.433592  9.063721             ops/ms  1024
reduceUnrolledPaginated  thrpt  1        10       19.448202    0.503753             ops/ms  65536
reduceUnrolledPaginated  thrpt  1        10       1.052183     0.086555             ops/ms  1048576

Nevertheless, loop unrolling can be a significant boon for floating point arithmetic in Java. It feels dirty to me – it’s one of those things that people did before my time. Compilers know how to do it, if they are allowed to do so. If there is no place for fastfp in Java, I imagine the practice is here to stay.

Iterating Over a Bitset in Java

How fast can you iterate over a bitset? Daniel Lemire recently published a benchmark in support of a strategy that uses the number of trailing zeroes to skip over the unset bits. I have used the same technique several times in my hobby project SplitMap, and this is something I am keen to optimise. I think the best strategy depends on what you want to do with the set bits, and on how sparse and uniformly distributed they are. I argue that the cost of iteration matters less than the constraints your API imposes on the caller, and whether the caller is free to exploit patterns in the data.

C2 Generates Good Code

If you think C++ is much faster than Java, you either don’t know much about Java or do lots of floating point arithmetic. This isn’t about benchmarking C++ against Java, but comparing the compilation outputs of a C++ implementation and a Java implementation shows there won’t be much difference once your Java method gets hot. Only the time to reach peak performance differs, and that is amortised over the lifetime of an application. The trailing zeroes implementation is probably the fastest technique in Java as well as in C++, but focusing on it ignores the optimisations you lose in the callback if you use it too literally.

Compiling this C++ function with GCC yields the snippet of assembly taken from the loop kernel:


template <typename CALLBACK>
static void for_each(const long* bitmap, const size_t size, const CALLBACK& callback) {
    for (size_t k = 0; k < size; ++k) {
        long bitset = bitmap[k];
        while (bitset != 0) {
            callback((k * 64) + __builtin_ctzl(bitset));
            bitset ^= (bitset & -bitset);
        }
    }
}

The tzcntl instruction calculates the index of the next set bit and blsr switches it off.


.L99:
  movq  %rdi, %rcx
  blsr  %ebx, %ebx
  call  _ZNSo3putEc
  movq  %rax, %rcx
  call  _ZNSo5flushEv
  testl  %ebx, %ebx
  je  .L96
.L100:
  xorl  %edx, %edx
  movq  %r12, %rcx
  tzcntl  %ebx, %edx
  addl  %ebp, %edx
  call  _ZNSolsEi
  movq  %rax, %rdi
  movq  (%rax), %rax
  movq  -24(%rax), %rax
  movq  240(%rdi,%rax), %rsi
  testq  %rsi, %rsi
  je  .L108
  cmpb  $0, 56(%rsi)
  jne  .L109
  movq  %rsi, %rcx
  call  _ZNKSt5ctypeIcE13_M_widen_initEv
  movq  (%rsi), %rax
  movl  $10, %edx
  movq  48(%rax), %rax
  cmpq  %r14, %rax
  je  .L99
  movq  %rsi, %rcx
  call  *%rax
  movsbl  %al, %edx
  jmp  .L99
  .p2align 4,,10

In Java, almost identical code is generated.


  public void forEach(long[] bitmap, IntConsumer consumer) {
    for (int i = 0; i < bitmap.length; ++i) {
      long word = bitmap[i];
      while (word != 0) {
        consumer.accept(Long.SIZE * i + Long.numberOfTrailingZeros(word));
        word ^= Long.lowestOneBit(word);
      }
    }
  }

The key difference is that the xor and blsi haven’t been fused into blsr, so the C++ code is probably slightly faster. A lambda accumulating the contents of an array has been inlined into this loop: the add below comes from the inlined lambda, and notice how little time is spent adding compared to computing the next bit to switch off in this sample produced by perfasm.


  0.83%    0x000002d79d366a19: tzcnt   r9,rcx
  8.53%    0x000002d79d366a1e: add     r9d,ebx
  0.42%    0x000002d79d366a21: cmp     r9d,r8d
  0.00%    0x000002d79d366a24: jnb     2d79d366a4dh
  0.62%    0x000002d79d366a26: add     r10d,dword ptr [rdi+r9*4+10h]
 16.22%    0x000002d79d366a2b: vmovq   r11,xmm4
  6.68%    0x000002d79d366a30: mov     dword ptr [r11+10h],r10d
 27.92%    0x000002d79d366a34: blsi    r10,rcx
  0.55%    0x000002d79d366a39: xor     rcx,r10         
  0.10%    0x000002d79d366a3c: mov     r11,qword ptr [r15+70h]  

It’s this Java code, and its impact on which optimisations can be applied to the IntConsumer, that this post focuses on. Different principles apply in C++, particularly around inlining and vectorisation opportunities, but this blog is about Java. Depending on what your callback does, you get different benchmark results, and you should make different choices about how to do the iteration: you just can’t assess this in isolation.

Special Casing -1

Imagine you have an int[] containing data, and you are iterating over a mask or materialised predicate over that data. For each set bit, you want to add the corresponding entry in the array to a sum. In Java, that looks like this (you’ve already seen the generated assembly above):


  @Benchmark
  public int reduce() {
    int[] result = new int[1];
    forEach(bitmap, i -> result[0] += data[i]);
    return result[0];
  }

How fast can this get? It obviously depends on how full the bitset is. The worst case would be that it’s completely full, and it couldn’t get much better than if only one bit per word were set. The difference is noticeable, but scales by a factor less than the number of bits:

Benchmark  Mode   Threads  Samples  Score       Score Error (99.9%)  Unit    Param: scenario
reduce     thrpt  1        10       7.435909    0.017491             ops/ms  FULL
reduce     thrpt  1        10       260.305307  6.081961             ops/ms  ONE_BIT_PER_WORD

But the important code here, the callback itself, is stuck at entry level compilation. There is no unrolling and no vectorisation, and the adds can’t be pipelined because of the data dependency on blsi and xor. We can do much better in some cases, and not much worse in others, just by treating -1 as a special case, profiting from optimisations that can then be applied inside the callback. Passing a different callback which consumes whole words costs a branch, but it’s often worth it. Here’s the iterator now:


  interface WordConsumer {
    void acceptWord(int wordIndex, long word);
  }

  public void forEach(long[] bitmap, IntConsumer intConsumer, WordConsumer wordConsumer) {
    for (int i = 0; i < bitmap.length; ++i) {
      long word = bitmap[i];
      if (word == -1L) {
        wordConsumer.acceptWord(i, word);
      } else {
        while (word != 0) {
          intConsumer.accept(Long.SIZE * i + Long.numberOfTrailingZeros(word));
          word ^= Long.lowestOneBit(word);
        }
      }
    }
  }

  @Benchmark
  public int reduceWithWordConsumer() {
    int[] result = new int[1];
    forEach(bitmap, i -> result[0] += data[i], (index, word) -> {
      if (word != -1L) {
        throw new IllegalStateException();
      }
      int sum = 0;
      for (int i = index * Long.SIZE; i < (index + 1) * Long.SIZE; ++i) {
        sum += data[i];
      }
      result[0] += sum;
    });
    return result[0];
  }

This really pays off when the bitset is full – that case improves 6x – but having the extra branch does seem to cost something in the sparse case, even though it is never taken.

Benchmark               Mode   Threads  Samples  Score       Score Error (99.9%)  Unit    Param: scenario
reduce                  thrpt  1        10       7.401202    0.118648             ops/ms  FULL
reduce                  thrpt  1        10       261.682016  4.155856             ops/ms  ONE_BIT_PER_WORD
reduceWithWordConsumer  thrpt  1        10       43.972759   0.993264             ops/ms  FULL
reduceWithWordConsumer  thrpt  1        10       222.824868  4.877147             ops/ms  ONE_BIT_PER_WORD

We still don’t actually know the cost of the branch when it’s taken every now and then. To estimate it, we need a new scenario (or new scenarios) which mix full and sparse words. As you might expect, having the WordConsumer is great when one word in every few is full: the fast path is so much faster, it practically skips the word.
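The construction of these scenarios isn’t shown; a sketch of what SPARSE_16_FULL_WORDS might look like (a hypothetical helper, not the benchmark’s own setup):


  private static long[] sparseWithFullWords(int numWords, int numFullWords) {
    long[] bitmap = new long[numWords];
    for (int i = 0; i < numWords; ++i) {
      bitmap[i] = 1L; // one bit per word: sparse
    }
    int stride = numWords / numFullWords;
    for (int i = 0; i < numFullWords; ++i) {
      bitmap[i * stride] = -1L; // scatter full words evenly through the bitmap
    }
    return bitmap;
  }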

Benchmark               Mode   Threads  Samples  Score       Score Error (99.9%)  Unit    Param: scenario
reduce                  thrpt  1        10       157.358633  4.538679             ops/ms  SPARSE_16_FULL_WORDS
reduceWithWordConsumer  thrpt  1        10       257.041035  7.446404             ops/ms  SPARSE_16_FULL_WORDS

So in this scenario the branch has paid for itself. How? The data dependency has been replaced by a countable loop. Here’s the perfasm output. Notice two things: long runs of add instructions, and the vastly reduced percentage against blsi. The time is now spent adding numbers up, not switching off least significant bits. This feels like progress.


  0.05%    0x000001dd5b35af03: add     ebx,dword ptr [rdi+r9*4+10h]
  0.31%    0x000001dd5b35af08: add     ebx,dword ptr [rdi+r11*4+14h]
  0.32%    0x000001dd5b35af0d: add     ebx,dword ptr [rdi+r11*4+18h]
  0.33%    0x000001dd5b35af12: add     ebx,dword ptr [rdi+r11*4+1ch]
  0.37%    0x000001dd5b35af17: add     ebx,dword ptr [rdi+r11*4+20h]
  0.34%    0x000001dd5b35af1c: add     ebx,dword ptr [rdi+r11*4+24h]
  0.39%    0x000001dd5b35af21: add     ebx,dword ptr [rdi+r11*4+28h]
  0.36%    0x000001dd5b35af26: add     ebx,dword ptr [rdi+r11*4+2ch]
  0.34%    0x000001dd5b35af2b: add     ebx,dword ptr [rdi+r11*4+30h]
  0.35%    0x000001dd5b35af30: add     ebx,dword ptr [rdi+r11*4+34h]
  0.38%    0x000001dd5b35af35: add     ebx,dword ptr [rdi+r11*4+38h]
  0.36%    0x000001dd5b35af3a: add     ebx,dword ptr [rdi+r11*4+3ch]
  0.49%    0x000001dd5b35af3f: add     ebx,dword ptr [rdi+r11*4+40h]
  0.39%    0x000001dd5b35af44: add     ebx,dword ptr [rdi+r11*4+44h]
  0.42%    0x000001dd5b35af49: add     ebx,dword ptr [rdi+r11*4+48h]
  0.39%    0x000001dd5b35af4e: add     ebx,dword ptr [rdi+r11*4+4ch]
...
  2.39%    0x000001dd5b35afe9: tzcnt   r11,rbx
  2.65%    0x000001dd5b35afee: add     r11d,r10d         
  2.15%    0x000001dd5b35aff1: cmp     r11d,r9d
  0.00%    0x000001dd5b35aff4: jnb     1dd5b35b04dh
  2.29%    0x000001dd5b35aff6: add     r8d,dword ptr [rdi+r11*4+10h]
 11.03%    0x000001dd5b35affb: vmovq   r11,xmm0
  2.45%    0x000001dd5b35b000: mov     dword ptr [r11+10h],r8d  
  3.14%    0x000001dd5b35b004: mov     r11,qword ptr [r15+70h]
  2.18%    0x000001dd5b35b008: blsi    r8,rbx
  2.23%    0x000001dd5b35b00d: xor     rbx,r8

Heroically ploughing through the full words tells a different story: blsi is up at 11%, indicating that more time is spent iterating than evaluating the callback.


  6.98%    0x0000019f106c6799: tzcnt   r9,rdi
  3.47%    0x0000019f106c679e: add     r9d,ebx           
  1.65%    0x0000019f106c67a1: cmp     r9d,r10d
           0x0000019f106c67a4: jnb     19f106c67cdh
  1.67%    0x0000019f106c67a6: add     r11d,dword ptr [r8+r9*4+10h]
 11.45%    0x0000019f106c67ab: vmovq   r9,xmm2
  3.20%    0x0000019f106c67b0: mov     dword ptr [r9+10h],r11d  
 11.31%    0x0000019f106c67b4: blsi    r11,rdi
  1.71%    0x0000019f106c67b9: xor     rdi,r11           

This shows the cost of a data dependency in a loop. The operation we want to perform is associative, so we could even vectorise it. In C++ that might happen automatically, or could be ensured with intrinsics, but C2 has various heuristics: it won’t try to vectorise a simple reduction at all, and a batch of 64 elements would probably be on the short side for the cases it would try to vectorise.

Acknowledging Runs

You might be tempted to transfer even more control to the callback, by accumulating runs and then calling the callback once per run. It simplifies the code to exclude incomplete start and end words from the run.


  private interface RunConsumer {
    void acceptRun(int start, int end);
  }

  public void forEach(long[] bitmap, IntConsumer intConsumer, RunConsumer runConsumer) {
    int runStart = -1;
    for (int i = 0; i < bitmap.length; ++i) {
      long word = bitmap[i];
      if (word == -1L) {
        if (runStart == -1) {
          runStart = i;
        }
      } else {
        if (runStart != -1) {
          runConsumer.acceptRun(runStart * Long.SIZE, i * Long.SIZE);
          runStart = -1;
        }
        while (word != 0) {
          intConsumer.accept(Long.SIZE * i + Long.numberOfTrailingZeros(word));
          word ^= Long.lowestOneBit(word);
        }
      }
    }
    if (runStart != -1) {
      runConsumer.acceptRun(runStart * Long.SIZE, bitmap.length * Long.SIZE);
    }
  }

For a simple reduction, the extra complexity isn’t justified: you’re better off with the WordConsumer.

Benchmark               Mode   Threads  Samples  Score       Score Error (99.9%)  Unit    Param: scenario
reduce                  thrpt  1        10       160.502749  2.960568             ops/ms  SPARSE_16_FULL_WORDS
reduce                  thrpt  1        10       7.294747    0.186678             ops/ms  FULL
reduce                  thrpt  1        10       258.064511  8.902233             ops/ms  ONE_BIT_PER_WORD
reduce                  thrpt  1        10       159.613877  3.424432             ops/ms  SPARSE_1_16_WORD_RUN
reduceWithRunConsumer   thrpt  1        10       251.683131  6.799639             ops/ms  SPARSE_16_FULL_WORDS
reduceWithRunConsumer   thrpt  1        10       37.809154   0.723198             ops/ms  FULL
reduceWithRunConsumer   thrpt  1        10       218.133560  13.756779            ops/ms  ONE_BIT_PER_WORD
reduceWithRunConsumer   thrpt  1        10       140.896826  8.495777             ops/ms  SPARSE_1_16_WORD_RUN
reduceWithWordConsumer  thrpt  1        10       257.961783  5.892072             ops/ms  SPARSE_16_FULL_WORDS
reduceWithWordConsumer  thrpt  1        10       43.909471   0.601319             ops/ms  FULL
reduceWithWordConsumer  thrpt  1        10       213.731758  20.398077            ops/ms  ONE_BIT_PER_WORD
reduceWithWordConsumer  thrpt  1        10       258.280428  11.316647            ops/ms  SPARSE_1_16_WORD_RUN

It would be simplistic to measure this and conclude that the approach is bad, though. There are several other dimensions to this problem:

  1. Vectorised callbacks
  2. Inlining failures preventing optimisations
  3. The number of runs and their lengths (i.e. your data and how you structure it)

Vectorisable Callbacks

There are real benefits to batching up callbacks if the workload in the callback can be vectorised. The code doesn’t need to get much more complicated to start benefitting from larger iteration batches. Mapping each bit to a scaled and squared value from the data array and storing it into an output array illustrates this.


  @Benchmark
  public void map(Blackhole bh) {
    forEach(bitmap, i -> output[i] = data[i] * data[i] * factor);
    bh.consume(output);
  }

  @Benchmark
  public void mapWithWordConsumer(Blackhole bh) {
    forEach(bitmap, i -> output[i] = data[i] * data[i] * factor, (WordConsumer) (index, word) -> {
      if (word != -1L) {
        throw new IllegalStateException();
      }
      for (int i = index * Long.SIZE; i < (index + 1) * Long.SIZE; ++i) {
        output[i] = data[i] * data[i] * factor;
      }
    });
    bh.consume(output);
  }

  @Benchmark
  public void mapWithRunConsumer(Blackhole bh) {
    forEach(bitmap, i -> output[i] = data[i] * data[i] * factor, (RunConsumer) (start, end) -> {
      for (int i = start; i < end; ++i) {
        output[i] = data[i] * data[i] * factor;
      }
    });
    bh.consume(output);
  }

The RunConsumer does much better in the full case, never much worse than the WordConsumer and always better than the basic strategy – even when there is only one run in the entire bitset, or when there are a few full words in an otherwise sparse bitset.

Benchmark            Mode   Threads  Samples  Score       Score Error (99.9%)  Unit    Param: scenario
map                  thrpt  1        10       127.876662  3.411741             ops/ms  SPARSE_16_FULL_WORDS
map                  thrpt  1        10       10.598974   0.022404             ops/ms  FULL
map                  thrpt  1        10       126.434666  18.608547            ops/ms  ONE_BIT_PER_WORD
map                  thrpt  1        10       115.977840  20.449258            ops/ms  SPARSE_1_16_WORD_RUN
mapWithRunConsumer   thrpt  1        10       199.186167  8.138446             ops/ms  SPARSE_16_FULL_WORDS
mapWithRunConsumer   thrpt  1        10       64.230868   2.871434             ops/ms  FULL
mapWithRunConsumer   thrpt  1        10       219.963063  4.257561             ops/ms  ONE_BIT_PER_WORD
mapWithRunConsumer   thrpt  1        10       203.403804  6.907366             ops/ms  SPARSE_1_16_WORD_RUN
mapWithWordConsumer  thrpt  1        10       229.822235  5.276084             ops/ms  SPARSE_16_FULL_WORDS
mapWithWordConsumer  thrpt  1        10       48.381990   3.845642             ops/ms  FULL
mapWithWordConsumer  thrpt  1        10       218.907803  5.331011             ops/ms  ONE_BIT_PER_WORD
mapWithWordConsumer  thrpt  1        10       240.795280  10.204818            ops/ms  SPARSE_1_16_WORD_RUN

This is simply because the callback was vectorised, and the style of the RunConsumer API allows that to be exploited. You can see it with perfasm. Both the WordConsumer and RunConsumer callbacks are actually vectorised, but notice that there are two hot regions in the WordConsumer benchmark, the iteration and the callback, and the boundary between them is crossed often. The RunConsumer implementation, on the other hand, spends most of its time in the callback.

WordConsumer


....[Hottest Region 1]..............................................................................
c2, com.openkappa.simd.iterate.generated.BitSetIterator_mapWithWordConsumer_jmhTest::mapWithWordConsumer_thrpt_jmhStub, version 172 (227 bytes) 
...
  1.55%    0x000001c2aa13c790: vmovdqu ymm1,ymmword ptr [r9+r10*4+10h]
  0.15%    0x000001c2aa13c797: vpmulld ymm1,ymm1,ymm1
  3.72%    0x000001c2aa13c79c: vpmulld ymm1,ymm1,ymm2
 16.02%    0x000001c2aa13c7a1: vmovdqu ymmword ptr [rdx+r10*4+10h],ymm1
  1.69%    0x000001c2aa13c7a8: movsxd  r8,r10d
  1.55%    0x000001c2aa13c7ab: vmovdqu ymm1,ymmword ptr [r9+r8*4+30h]
  1.46%    0x000001c2aa13c7b2: vpmulld ymm1,ymm1,ymm1
  1.71%    0x000001c2aa13c7b7: vpmulld ymm1,ymm1,ymm2
  3.20%    0x000001c2aa13c7bc: vmovdqu ymmword ptr [rdx+r8*4+30h],ymm1
  0.07%    0x000001c2aa13c7c3: add     r10d,10h          
  1.70%    0x000001c2aa13c7c7: cmp     r10d,r11d
           0x000001c2aa13c7ca: jl      1c2aa13c790h      
  0.02%    0x000001c2aa13c7cc: mov     r8,qword ptr [r15+70h]  
  1.50%    0x000001c2aa13c7d0: test    dword ptr [r8],eax  
  0.04%    0x000001c2aa13c7d3: cmp     r10d,r11d
           0x000001c2aa13c7d6: jl      1c2aa13c78ah
  0.05%    0x000001c2aa13c7d8: mov     r11d,dword ptr [rsp+5ch]
  0.02%    0x000001c2aa13c7dd: add     r11d,39h
  1.57%    0x000001c2aa13c7e1: mov     r8d,ecx
  0.02%    0x000001c2aa13c7e4: cmp     r8d,r11d
  0.06%    0x000001c2aa13c7e7: mov     ecx,80000000h
  0.02%    0x000001c2aa13c7ec: cmovl   r11d,ecx
  1.50%    0x000001c2aa13c7f0: cmp     r10d,r11d
           0x000001c2aa13c7f3: jnl     1c2aa13c819h
  0.02%    0x000001c2aa13c7f5: nop                       
  0.06%    0x000001c2aa13c7f8: vmovdqu ymm1,ymmword ptr [r9+r10*4+10h]
  0.21%    0x000001c2aa13c7ff: vpmulld ymm1,ymm1,ymm1
  2.16%    0x000001c2aa13c804: vpmulld ymm1,ymm1,ymm2
  1.80%    0x000001c2aa13c809: vmovdqu ymmword ptr [rdx+r10*4+10h],ymm1
...
 53.26%  <total for region 1>

RunConsumer


....[Hottest Region 1]..............................................................................
c2, com.openkappa.simd.iterate.BitSetIterator$$Lambda$44.1209658195::acceptRun, version 166 (816 bytes) 
...
  0.92%    0x0000016658954860: vmovdqu ymm0,ymmword ptr [rdx+r8*4+10h]
  1.31%    0x0000016658954867: vpmulld ymm0,ymm0,ymm0
  1.74%    0x000001665895486c: vpmulld ymm0,ymm0,ymm1
  4.55%    0x0000016658954871: vmovdqu ymmword ptr [rdi+r8*4+10h],ymm0
  0.69%    0x0000016658954878: movsxd  rcx,r8d
  0.01%    0x000001665895487b: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+30h]
  0.41%    0x0000016658954881: vpmulld ymm0,ymm0,ymm0
  0.78%    0x0000016658954886: vpmulld ymm0,ymm0,ymm1
  0.83%    0x000001665895488b: vmovdqu ymmword ptr [rdi+rcx*4+30h],ymm0
  0.25%    0x0000016658954891: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+50h]
  1.29%    0x0000016658954897: vpmulld ymm0,ymm0,ymm0
  1.51%    0x000001665895489c: vpmulld ymm0,ymm0,ymm1
  3.65%    0x00000166589548a1: vmovdqu ymmword ptr [rdi+rcx*4+50h],ymm0
  0.54%    0x00000166589548a7: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+70h]
  0.31%    0x00000166589548ad: vpmulld ymm0,ymm0,ymm0
  0.47%    0x00000166589548b2: vpmulld ymm0,ymm0,ymm1
  1.11%    0x00000166589548b7: vmovdqu ymmword ptr [rdi+rcx*4+70h],ymm0
  0.28%    0x00000166589548bd: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+90h]
  1.17%    0x00000166589548c6: vpmulld ymm0,ymm0,ymm0
  1.89%    0x00000166589548cb: vpmulld ymm0,ymm0,ymm1
  3.56%    0x00000166589548d0: vmovdqu ymmword ptr [rdi+rcx*4+90h],ymm0
  0.73%    0x00000166589548d9: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+0b0h]
  0.21%    0x00000166589548e2: vpmulld ymm0,ymm0,ymm0
  0.34%    0x00000166589548e7: vpmulld ymm0,ymm0,ymm1
  1.29%    0x00000166589548ec: vmovdqu ymmword ptr [rdi+rcx*4+0b0h],ymm0
  0.33%    0x00000166589548f5: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+0d0h]
  0.97%    0x00000166589548fe: vpmulld ymm0,ymm0,ymm0
  1.90%    0x0000016658954903: vpmulld ymm0,ymm0,ymm1
  3.59%    0x0000016658954908: vmovdqu ymmword ptr [rdi+rcx*4+0d0h],ymm0
  0.82%    0x0000016658954911: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+0f0h]
  0.18%    0x000001665895491a: vpmulld ymm0,ymm0,ymm0
  0.29%    0x000001665895491f: vpmulld ymm0,ymm0,ymm1
  1.25%    0x0000016658954924: vmovdqu ymmword ptr [rdi+rcx*4+0f0h],ymm0
  0.33%    0x000001665895492d: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+110h]
  1.10%    0x0000016658954936: vpmulld ymm0,ymm0,ymm0
  2.11%    0x000001665895493b: vpmulld ymm0,ymm0,ymm1
  3.67%    0x0000016658954940: vmovdqu ymmword ptr [rdi+rcx*4+110h],ymm0
  0.93%    0x0000016658954949: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+130h]
  0.13%    0x0000016658954952: vpmulld ymm0,ymm0,ymm0
  0.25%    0x0000016658954957: vpmulld ymm0,ymm0,ymm1
  1.35%    0x000001665895495c: vmovdqu ymmword ptr [rdi+rcx*4+130h],ymm0
  0.32%    0x0000016658954965: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+150h]
  0.93%    0x000001665895496e: vpmulld ymm0,ymm0,ymm0
  2.16%    0x0000016658954973: vpmulld ymm0,ymm0,ymm1
  3.73%    0x0000016658954978: vmovdqu ymmword ptr [rdi+rcx*4+150h],ymm0
  0.95%    0x0000016658954981: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+170h]
  0.14%    0x000001665895498a: vpmulld ymm0,ymm0,ymm0
  0.21%    0x000001665895498f: vpmulld ymm0,ymm0,ymm1
  1.39%    0x0000016658954994: vmovdqu ymmword ptr [rdi+rcx*4+170h],ymm0
  0.29%    0x000001665895499d: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+190h]
  1.42%    0x00000166589549a6: vpmulld ymm0,ymm0,ymm0
  2.61%    0x00000166589549ab: vpmulld ymm0,ymm0,ymm1
  4.42%    0x00000166589549b0: vmovdqu ymmword ptr [rdi+rcx*4+190h],ymm0
  1.01%    0x00000166589549b9: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+1b0h]
  0.10%    0x00000166589549c2: vpmulld ymm0,ymm0,ymm0
  0.17%    0x00000166589549c7: vpmulld ymm0,ymm0,ymm1
  1.46%    0x00000166589549cc: vmovdqu ymmword ptr [rdi+rcx*4+1b0h],ymm0
  0.27%    0x00000166589549d5: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+1d0h]
 13.60%    0x00000166589549de: vpmulld ymm0,ymm0,ymm0
  3.51%    0x00000166589549e3: vpmulld ymm0,ymm0,ymm1
  4.69%    0x00000166589549e8: vmovdqu ymmword ptr [rdi+rcx*4+1d0h],ymm0
  1.00%    0x00000166589549f1: vmovdqu ymm0,ymmword ptr [rdx+rcx*4+1f0h]
  0.11%    0x00000166589549fa: vpmulld ymm0,ymm0,ymm0
  0.15%    0x00000166589549ff: vpmulld ymm0,ymm0,ymm1
  1.46%    0x0000016658954a04: vmovdqu ymmword ptr [rdi+rcx*4+1f0h],ymm0
                                                         
  0.26%    0x0000016658954a0d: add     r8d,80h           
  0.01%    0x0000016658954a14: cmp     r8d,r10d
           0x0000016658954a17: jl      16658954860h      
  0.00%    0x0000016658954a1d: mov     r14,qword ptr [r15+70h]  
  0.06%    0x0000016658954a21: test    dword ptr [r14],eax  
  0.17%    0x0000016658954a24: cmp     r8d,r10d
           0x0000016658954a27: jl      16658954860h
           0x0000016658954a2d: mov     r10d,r9d
           0x0000016658954a30: add     r10d,0fffffff9h
           0x0000016658954a34: cmp     r9d,r10d
  0.00%    0x0000016658954a37: cmovl   r10d,ebx
           0x0000016658954a3b: cmp     r8d,r10d
           0x0000016658954a3e: jnl     16658954a61h      
           0x0000016658954a40: vmovdqu ymm0,ymmword ptr [rdx+r8*4+10h]
  0.14%    0x0000016658954a47: vpmulld ymm0,ymm0,ymm0
  0.05%    0x0000016658954a4c: vpmulld ymm0,ymm0,ymm1
  0.03%    0x0000016658954a51: vmovdqu ymmword ptr [rdi+r8*4+10h],ymm0
...
 96.10%  <total for region 1>

Inlining

So far, everything has been inlined. Java optimistically assumes you have only one implementation and aggressively inlines it at first, deoptimising to add a branch when it sees a second implementation, and deoptimising again to a virtual call when it sees a third. This doesn’t usually matter much, but here the cost not only dwarfs any savings in an iteration strategy; it also prevents various optimisations which could be applied if the code were inlined. Once again, passing a batch of work into the callback ameliorates this, because even if the call is virtual, the callback itself might be hot and aggressively optimised. I haven’t benchmarked this because I think the point is self-evident to anyone who has read this far.
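To illustrate with hypothetical callsites (none of these lambdas, or the arrays they capture, are from the benchmarks): each distinct lambda is a distinct class, so the callsite inside forEach degrades as it sees more of them.


  forEach(bitmap, i -> sum[0] += data[i]);     // one implementation seen: inlined
  forEach(bitmap, i -> output[i] = data[i]);   // two: a branch is added
  forEach(bitmap, i -> histogram[data[i]]++);  // three: the callsite becomes a virtual call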

Number of Runs

It’s clear from the benchmark results that the best choice of iteration strategy is sensitive to what you want to do with the data, and also to how it is arranged. It is well documented in the database literature that real data sets tend to contain runs. If you are building a bitmap index on some attribute of your data, and you sort your data by that attribute, you will have as many bitmaps as you have attribute values, and each attribute value’s bitmap will contain a single run, as in the sketch below. This is almost true for any index on attributes correlated with the sort attribute, and completely untrue for uncorrelated attributes. There is a range of iteration strategies to choose from, and the best strategy for one index may not be the best for another.
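For example, here is a sketch (hypothetical code, not from the benchmarks) of building one bitmap per attribute value over a column already sorted by that attribute; equal values occupy consecutive rows, so each bitmap’s set bits form a single run:


  private static Map<Integer, long[]> buildIndex(int[] sortedAttributes) {
    int words = (sortedAttributes.length + 63) / 64;
    Map<Integer, long[]> index = new HashMap<>();
    for (int i = 0; i < sortedAttributes.length; ++i) {
      long[] bitmap = index.computeIfAbsent(sortedAttributes[i], k -> new long[words]);
      bitmap[i / 64] |= 1L << (i & 63); // consecutive rows share a value: one run per bitmap
    }
    return index;
  }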

My benchmarks are available at github.

Faster Floating Point Reductions

At the moment I am working on a hobby project called SplitMap, which aims to evaluate aggregations over complex boolean expressions as fast as possible, using the same high level constructs as the streams API. It’s already capable of performing in under 300μs logic that takes vanilla parallel streams 20ms, but I think sub-100μs is possible for these calculations. I have reached the stage where the bottleneck is floating point reductions: Java won’t vectorise these because the reordered result could differ. This is a bit limiting, because it often doesn’t matter very much: nobody represents money with floating point numbers, and if you’re solving stiff differential equations it won’t be numerical stability that stops you from using a more suitable language. The reality is that somebody somewhere probably really cares about this, and that’s probably why there are no fastfp semantics in the language. That ancient proposal lay stagnant before being withdrawn, and several optimisations just can’t be implemented by the JIT compiler without violating language guarantees. An intrinsic for FMA only arrived in Java 9, 15 years after JSR 84 was withdrawn, and vectorised FMA is only available in JDK10, 18 years after JSR 84 was proposed. Computing an average is hardly numerical computing, but Java just isn’t a friendly language for this sort of thing.
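The FMA intrinsic in question is Math.fma, added in Java 9. For instance, the earlier sum product could be written with one rounding per step; a sketch (note that on hardware without FMA support, Math.fma falls back to a slow, correctly rounded software implementation):


  private double sumProductFma(double[] x, double[] y) {
    double sum = 0D;
    for (int i = 0; i < x.length && i < y.length; ++i) {
      sum = Math.fma(x[i], y[i], sum); // computes x[i] * y[i] + sum with a single rounding
    }
    return sum;
  }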

Back to SplitMap. At this point, I’ve already maxed out the parallelism I can get from the fork join pool, so I want the code below to vectorise to squeeze out more performance:

  private double reduce(double[] data) {
    double reduced = 0D;
    for (int i = 0; i < data.length; ++i) {
      reduced += data[i];
    }
    return reduced;
  }

Looking at this with perfasm, it’s clear that unrolled scalar code is generated:


  0.00%    0x000001db0c6f5730: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+10h]
  6.17%    0x000001db0c6f5736: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+18h]
  6.15%    0x000001db0c6f573c: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+20h]
  6.23%    0x000001db0c6f5742: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+28h]
  6.16%    0x000001db0c6f5748: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+30h]
  6.37%    0x000001db0c6f574e: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+38h]
  6.22%    0x000001db0c6f5754: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+40h]
  6.21%    0x000001db0c6f575a: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+48h]
  6.11%    0x000001db0c6f5760: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+50h]
  6.18%    0x000001db0c6f5766: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+58h]
  6.18%    0x000001db0c6f576c: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+60h]
  6.30%    0x000001db0c6f5772: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+68h]
  6.23%    0x000001db0c6f5778: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+70h]
  6.33%    0x000001db0c6f577e: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+78h]
  6.25%    0x000001db0c6f5784: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+80h]
  6.31%    0x000001db0c6f578d: vaddsd  xmm0,xmm0,mmword ptr [rdx+rdi*8+88h]

Let’s do something really dumb – allocate an array! Then I’ll reduce vertically onto it, and do a small horizontal reduction at the end.


  @Benchmark
  public double reduceBuffered() {
    double[] buffer = new double[1024];
    for (int i = 0; i < data.length; ++i) {
      buffer[i & 1023] += data[i];
    }
    return reduce(buffer);
  }

I benchmarked this against the simple reduction (reduceSimple below). Using size 1024 as a sanity check, it’s clear the work is just being done twice, which is reassuring. Once the array gets a bit bigger, the gains of (what I think should be) the faster vertical reduction prior to the horizontal reduction pay for the array allocation.

Benchmark       Mode   Threads  Samples  Score       Score Error (99.9%)  Unit    Param: size
reduceBuffered  thrpt  1        10       333.990079  3.542656             ops/ms  1024
reduceBuffered  thrpt  1        10       18.639314   0.488300             ops/ms  65536
reduceBuffered  thrpt  1        10       9.313916    0.343261             ops/ms  131072
reduceSimple    thrpt  1        10       656.408971  1.771530             ops/ms  1024
reduceSimple    thrpt  1        10       9.840417    0.032713             ops/ms  65536
reduceSimple    thrpt  1        10       4.881353    0.076707             ops/ms  131072

The code in reduceBuffered produces a slightly different result because the elements are summed in a different order, though you’re hardly likely to notice. While, by any pragmatic definition, the function performs the same operation, it is semantically different. C2 actually doesn’t vectorise this code, and I have no idea why it’s so much faster! I won’t dwell on this because it’s a dead end. In any case, here’s the perfasm output:


  0.20%    0x000001eaf2e49f50: vmovsd  xmm0,qword ptr [rdx+r9*8+10h]
  0.27%    0x000001eaf2e49f57: mov     ebx,r9d
  1.95%    0x000001eaf2e49f5a: add     ebx,0fh
  0.36%    0x000001eaf2e49f5d: and     ebx,3ffh
  0.21%    0x000001eaf2e49f63: mov     ecx,r9d
  0.30%    0x000001eaf2e49f66: and     ecx,3ffh          ;*iand {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@22 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  2.96%    0x000001eaf2e49f6c: vaddsd  xmm0,xmm0,mmword ptr [r8+rcx*8+10h]
  0.61%    0x000001eaf2e49f73: vmovsd  qword ptr [r8+rcx*8+10h],xmm0
                                                         ;*dastore {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@32 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  0.50%    0x000001eaf2e49f7a: mov     ecx,r9d
  1.78%    0x000001eaf2e49f7d: inc     ecx
  0.37%    0x000001eaf2e49f7f: and     ecx,3ffh          ;*iand {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@22 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  0.17%    0x000001eaf2e49f85: vmovsd  xmm0,qword ptr [r8+rcx*8+10h]
  0.49%    0x000001eaf2e49f8c: vaddsd  xmm0,xmm0,mmword ptr [rdx+r9*8+18h]
  2.51%    0x000001eaf2e49f93: vmovsd  qword ptr [r8+rcx*8+10h],xmm0
                                                         ;*dastore {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@32 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  1.21%    0x000001eaf2e49f9a: mov     ecx,r9d
  0.34%    0x000001eaf2e49f9d: add     ecx,2h
  0.90%    0x000001eaf2e49fa0: and     ecx,3ffh          ;*iand {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@22 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  0.32%    0x000001eaf2e49fa6: vmovsd  xmm0,qword ptr [r8+rcx*8+10h]
  1.28%    0x000001eaf2e49fad: vaddsd  xmm0,xmm0,mmword ptr [rdx+r9*8+20h]
  1.43%    0x000001eaf2e49fb4: vmovsd  qword ptr [r8+rcx*8+10h],xmm0
                                                         ;*dastore {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@32 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  1.38%    0x000001eaf2e49fbb: vmovsd  xmm0,qword ptr [rdx+r9*8+28h]
  0.47%    0x000001eaf2e49fc2: mov     ecx,r9d
  0.28%    0x000001eaf2e49fc5: add     ecx,3h
  0.77%    0x000001eaf2e49fc8: and     ecx,3ffh          ;*iand {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@22 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  1.34%    0x000001eaf2e49fce: vaddsd  xmm0,xmm0,mmword ptr [r8+rcx*8+10h]
  0.93%    0x000001eaf2e49fd5: vmovsd  qword ptr [r8+rcx*8+10h],xmm0
                                                         ;*dastore {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@32 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  1.36%    0x000001eaf2e49fdc: mov     ecx,r9d
  0.75%    0x000001eaf2e49fdf: add     ecx,4h
  0.43%    0x000001eaf2e49fe2: and     ecx,3ffh          ;*iand {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@22 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  0.32%    0x000001eaf2e49fe8: vmovsd  xmm0,qword ptr [r8+rcx*8+10h]
  1.32%    0x000001eaf2e49fef: vaddsd  xmm0,xmm0,mmword ptr [rdx+r9*8+30h]
  1.32%    0x000001eaf2e49ff6: vmovsd  qword ptr [r8+rcx*8+10h],xmm0
                                                         ;*dastore {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@32 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  1.24%    0x000001eaf2e49ffd: mov     ecx,r9d
  0.40%    0x000001eaf2e4a000: add     ecx,5h
  0.79%    0x000001eaf2e4a003: and     ecx,3ffh          ;*iand {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@22 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  0.46%    0x000001eaf2e4a009: vmovsd  xmm0,qword ptr [r8+rcx*8+10h]
  1.22%    0x000001eaf2e4a010: vaddsd  xmm0,xmm0,mmword ptr [rdx+r9*8+38h]
  4.18%    0x000001eaf2e4a017: vmovsd  qword ptr [r8+rcx*8+10h],xmm0
                                                         ;*dastore {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@32 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  1.61%    0x000001eaf2e4a01e: mov     ecx,r9d
  0.24%    0x000001eaf2e4a021: add     ecx,6h
  0.34%    0x000001eaf2e4a024: and     ecx,3ffh          ;*iand {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@22 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  0.69%    0x000001eaf2e4a02a: vmovsd  xmm0,qword ptr [r8+rcx*8+10h]
  1.55%    0x000001eaf2e4a031: vaddsd  xmm0,xmm0,mmword ptr [rdx+r9*8+40h]
  0.95%    0x000001eaf2e4a038: vmovsd  qword ptr [r8+rcx*8+10h],xmm0
                                                         ;*dastore {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@32 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  1.97%    0x000001eaf2e4a03f: mov     ecx,r9d
  0.37%    0x000001eaf2e4a042: add     ecx,7h
  0.20%    0x000001eaf2e4a045: and     ecx,3ffh          ;*iand {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@22 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  0.32%    0x000001eaf2e4a04b: vmovsd  xmm0,qword ptr [r8+rcx*8+10h]
  1.95%    0x000001eaf2e4a052: vaddsd  xmm0,xmm0,mmword ptr [rdx+r9*8+48h]
  0.92%    0x000001eaf2e4a059: vmovsd  qword ptr [r8+rcx*8+10h],xmm0
                                                         ;*dastore {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@32 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  1.95%    0x000001eaf2e4a060: mov     ecx,r9d
  0.42%    0x000001eaf2e4a063: add     ecx,8h
  0.35%    0x000001eaf2e4a066: and     ecx,3ffh          ;*iand {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@22 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  0.20%    0x000001eaf2e4a06c: vmovsd  xmm0,qword ptr [r8+rcx*8+10h]
  1.97%    0x000001eaf2e4a073: vaddsd  xmm0,xmm0,mmword ptr [rdx+r9*8+50h]
  1.22%    0x000001eaf2e4a07a: vmovsd  qword ptr [r8+rcx*8+10h],xmm0
                                                         ;*dastore {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@32 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  1.67%    0x000001eaf2e4a081: mov     ecx,r9d
  0.48%    0x000001eaf2e4a084: add     ecx,9h
  0.39%    0x000001eaf2e4a087: and     ecx,3ffh          ;*iand {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@22 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  0.32%    0x000001eaf2e4a08d: vmovsd  xmm0,qword ptr [r8+rcx*8+10h]
  1.66%    0x000001eaf2e4a094: vaddsd  xmm0,xmm0,mmword ptr [rdx+r9*8+58h]
  1.14%    0x000001eaf2e4a09b: vmovsd  qword ptr [r8+rcx*8+10h],xmm0
                                                         ;*dastore {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@32 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  1.45%    0x000001eaf2e4a0a2: mov     ecx,r9d
  0.65%    0x000001eaf2e4a0a5: add     ecx,0ah
  0.49%    0x000001eaf2e4a0a8: and     ecx,3ffh          ;*iand {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@22 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  0.39%    0x000001eaf2e4a0ae: vmovsd  xmm0,qword ptr [r8+rcx*8+10h]
  1.45%    0x000001eaf2e4a0b5: vaddsd  xmm0,xmm0,mmword ptr [rdx+r9*8+60h]
  1.32%    0x000001eaf2e4a0bc: vmovsd  qword ptr [r8+rcx*8+10h],xmm0
                                                         ;*dastore {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@32 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  1.39%    0x000001eaf2e4a0c3: mov     ecx,r9d
  0.39%    0x000001eaf2e4a0c6: add     ecx,0bh
  0.65%    0x000001eaf2e4a0c9: and     ecx,3ffh          ;*iand {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@22 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  0.53%    0x000001eaf2e4a0cf: vmovsd  xmm0,qword ptr [r8+rcx*8+10h]
  1.37%    0x000001eaf2e4a0d6: vaddsd  xmm0,xmm0,mmword ptr [rdx+r9*8+68h]
  1.22%    0x000001eaf2e4a0dd: vmovsd  qword ptr [r8+rcx*8+10h],xmm0
                                                         ;*dastore {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@32 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  1.46%    0x000001eaf2e4a0e4: mov     ecx,r9d
  0.42%    0x000001eaf2e4a0e7: add     ecx,0ch
  0.40%    0x000001eaf2e4a0ea: and     ecx,3ffh          ;*iand {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@22 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  0.60%    0x000001eaf2e4a0f0: vmovsd  xmm0,qword ptr [r8+rcx*8+10h]
  1.47%    0x000001eaf2e4a0f7: vaddsd  xmm0,xmm0,mmword ptr [rdx+r9*8+70h]
  1.04%    0x000001eaf2e4a0fe: vmovsd  qword ptr [r8+rcx*8+10h],xmm0
                                                         ;*dastore {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@32 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  1.74%    0x000001eaf2e4a105: mov     ecx,r9d
  0.37%    0x000001eaf2e4a108: add     ecx,0dh
  0.43%    0x000001eaf2e4a10b: and     ecx,3ffh          ;*iand {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@22 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  0.36%    0x000001eaf2e4a111: vmovsd  xmm0,qword ptr [r8+rcx*8+10h]
  1.68%    0x000001eaf2e4a118: vaddsd  xmm0,xmm0,mmword ptr [rdx+r9*8+78h]
  2.82%    0x000001eaf2e4a11f: vmovsd  qword ptr [r8+rcx*8+10h],xmm0
                                                         ;*dastore {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@32 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  2.04%    0x000001eaf2e4a126: mov     ecx,r9d
  0.19%    0x000001eaf2e4a129: add     ecx,0eh
  0.29%    0x000001eaf2e4a12c: and     ecx,3ffh          ;*iand {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@22 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)
  0.36%    0x000001eaf2e4a132: vmovsd  xmm0,qword ptr [r8+rcx*8+10h]
  2.02%    0x000001eaf2e4a139: vaddsd  xmm0,xmm0,mmword ptr [rdx+r9*8+80h]
  0.76%    0x000001eaf2e4a143: vmovsd  qword ptr [r8+rcx*8+10h],xmm0
  1.99%    0x000001eaf2e4a14a: vmovsd  xmm0,qword ptr [rdx+r9*8+88h]
  0.39%    0x000001eaf2e4a154: vaddsd  xmm0,xmm0,mmword ptr [r8+rbx*8+10h]
  0.56%    0x000001eaf2e4a15b: vmovsd  qword ptr [r8+rbx*8+10h],xmm0
                                                         ;*dastore {reexecute=0 rethrow=0 return_oop=0}
                                                         ; - com.openkappa.simd.reduction.ReduceArray::reduceBuffered@32 (line 36)
                                                         ; - com.openkappa.simd.reduction.generated.ReduceArray_reduceBuffered_jmhTest::reduceBuffered_thrpt_jmhStub@17 (line 119)

Using the same idea, but employing a trick I’ve used before for matrix multiplication, the code gets a lot faster!

  @Benchmark
  public double reduceVectorised() {
    double[] buffer = new double[1024];
    double[] temp = new double[1024];
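    // assumes data.length is a multiple of 1024, as in the benchmark sizes;
    // copying each block into temp means the inner loop adds two local arrays,
    // which, I assume, is what frees C2 to vectorise it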
    for (int i = 0; i < data.length >>> 10; ++i) {
      System.arraycopy(data, i * 1024, temp, 0,  temp.length);
      for (int j = 0; j < 1024; ++j) {
        buffer[j] += temp[j];
      }
    }
    return reduce(buffer);
  }

This generates a vectorised main loop:


  0.05%    0x000001e3fc0d71e0: vmovdqu ymm0,ymmword ptr [r9+r10*8+10h]
  0.16%    0x000001e3fc0d71e7: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+10h]
  1.26%    0x000001e3fc0d71ee: vmovdqu ymmword ptr [r13+r10*8+10h],ymm0
  0.39%    0x000001e3fc0d71f5: vmovdqu ymm0,ymmword ptr [r9+r10*8+30h]
  0.18%    0x000001e3fc0d71fc: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+30h]
  0.93%    0x000001e3fc0d7203: vmovdqu ymmword ptr [r13+r10*8+30h],ymm0
  0.68%    0x000001e3fc0d720a: vmovdqu ymm0,ymmword ptr [r9+r10*8+50h]
  0.17%    0x000001e3fc0d7211: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+50h]
  0.86%    0x000001e3fc0d7218: vmovdqu ymmword ptr [r13+r10*8+50h],ymm0
  0.73%    0x000001e3fc0d721f: vmovdqu ymm0,ymmword ptr [r9+r10*8+70h]
  0.19%    0x000001e3fc0d7226: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+70h]
  0.86%    0x000001e3fc0d722d: vmovdqu ymmword ptr [r13+r10*8+70h],ymm0
  0.78%    0x000001e3fc0d7234: vmovdqu ymm0,ymmword ptr [r9+r10*8+90h]
  0.17%    0x000001e3fc0d723e: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+90h]
  0.75%    0x000001e3fc0d7248: vmovdqu ymmword ptr [r13+r10*8+90h],ymm0
  0.84%    0x000001e3fc0d7252: vmovdqu ymm0,ymmword ptr [r9+r10*8+0b0h]
  0.15%    0x000001e3fc0d725c: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+0b0h]
  0.64%    0x000001e3fc0d7266: vmovdqu ymmword ptr [r13+r10*8+0b0h],ymm0
  0.92%    0x000001e3fc0d7270: vmovdqu ymm0,ymmword ptr [r9+r10*8+0d0h]
  0.15%    0x000001e3fc0d727a: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+0d0h]
  0.71%    0x000001e3fc0d7284: vmovdqu ymmword ptr [r13+r10*8+0d0h],ymm0
  0.91%    0x000001e3fc0d728e: vmovdqu ymm0,ymmword ptr [r9+r10*8+0f0h]
  0.15%    0x000001e3fc0d7298: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+0f0h]
  0.74%    0x000001e3fc0d72a2: vmovdqu ymmword ptr [r13+r10*8+0f0h],ymm0
  0.96%    0x000001e3fc0d72ac: vmovdqu ymm0,ymmword ptr [r9+r10*8+110h]
  0.12%    0x000001e3fc0d72b6: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+110h]
  0.70%    0x000001e3fc0d72c0: vmovdqu ymmword ptr [r13+r10*8+110h],ymm0
  0.99%    0x000001e3fc0d72ca: vmovdqu ymm0,ymmword ptr [r9+r10*8+130h]
  0.13%    0x000001e3fc0d72d4: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+130h]
  0.71%    0x000001e3fc0d72de: vmovdqu ymmword ptr [r13+r10*8+130h],ymm0
  0.94%    0x000001e3fc0d72e8: vmovdqu ymm0,ymmword ptr [r9+r10*8+150h]
  0.12%    0x000001e3fc0d72f2: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+150h]
  0.70%    0x000001e3fc0d72fc: vmovdqu ymmword ptr [r13+r10*8+150h],ymm0
  1.01%    0x000001e3fc0d7306: vmovdqu ymm0,ymmword ptr [r9+r10*8+170h]
  0.14%    0x000001e3fc0d7310: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+170h]
  0.75%    0x000001e3fc0d731a: vmovdqu ymmword ptr [r13+r10*8+170h],ymm0
  1.00%    0x000001e3fc0d7324: vmovdqu ymm0,ymmword ptr [r9+r10*8+190h]
  0.13%    0x000001e3fc0d732e: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+190h]
  0.67%    0x000001e3fc0d7338: vmovdqu ymmword ptr [r13+r10*8+190h],ymm0
  0.97%    0x000001e3fc0d7342: vmovdqu ymm0,ymmword ptr [r9+r10*8+1b0h]
  0.14%    0x000001e3fc0d734c: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+1b0h]
  0.72%    0x000001e3fc0d7356: vmovdqu ymmword ptr [r13+r10*8+1b0h],ymm0
  0.99%    0x000001e3fc0d7360: vmovdqu ymm0,ymmword ptr [r9+r10*8+1d0h]
  0.12%    0x000001e3fc0d736a: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+1d0h]
  0.69%    0x000001e3fc0d7374: vmovdqu ymmword ptr [r13+r10*8+1d0h],ymm0
  0.96%    0x000001e3fc0d737e: vmovdqu ymm0,ymmword ptr [r9+r10*8+1f0h]
  0.14%    0x000001e3fc0d7388: vaddpd  ymm0,ymm0,ymmword ptr [r13+r10*8+1f0h]
  0.70%    0x000001e3fc0d7392: vmovdqu ymmword ptr [r13+r10*8+1f0h],ymm0

This implementation is more than 3x faster than the original code, and that includes the fuss of the buffers and copies. Wouldn’t it be nice to be able to opt in to fast floating point semantics somehow? Here are the final results (as usual, this is a throughput benchmark and higher is better):

Benchmark         Mode   Threads  Samples  Score       Score Error (99.9%)  Unit    Param: size
reduceBuffered    thrpt  1        10       329.935462  6.295871             ops/ms  1024
reduceBuffered    thrpt  1        10       18.380467   0.455724             ops/ms  65536
reduceBuffered    thrpt  1        10       9.257122    0.402876             ops/ms  131072
reduceSimple      thrpt  1        10       654.809021  5.812795             ops/ms  1024
reduceSimple      thrpt  1        10       9.694730    0.325011             ops/ms  65536
reduceSimple      thrpt  1        10       4.772520    0.265691             ops/ms  131072
reduceVectorised  thrpt  1        10       287.712794  27.492846            ops/ms  1024
reduceVectorised  thrpt  1        10       34.454235   1.293985             ops/ms  65536
reduceVectorised  thrpt  1        10       17.867701   0.813367             ops/ms  131072

The benchmark is on github.

Sum of Squares

Streams and lambdas, even with the limited support offered for primitive types, are a fantastic addition to the Java language. They’re not supposed to be fast, but how do these features compare to a good old for loop? For a simple calculation amenable to instruction level parallelism, I compare modern and traditional implementations and observe the differences in instructions generated.

Sum of Squares

The sum of squares is a building block of linear regression analysis, so it is ubiquitous in statistical computing. The reduction is associative (mathematically, if not in floating point) and therefore data-parallel. I compare four implementations: a sequential stream wrapping an array, a parallel stream wrapping an array, a generative sequential stream and a traditional for loop. The benchmark code is on github.


  @Param({"1024", "8192"})
  int size;

  private double[] data;

  @Setup(Level.Iteration)
  public void init() {
    this.data = createDoubleArray(size);
  }

  @Benchmark
  public double SS_SequentialStream() {
    return DoubleStream.of(data)
            .map(x -> x * x)
            .reduce((x, y) -> x + y)
            .orElse(0D);
  }

  @Benchmark
  public double SS_ParallelStream() {
    return DoubleStream.of(data)
            .parallel()
            .map(x -> x * x)
            .reduce((x, y) -> x + y)
            .orElse(0);
  }

  @Benchmark
  public double SS_ForLoop() {
    double result = 0D;
    for (int i = 0; i < data.length; ++i) {
      result += data[i] * data[i];
    }
    return result;
  }

  @Benchmark
  public double SS_GenerativeSequentialStream() {
    return IntStream.iterate(0, i -> i < size, i -> i + 1)
            .mapToDouble(i -> data[i])
            .map(x -> x * x)
            .reduce((x, y) -> x + y)
            .orElse(0);
  }

I must admit I prefer the readability of the stream versions, but let’s see if there is a comedown after the syntactic sugar rush.

Running a Benchmark

I compare the four implementations on arrays of 1024 and 8192 doubles. I am using JDK 9.0.1, VM 9.0.1+11 on a fairly powerful laptop with 8 logical processors:

$ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 94
model name      : Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
stepping        : 3
cpu MHz         : 2592.000
cache size      : 256 KB
physical id     : 0
siblings        : 8
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 22
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe pni dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt aes xsave osxsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:

Before running the benchmark we might expect the for loop and the sequential stream to have similar performance, and the parallel version to be about eight times faster (though remember that the arrays aren’t very big). The generative version is very similar to the for loop, so a slowdown might not be expected.

Benchmark                      Mode   Threads  Samples  Score          Score Error (99.9%)  Unit   Param: size
SS_ForLoop                     thrpt  1        10       258351.774491  39797.567968         ops/s  1024
SS_ForLoop                     thrpt  1        10       29463.408428   4814.826388          ops/s  8192
SS_GenerativeSequentialStream  thrpt  1        10       219699.607567  9095.569546          ops/s  1024
SS_GenerativeSequentialStream  thrpt  1        10       28351.900454   828.513989           ops/s  8192
SS_ParallelStream              thrpt  1        10       22827.821827   2826.577213          ops/s  1024
SS_ParallelStream              thrpt  1        10       23230.623610   273.415352           ops/s  8192
SS_SequentialStream            thrpt  1        10       225431.985145  9051.538442          ops/s  1024
SS_SequentialStream            thrpt  1        10       29123.734157   1333.721437          ops/s  8192

The for loop and the sequential stream are similar. The parallel version is a long way behind (yes, that’s right: more threads, less power), but exhibits roughly constant throughput across sizes (incidentally, a measurement like this is a good way to guess the minimum unit of work in a parallelised implementation), so it could become profitable for much larger arrays. The generative stream is surprisingly good, almost as good as the version that wraps the array, though there is a sure-fire way to slow it down: add a limit clause to the method chain (try it…).
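
To be concrete, the variant I mean just bolts limit onto the chain; the benchmark name below is mine, and the effect is worth measuring rather than taking on trust:

  @Benchmark
  public double SS_GenerativeSequentialStream_Limited() {
    return IntStream.iterate(0, i -> i < size, i -> i + 1)
            .limit(size)           // the added clause; try benchmarking with and without it
            .mapToDouble(i -> data[i])
            .map(x -> x * x)
            .reduce((x, y) -> x + y)
            .orElse(0);
  }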

Profiling with perfasm, it is clear that the for loop body is being vectorised, but only the loads and multiplications are done in parallel: the complicated string of scalar additions and shuffles at the end is the reduction, which must be done in order.

<-- unrolled load -->
  0.01%    0x00000243d8969170: vmovdqu ymm1,ymmword ptr [r11+r8*8+0f0h]
  0.07%    0x00000243d896917a: vmovdqu ymm2,ymmword ptr [r11+r8*8+0d0h]
  0.75%    0x00000243d8969184: vmovdqu ymm3,ymmword ptr [r11+r8*8+0b0h]
  0.01%    0x00000243d896918e: vmovdqu ymm4,ymmword ptr [r11+r8*8+90h]
  0.02%    0x00000243d8969198: vmovdqu ymm5,ymmword ptr [r11+r8*8+70h]
  0.03%    0x00000243d896919f: vmovdqu ymm6,ymmword ptr [r11+r8*8+50h]
  0.77%    0x00000243d89691a6: vmovdqu ymm10,ymmword ptr [r11+r8*8+30h]
  0.02%    0x00000243d89691ad: vmovdqu ymm7,ymmword ptr [r11+r8*8+10h]
<-- multiplication starts -->
  0.01%    0x00000243d89691b4: vmulpd  ymm1,ymm1,ymm1
  0.02%    0x00000243d89691b8: vmovdqu ymmword ptr [rsp+28h],ymm1
  0.76%    0x00000243d89691be: vmulpd  ymm15,ymm7,ymm7
  0.00%    0x00000243d89691c2: vmulpd  ymm12,ymm2,ymm2
  0.01%    0x00000243d89691c6: vmulpd  ymm7,ymm3,ymm3
  0.02%    0x00000243d89691ca: vmulpd  ymm8,ymm4,ymm4
  0.72%    0x00000243d89691ce: vmulpd  ymm9,ymm5,ymm5
  0.00%    0x00000243d89691d2: vmulpd  ymm11,ymm6,ymm6
  0.01%    0x00000243d89691d6: vmulpd  ymm13,ymm10,ymm10
<-- multiplication ends here, scalar reduction starts -->
  0.03%    0x00000243d89691db: vaddsd  xmm0,xmm0,xmm15
  0.72%    0x00000243d89691e0: vpshufd xmm5,xmm15,0eh
  0.01%    0x00000243d89691e6: vaddsd  xmm0,xmm0,xmm5
  2.14%    0x00000243d89691ea: vextractf128 xmm6,ymm15,1h
  0.03%    0x00000243d89691f0: vaddsd  xmm0,xmm0,xmm6
  3.21%    0x00000243d89691f4: vpshufd xmm5,xmm6,0eh
  0.02%    0x00000243d89691f9: vaddsd  xmm0,xmm0,xmm5
  2.81%    0x00000243d89691fd: vaddsd  xmm0,xmm0,xmm13
  2.82%    0x00000243d8969202: vpshufd xmm5,xmm13,0eh
  0.03%    0x00000243d8969208: vaddsd  xmm0,xmm0,xmm5
  2.87%    0x00000243d896920c: vextractf128 xmm6,ymm13,1h
  0.01%    0x00000243d8969212: vaddsd  xmm0,xmm0,xmm6
  3.03%    0x00000243d8969216: vpshufd xmm5,xmm6,0eh
  0.03%    0x00000243d896921b: vaddsd  xmm0,xmm0,xmm5
  2.94%    0x00000243d896921f: vaddsd  xmm0,xmm0,xmm11
  2.70%    0x00000243d8969224: vpshufd xmm5,xmm11,0eh
  0.03%    0x00000243d896922a: vaddsd  xmm0,xmm0,xmm5
  2.98%    0x00000243d896922e: vextractf128 xmm6,ymm11,1h
  0.01%    0x00000243d8969234: vaddsd  xmm0,xmm0,xmm6
  3.11%    0x00000243d8969238: vpshufd xmm5,xmm6,0eh
  0.03%    0x00000243d896923d: vaddsd  xmm0,xmm0,xmm5
  2.95%    0x00000243d8969241: vaddsd  xmm0,xmm0,xmm9
  2.61%    0x00000243d8969246: vpshufd xmm5,xmm9,0eh
  0.02%    0x00000243d896924c: vaddsd  xmm0,xmm0,xmm5
  2.89%    0x00000243d8969250: vextractf128 xmm6,ymm9,1h
  0.04%    0x00000243d8969256: vaddsd  xmm0,xmm0,xmm6
  3.13%    0x00000243d896925a: vpshufd xmm5,xmm6,0eh
  0.01%    0x00000243d896925f: vaddsd  xmm0,xmm0,xmm5
  2.96%    0x00000243d8969263: vaddsd  xmm0,xmm0,xmm8
  2.83%    0x00000243d8969268: vpshufd xmm4,xmm8,0eh
  0.01%    0x00000243d896926e: vaddsd  xmm0,xmm0,xmm4
  3.00%    0x00000243d8969272: vextractf128 xmm10,ymm8,1h
  0.02%    0x00000243d8969278: vaddsd  xmm0,xmm0,xmm10
  3.13%    0x00000243d896927d: vpshufd xmm4,xmm10,0eh
  0.01%    0x00000243d8969283: vaddsd  xmm0,xmm0,xmm4
  3.01%    0x00000243d8969287: vaddsd  xmm0,xmm0,xmm7
  2.95%    0x00000243d896928b: vpshufd xmm1,xmm7,0eh
  0.02%    0x00000243d8969290: vaddsd  xmm0,xmm0,xmm1
  3.06%    0x00000243d8969294: vextractf128 xmm2,ymm7,1h
  0.01%    0x00000243d896929a: vaddsd  xmm0,xmm0,xmm2
  3.07%    0x00000243d896929e: vpshufd xmm1,xmm2,0eh
  0.02%    0x00000243d89692a3: vaddsd  xmm0,xmm0,xmm1
  3.07%    0x00000243d89692a7: vaddsd  xmm0,xmm0,xmm12
  2.92%    0x00000243d89692ac: vpshufd xmm3,xmm12,0eh
  0.02%    0x00000243d89692b2: vaddsd  xmm0,xmm0,xmm3
  3.11%    0x00000243d89692b6: vextractf128 xmm1,ymm12,1h
  0.01%    0x00000243d89692bc: vaddsd  xmm0,xmm0,xmm1
  3.02%    0x00000243d89692c0: vpshufd xmm3,xmm1,0eh
  0.02%    0x00000243d89692c5: vaddsd  xmm0,xmm0,xmm3
  2.97%    0x00000243d89692c9: vmovdqu ymm1,ymmword ptr [rsp+28h]
  0.02%    0x00000243d89692cf: vaddsd  xmm0,xmm0,xmm1
  3.05%    0x00000243d89692d3: vpshufd xmm2,xmm1,0eh
  0.03%    0x00000243d89692d8: vaddsd  xmm0,xmm0,xmm2
  2.97%    0x00000243d89692dc: vextractf128 xmm14,ymm1,1h
  0.01%    0x00000243d89692e2: vaddsd  xmm0,xmm0,xmm14
  2.99%    0x00000243d89692e7: vpshufd xmm2,xmm14,0eh
  0.02%    0x00000243d89692ed: vaddsd  xmm0,xmm0,xmm2 

The sequential stream code is not as good – it is scalar – but the difference in performance is not as stark as it might be because of the inefficient scalar reduction in the for loop: this is JLS floating point semantics twisting C2’s arm behind its back.

  0.00%    0x0000021a1df54c24: vmovsd  xmm0,qword ptr [rbx+r9*8+48h]
  0.00%    0x0000021a1df54c2b: vmovsd  xmm2,qword ptr [rbx+r9*8+18h]
  0.02%    0x0000021a1df54c32: vmovsd  xmm3,qword ptr [rbx+r9*8+40h]
  2.93%    0x0000021a1df54c39: vmovsd  xmm4,qword ptr [rbx+r9*8+38h]
  0.00%    0x0000021a1df54c40: vmovsd  xmm5,qword ptr [rbx+r9*8+30h]
  0.01%    0x0000021a1df54c47: vmovsd  xmm6,qword ptr [rbx+r9*8+28h]
  0.02%    0x0000021a1df54c4e: vmovsd  xmm7,qword ptr [rbx+r9*8+20h]
  2.99%    0x0000021a1df54c55: vmulsd  xmm8,xmm0,xmm0
  0.00%    0x0000021a1df54c59: vmulsd  xmm0,xmm7,xmm7
           0x0000021a1df54c5d: vmulsd  xmm6,xmm6,xmm6
  0.01%    0x0000021a1df54c61: vmulsd  xmm5,xmm5,xmm5
  2.91%    0x0000021a1df54c65: vmulsd  xmm4,xmm4,xmm4
  0.00%    0x0000021a1df54c69: vmulsd  xmm3,xmm3,xmm3
  0.00%    0x0000021a1df54c6d: vmulsd  xmm2,xmm2,xmm2
  0.02%    0x0000021a1df54c71: vaddsd  xmm1,xmm2,xmm1
  6.10%    0x0000021a1df54c75: vaddsd  xmm0,xmm0,xmm1
  5.97%    0x0000021a1df54c79: vaddsd  xmm0,xmm6,xmm0
 16.22%    0x0000021a1df54c7d: vaddsd  xmm0,xmm5,xmm0
  7.86%    0x0000021a1df54c81: vaddsd  xmm0,xmm4,xmm0
 11.16%    0x0000021a1df54c85: vaddsd  xmm1,xmm3,xmm0
 11.90%    0x0000021a1df54c89: vaddsd  xmm0,xmm8,xmm1

The same code can be seen in SS_ParallelStream. SS_GenerativeSequentialStream is much more interesting because it hasn’t been unrolled – see the interleaved control statements. It is also not vectorised.

           0x0000013c1a639c17: vmovsd  xmm0,qword ptr [rbp+r9*8+10h]
  0.01%    0x0000013c1a639c1e: vmulsd  xmm2,xmm0,xmm0    
  0.01%    0x0000013c1a639c22: test    r8d,r8d
           0x0000013c1a639c25: jne     13c1a639e09h   
           0x0000013c1a639c2b: mov     r10d,dword ptr [r12+rax*8+8h]
           0x0000013c1a639c30: cmp     r10d,0f8022d85h 
           0x0000013c1a639c37: jne     13c1a639e3bh     
  0.01%    0x0000013c1a639c3d: vaddsd  xmm2,xmm1,xmm2
  0.01%    0x0000013c1a639c41: vmovsd  qword ptr [rdi+10h],xmm2
  0.00%    0x0000013c1a639c46: movsxd  r10,r9d
           0x0000013c1a639c49: vmovsd  xmm0,qword ptr [rbp+r10*8+18h]
  0.01%    0x0000013c1a639c50: vmulsd  xmm0,xmm0,xmm0
  0.01%    0x0000013c1a639c54: mov     r10d,dword ptr [r12+rax*8+8h]
  0.00%    0x0000013c1a639c59: cmp     r10d,0f8022d85h
           0x0000013c1a639c60: jne     13c1a639e30h
           0x0000013c1a639c66: vaddsd  xmm0,xmm0,xmm2
  0.02%    0x0000013c1a639c6a: vmovsd  qword ptr [rdi+10h],xmm0
  0.02%    0x0000013c1a639c6f: mov     r10d,r9d
           0x0000013c1a639c72: add     r10d,2h 
           0x0000013c1a639c76: cmp     r10d,r11d
           0x0000013c1a639c79: jnl     13c1a639d96h 
  0.01%    0x0000013c1a639c7f: add     r9d,4h 
  0.02%    0x0000013c1a639c83: vmovsd  xmm1,qword ptr [rbp+r10*8+10h]
  0.00%    0x0000013c1a639c8a: movzx   r8d,byte ptr [rdi+0ch]
  0.00%    0x0000013c1a639c8f: vmulsd  xmm1,xmm1,xmm1
  0.01%    0x0000013c1a639c93: test    r8d,r8d
           0x0000013c1a639c96: jne     13c1a639dfbh
  0.01%    0x0000013c1a639c9c: vaddsd  xmm1,xmm0,xmm1
  0.01%    0x0000013c1a639ca0: vmovsd  qword ptr [rdi+10h],xmm1
  0.02%    0x0000013c1a639ca5: movsxd  r8,r10d
  0.00%    0x0000013c1a639ca8: vmovsd  xmm0,qword ptr [rbp+r8*8+18h]
           0x0000013c1a639caf: vmulsd  xmm0,xmm0,xmm0
           0x0000013c1a639cb3: vaddsd  xmm0,xmm0,xmm1
  0.06%    0x0000013c1a639cb7: vmovsd  qword ptr [rdi+10h],xmm0

So it looks like streams don’t vectorise like good old for loops, and you won’t gain from parallel streams unless you have humongous arrays (which you might be avoiding for other reasons). This was actually quite a favourable case for streams, because optimal code can’t be generated for floating point reductions anyway. What happens with the sum of squares for ints?
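
Generating the data in an unsurprising way means something along these lines (a sketch using java.util.concurrent.ThreadLocalRandom; the real helper may differ):

  private int[] intData;

  @Setup(Level.Iteration)
  public void initInts() {
    // size uniformly random ints; the distribution shouldn't matter here
    this.intData = ThreadLocalRandom.current().ints(size).toArray();
  }

The benchmarks are the obvious translations of the double versions: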


  @Benchmark
  public int SS_SequentialStream_Int() {
    return IntStream.of(intData)
            .map(x -> x * x)
            .reduce((x, y) -> x + y)
            .orElse(0);
  }

  @Benchmark
  public int SS_ParallelStream_Int() {
    return IntStream.of(intData)
            .parallel()
            .map(x -> x * x)
            .reduce((x, y) -> x + y)
            .orElse(0);
  }

  @Benchmark
  public int SS_ForLoop_Int() {
    int result = 0;
    for (int i = 0; i < intData.length; ++i) {
      result += intData[i] * intData[i];
    }
    return result;
  }

  @Benchmark
  public int SS_GenerativeSequentialStream_Int() {
    return IntStream.iterate(0, i -> i < size, i -> i + 1)
            .map(i -> intData[i])
            .map(x -> x * x)
            .reduce((x, y) -> x + y)
            .orElse(0);
  }

The landscape has completely changed, thanks to the exploitation of associative arithmetic and the VPHADDD instruction, which simplifies the reduction in the for loop.

<-- load -->
  0.00%    0x000001f5cdd8cd30: vmovdqu ymm0,ymmword ptr [rdi+r10*4+0f0h]
  1.93%    0x000001f5cdd8cd3a: vmovdqu ymm1,ymmword ptr [rdi+r10*4+0d0h]
  0.10%    0x000001f5cdd8cd44: vmovdqu ymm2,ymmword ptr [rdi+r10*4+0b0h]
  0.07%    0x000001f5cdd8cd4e: vmovdqu ymm3,ymmword ptr [rdi+r10*4+90h]
  0.05%    0x000001f5cdd8cd58: vmovdqu ymm4,ymmword ptr [rdi+r10*4+70h]
  1.75%    0x000001f5cdd8cd5f: vmovdqu ymm5,ymmword ptr [rdi+r10*4+50h]
  0.08%    0x000001f5cdd8cd66: vmovdqu ymm6,ymmword ptr [rdi+r10*4+30h]
  0.07%    0x000001f5cdd8cd6d: vmovdqu ymm7,ymmword ptr [rdi+r10*4+10h]
<-- multiply -->
  0.01%    0x000001f5cdd8cd74: vpmulld ymm0,ymm0,ymm0
  1.81%    0x000001f5cdd8cd79: vmovdqu ymmword ptr [rsp+28h],ymm0
  0.02%    0x000001f5cdd8cd7f: vpmulld ymm15,ymm7,ymm7
  1.79%    0x000001f5cdd8cd84: vpmulld ymm11,ymm1,ymm1
  0.06%    0x000001f5cdd8cd89: vpmulld ymm8,ymm2,ymm2
  1.82%    0x000001f5cdd8cd8e: vpmulld ymm9,ymm3,ymm3
  0.06%    0x000001f5cdd8cd93: vpmulld ymm10,ymm4,ymm4
  1.79%    0x000001f5cdd8cd98: vpmulld ymm12,ymm5,ymm5
  0.08%    0x000001f5cdd8cd9d: vpmulld ymm6,ymm6,ymm6
<-- vectorised reduce -->
  1.83%    0x000001f5cdd8cda2: vphaddd ymm4,ymm15,ymm15
  0.04%    0x000001f5cdd8cda7: vphaddd ymm4,ymm4,ymm7
  1.85%    0x000001f5cdd8cdac: vextracti128 xmm7,ymm4,1h
  0.07%    0x000001f5cdd8cdb2: vpaddd  xmm4,xmm4,xmm7
  1.78%    0x000001f5cdd8cdb6: vmovd   xmm7,r8d
  0.01%    0x000001f5cdd8cdbb: vpaddd  xmm7,xmm7,xmm4
  0.11%    0x000001f5cdd8cdbf: vmovd   r11d,xmm7
  0.05%    0x000001f5cdd8cdc4: vphaddd ymm4,ymm6,ymm6
  1.84%    0x000001f5cdd8cdc9: vphaddd ymm4,ymm4,ymm7
  5.43%    0x000001f5cdd8cdce: vextracti128 xmm7,ymm4,1h
  0.13%    0x000001f5cdd8cdd4: vpaddd  xmm4,xmm4,xmm7
  4.34%    0x000001f5cdd8cdd8: vmovd   xmm7,r11d
  0.36%    0x000001f5cdd8cddd: vpaddd  xmm7,xmm7,xmm4
  1.40%    0x000001f5cdd8cde1: vmovd   r8d,xmm7
  0.01%    0x000001f5cdd8cde6: vphaddd ymm6,ymm12,ymm12
  2.89%    0x000001f5cdd8cdeb: vphaddd ymm6,ymm6,ymm4
  3.25%    0x000001f5cdd8cdf0: vextracti128 xmm4,ymm6,1h
  0.87%    0x000001f5cdd8cdf6: vpaddd  xmm6,xmm6,xmm4
  6.36%    0x000001f5cdd8cdfa: vmovd   xmm4,r8d
  0.01%    0x000001f5cdd8cdff: vpaddd  xmm4,xmm4,xmm6
  1.69%    0x000001f5cdd8ce03: vmovd   r8d,xmm4
  0.03%    0x000001f5cdd8ce08: vphaddd ymm4,ymm10,ymm10
  1.83%    0x000001f5cdd8ce0d: vphaddd ymm4,ymm4,ymm7
  0.10%    0x000001f5cdd8ce12: vextracti128 xmm7,ymm4,1h
  3.29%    0x000001f5cdd8ce18: vpaddd  xmm4,xmm4,xmm7
  0.72%    0x000001f5cdd8ce1c: vmovd   xmm7,r8d
  0.23%    0x000001f5cdd8ce21: vpaddd  xmm7,xmm7,xmm4
  4.42%    0x000001f5cdd8ce25: vmovd   r11d,xmm7
  0.12%    0x000001f5cdd8ce2a: vphaddd ymm5,ymm9,ymm9
  1.69%    0x000001f5cdd8ce2f: vphaddd ymm5,ymm5,ymm1
  0.12%    0x000001f5cdd8ce34: vextracti128 xmm1,ymm5,1h
  3.28%    0x000001f5cdd8ce3a: vpaddd  xmm5,xmm5,xmm1
  0.22%    0x000001f5cdd8ce3e: vmovd   xmm1,r11d
  0.14%    0x000001f5cdd8ce43: vpaddd  xmm1,xmm1,xmm5
  3.81%    0x000001f5cdd8ce47: vmovd   r11d,xmm1
  0.22%    0x000001f5cdd8ce4c: vphaddd ymm0,ymm8,ymm8
  1.58%    0x000001f5cdd8ce51: vphaddd ymm0,ymm0,ymm3
  0.22%    0x000001f5cdd8ce56: vextracti128 xmm3,ymm0,1h
  2.82%    0x000001f5cdd8ce5c: vpaddd  xmm0,xmm0,xmm3
  0.36%    0x000001f5cdd8ce60: vmovd   xmm3,r11d
  0.20%    0x000001f5cdd8ce65: vpaddd  xmm3,xmm3,xmm0
  4.55%    0x000001f5cdd8ce69: vmovd   r8d,xmm3
  0.10%    0x000001f5cdd8ce6e: vphaddd ymm2,ymm11,ymm11
  1.71%    0x000001f5cdd8ce73: vphaddd ymm2,ymm2,ymm1
  0.09%    0x000001f5cdd8ce78: vextracti128 xmm1,ymm2,1h
  2.91%    0x000001f5cdd8ce7e: vpaddd  xmm2,xmm2,xmm1
  1.57%    0x000001f5cdd8ce82: vmovd   xmm1,r8d
  0.05%    0x000001f5cdd8ce87: vpaddd  xmm1,xmm1,xmm2
  4.84%    0x000001f5cdd8ce8b: vmovd   r11d,xmm1
  0.06%    0x000001f5cdd8ce90: vmovdqu ymm0,ymmword ptr [rsp+28h]
  0.03%    0x000001f5cdd8ce96: vphaddd ymm13,ymm0,ymm0
  1.83%    0x000001f5cdd8ce9b: vphaddd ymm13,ymm13,ymm14
  2.16%    0x000001f5cdd8cea0: vextracti128 xmm14,ymm13,1h
  0.14%    0x000001f5cdd8cea6: vpaddd  xmm13,xmm13,xmm14
  0.09%    0x000001f5cdd8ceab: vmovd   xmm14,r11d
  0.51%    0x000001f5cdd8ceb0: vpaddd  xmm14,xmm14,xmm13
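
In Java terms, the freedom C2 gains from associative integer arithmetic amounts to keeping independent partial sums and combining them at the end, roughly like this sketch (mine, and only an analogy: the generated code uses SIMD lanes rather than locals):

  static int sumOfSquares(int[] xs) {
    // four independent accumulators stand in for vector lanes; reassociating
    // like this is legal for ints but would change the answer for doubles
    int s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    int i = 0;
    for (; i + 3 < xs.length; i += 4) {
      s0 += xs[i] * xs[i];
      s1 += xs[i + 1] * xs[i + 1];
      s2 += xs[i + 2] * xs[i + 2];
      s3 += xs[i + 3] * xs[i + 3];
    }
    int sum = s0 + s1 + s2 + s3;
    for (; i < xs.length; ++i) { // scalar tail
      sum += xs[i] * xs[i];
    }
    return sum;
  }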

If you’re the guy replacing all the for loops with streams because it’s 2018, you may be committing performance vandalism! That nice declarative API (as opposed to language feature) is at arm’s length from the compiler, and it really isn’t well optimised yet.

Benchmark                          Mode   Threads  Samples  Score           Score Error (99.9%)  Unit   Param: size
SS_ForLoop_Int                     thrpt  1        10       1021725.018981  74264.883362         ops/s  1024
SS_ForLoop_Int                     thrpt  1        10       129250.855026   5764.608094          ops/s  8192
SS_GenerativeSequentialStream_Int  thrpt  1        10       55069.227826    1111.903102          ops/s  1024
SS_GenerativeSequentialStream_Int  thrpt  1        10       6769.176830     684.970867           ops/s  8192
SS_ParallelStream_Int              thrpt  1        10       20970.387258    719.846643           ops/s  1024
SS_ParallelStream_Int              thrpt  1        10       19621.397202    1514.374286          ops/s  8192
SS_SequentialStream_Int            thrpt  1        10       586847.001223   22390.512706         ops/s  1024
SS_SequentialStream_Int            thrpt  1        10       87620.959677    3437.083075          ops/s  8192

Parallel streams might not be the best thing to reach for.