# How much Algebra does C2 Know? Part 2: Distributivity

In part one of this series of posts, I looked at how important associativity and independence are for fast loops. C2 seems to utilise these properties to generate unrolled and pipelined machine code for loops, achieving higher throughput even in cases where the kernel of the loop is 3x slower according to vendor advertised instruction throughputs. C2 has a weird and wonderful relationship with distributivity, and hints from the programmer can both help and hinder the generation of good quality machine code.

### Viability and Correctness

Distributivity is the simple notion of factoring out brackets. Is this, in general, a viable loop rewrite strategy? It can be used to transform the method `Scale` into `FactoredScale`, both of which perform floating point arithmetic:

``````
@CompilerControl(CompilerControl.Mode.DONT_INLINE)
@Benchmark
public double Scale(DoubleData state) {
    double value = 0D;
    double[] data = state.data1;
    for (int i = 0; i < data.length; ++i) {
        value += 3.14159 * data[i];
    }
    return value;
}

@CompilerControl(CompilerControl.Mode.DONT_INLINE)
@Benchmark
public double FactoredScale(DoubleData state) {
    double value = 0D;
    double[] data = state.data1;
    for (int i = 0; i < data.length; ++i) {
        value += data[i];
    }
    return 3.14159 * value;
}
``````

Running the project at github with the argument `--include .*scale.*` suggests there may be a performance gain to be had from this rewrite, but it isn’t clear cut:

| Benchmark | Mode | Threads | Samples | Score | Score Error (99.9%) | Unit | Param: size |
|---|---|---|---|---|---|---|---|
| FactoredScale | thrpt | 1 | 10 | 7.011606 | 0.274742 | ops/ms | 100000 |
| FactoredScale | thrpt | 1 | 10 | 0.621515 | 0.026853 | ops/ms | 1000000 |
| Scale | thrpt | 1 | 10 | 6.962434 | 0.240180 | ops/ms | 100000 |
| Scale | thrpt | 1 | 10 | 0.671042 | 0.011686 | ops/ms | 1000000 |

Over the real numbers the rewrite would be completely valid, but floating point arithmetic is not associative. Joseph Darcy explains why in this deep dive on floating point semantics. Broken associativity of addition entails broken distributivity of any operation over it, so the two loops are not equivalent, and they give different outputs (e.g. 15662.513298516365 vs 15662.51329851632 for one sample input). The rewrite isn’t correct even for floating point data, so it isn’t an optimisation that could be applied in good faith, except in a very small number of cases. You have to rewrite the loop yourself and figure out whether the small but inevitable differences are acceptable.
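The broken associativity is easy to demonstrate in isolation. In this snippet (illustrative values of my own choosing, not taken from the benchmark) the same three doubles are summed in two orders, with different results, because 1.0 is smaller than the spacing between adjacent doubles near 1e16:

```java
public class FloatingPointAssociativity {
    public static void main(String[] args) {
        double a = 1e16;
        double b = -1e16;
        double c = 1.0;
        // summing left to right cancels a and b first, keeping c
        System.out.println((a + b) + c); // 1.0
        // adding c to b first loses it: the spacing between
        // adjacent doubles near 1e16 is 2.0, so -1e16 + 1.0
        // rounds back to -1e16
        System.out.println(a + (b + c)); // 0.0
    }
}
```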

### Counterintuitive Performance

Integer multiplication is distributive over addition, and we can check whether C2 performs this rewrite by running the same code with 32 bit integer values, for now fixing a scale factor of 10 (which seems like an innocuous value, no?):

``````
@CompilerControl(CompilerControl.Mode.DONT_INLINE)
@Benchmark
public int Scale_Int(IntData state) {
    int value = 0;
    int[] data = state.data1;
    for (int i = 0; i < data.length; ++i) {
        value += 10 * data[i];
    }
    return value;
}

@CompilerControl(CompilerControl.Mode.DONT_INLINE)
@Benchmark
public int FactoredScale_Int(IntData state) {
    int value = 0;
    int[] data = state.data1;
    for (int i = 0; i < data.length; ++i) {
        value += data[i];
    }
    return 10 * value;
}
``````

The results are fascinating:

| Benchmark | Mode | Threads | Samples | Score | Score Error (99.9%) | Unit | Param: size |
|---|---|---|---|---|---|---|---|
| FactoredScale_Int | thrpt | 1 | 10 | 28.339699 | 0.608075 | ops/ms | 100000 |
| FactoredScale_Int | thrpt | 1 | 10 | 2.392579 | 0.506413 | ops/ms | 1000000 |
| Scale_Int | thrpt | 1 | 10 | 33.335721 | 0.295334 | ops/ms | 100000 |
| Scale_Int | thrpt | 1 | 10 | 2.838242 | 0.448213 | ops/ms | 1000000 |

The code is doing thousands more multiplications in less time when the multiplication is not factored out of the loop. So what the devil is going on? Inspecting the assembly for the faster loop is revealing:

```
  0x000001c89e499400: vmovdqu ymm8,ymmword ptr [rbp+r13*4+10h]
0x000001c89e499407: movsxd  r10,r13d
0x000001c89e49940a: vmovdqu ymm9,ymmword ptr [rbp+r10*4+30h]
0x000001c89e499411: vmovdqu ymm13,ymmword ptr [rbp+r10*4+0f0h]
0x000001c89e49941b: vmovdqu ymm12,ymmword ptr [rbp+r10*4+50h]
0x000001c89e499422: vmovdqu ymm4,ymmword ptr [rbp+r10*4+70h]
0x000001c89e499429: vmovdqu ymm3,ymmword ptr [rbp+r10*4+90h]
0x000001c89e499433: vmovdqu ymm2,ymmword ptr [rbp+r10*4+0b0h]
0x000001c89e49943d: vmovdqu ymm0,ymmword ptr [rbp+r10*4+0d0h]
0x000001c89e499447: vpslld  ymm11,ymm8,1h
0x000001c89e49944d: vpslld  ymm1,ymm0,1h
0x000001c89e499452: vpslld  ymm0,ymm0,3h
0x000001c89e49945b: vpslld  ymm0,ymm2,3h
0x000001c89e499460: vpslld  ymm7,ymm3,3h
0x000001c89e499465: vpslld  ymm10,ymm4,3h
0x000001c89e49946a: vpslld  ymm15,ymm12,3h
0x000001c89e499470: vpslld  ymm14,ymm13,3h
0x000001c89e499476: vpslld  ymm1,ymm9,3h
0x000001c89e49947c: vpslld  ymm2,ymm2,1h
0x000001c89e499485: vpslld  ymm0,ymm3,1h
0x000001c89e49948e: vpslld  ymm0,ymm4,1h
0x000001c89e499497: vpslld  ymm0,ymm12,1h
0x000001c89e4994a1: vpslld  ymm0,ymm13,1h
0x000001c89e4994ab: vpslld  ymm0,ymm9,1h
0x000001c89e4994b5: vpslld  ymm0,ymm8,3h
0x000001c89e4994ca: vextracti128 xmm3,ymm0,1h
0x000001c89e4994d4: vmovd   xmm3,ebx
0x000001c89e4994dc: vmovd   r10d,xmm3
0x000001c89e4994eb: vextracti128 xmm3,ymm0,1h
0x000001c89e4994f5: vmovd   xmm3,r10d
0x000001c89e4994fe: vmovd   r11d,xmm3
0x000001c89e49950d: vextracti128 xmm0,ymm2,1h
0x000001c89e499517: vmovd   xmm0,r11d
0x000001c89e499520: vmovd   r10d,xmm0
0x000001c89e49952f: vextracti128 xmm3,ymm0,1h
0x000001c89e499539: vmovd   xmm3,r10d
0x000001c89e499542: vmovd   r11d,xmm3
0x000001c89e499551: vextracti128 xmm0,ymm2,1h
0x000001c89e49955b: vmovd   xmm0,r11d
0x000001c89e499564: vmovd   r10d,xmm0
0x000001c89e499573: vextracti128 xmm3,ymm0,1h
0x000001c89e49957d: vmovd   xmm3,r10d
0x000001c89e499586: vmovd   r11d,xmm3
0x000001c89e499595: vextracti128 xmm0,ymm2,1h
0x000001c89e49959f: vmovd   xmm0,r11d
0x000001c89e4995a8: vmovd   r10d,xmm0
0x000001c89e4995b7: vextracti128 xmm1,ymm2,1h
0x000001c89e4995c1: vmovd   xmm1,r10d
0x000001c89e4995ca: vmovd   ebx,xmm1
```

The loop is aggressively unrolled, pipelined, and vectorised. Moreover, the multiplication by ten results not in a multiplication but in two left shifts (see `VPSLLD`) and an addition. Note that `(x << 1) + (x << 3) == x * 10` and C2 seems to know it; this rewrite can be applied because it can be proven statically that the factor is always 10. The “optimised” loop doesn’t vectorise at all (and I have no idea why not – isn’t this a bug? Yes it is.)

```
  0x000002bbebeda3c8: add     ebx,dword ptr [rbp+r8*4+14h]
0x000002bbebeda3ef: cmp     r13d,r11d
0x000002bbebeda3f2: jl      2bbebeda3c0h
```
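The shift decomposition C2 uses can be checked directly in plain Java (an illustrative snippet, not part of the benchmark project):

```java
public class ShiftDecomposition {
    public static void main(String[] args) {
        int x = 1234;
        // 10 = 2 + 8, so x * 10 = (x << 1) + (x << 3);
        // note the parentheses: << binds more loosely than +
        int viaShifts = (x << 1) + (x << 3);
        System.out.println(viaShifts == x * 10); // true
    }
}
```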

This is a special case: data is usually dynamic and variable, so the loop cannot always be proven to be equivalent to a linear combination of bit shifts. The routine is compiled for all possible parameters, not just statically contrived cases like the one above, so you may never see this assembly in the wild. However, even with random factors, the slow looking loop is aggressively optimised in a way the hand “optimised” code is not:

``````
@CompilerControl(CompilerControl.Mode.DONT_INLINE)
@Benchmark
public int Scale_Int_Dynamic(ScaleState state) {
    int value = 0;
    int[] data = state.data;
    int factor = state.randomFactor();
    for (int i = 0; i < data.length; ++i) {
        value += factor * data[i];
    }
    return value;
}

@CompilerControl(CompilerControl.Mode.DONT_INLINE)
@Benchmark
public int FactoredScale_Int_Dynamic(ScaleState state) {
    int value = 0;
    int[] data = state.data;
    int factor = state.randomFactor();
    for (int i = 0; i < data.length; ++i) {
        value += data[i];
    }
    return factor * value;
}
``````

| Benchmark | Mode | Threads | Samples | Score | Score Error (99.9%) | Unit | Param: size |
|---|---|---|---|---|---|---|---|
| FactoredScale_Int_Dynamic | thrpt | 1 | 10 | 26.100439 | 0.340069 | ops/ms | 100000 |
| FactoredScale_Int_Dynamic | thrpt | 1 | 10 | 1.918011 | 0.297925 | ops/ms | 1000000 |
| Scale_Int_Dynamic | thrpt | 1 | 10 | 30.219809 | 2.977389 | ops/ms | 100000 |
| Scale_Int_Dynamic | thrpt | 1 | 10 | 2.314159 | 0.378442 | ops/ms | 1000000 |

Far from seeking to exploit distributivity to reduce the number of multiplication instructions, it seems to almost embrace the extraneous operations as metadata to drive optimisations. The assembly for `Scale_Int_Dynamic` confirms this (it shows vectorised multiplication, not shifts, within the loop):

```
0x000001f5ca2fa200: vmovdqu ymm0,ymmword ptr [r13+r14*4+10h]
0x000001f5ca2fa207: vpmulld ymm11,ymm0,ymm2
0x000001f5ca2fa20c: movsxd  r10,r14d
0x000001f5ca2fa20f: vmovdqu ymm0,ymmword ptr [r13+r10*4+30h]
0x000001f5ca2fa216: vmovdqu ymm1,ymmword ptr [r13+r10*4+0f0h]
0x000001f5ca2fa220: vmovdqu ymm3,ymmword ptr [r13+r10*4+50h]
0x000001f5ca2fa227: vmovdqu ymm7,ymmword ptr [r13+r10*4+70h]
0x000001f5ca2fa22e: vmovdqu ymm6,ymmword ptr [r13+r10*4+90h]
0x000001f5ca2fa238: vmovdqu ymm5,ymmword ptr [r13+r10*4+0b0h]
0x000001f5ca2fa242: vmovdqu ymm4,ymmword ptr [r13+r10*4+0d0h]
0x000001f5ca2fa24c: vpmulld ymm9,ymm0,ymm2
0x000001f5ca2fa251: vpmulld ymm4,ymm4,ymm2
0x000001f5ca2fa256: vpmulld ymm5,ymm5,ymm2
0x000001f5ca2fa25b: vpmulld ymm6,ymm6,ymm2
0x000001f5ca2fa260: vpmulld ymm8,ymm7,ymm2
0x000001f5ca2fa265: vpmulld ymm10,ymm3,ymm2
0x000001f5ca2fa26a: vpmulld ymm3,ymm1,ymm2
0x000001f5ca2fa279: vextracti128 xmm0,ymm1,1h
0x000001f5ca2fa283: vmovd   xmm0,ebx
0x000001f5ca2fa28b: vmovd   r10d,xmm0
0x000001f5ca2fa29a: vextracti128 xmm0,ymm1,1h
0x000001f5ca2fa2a4: vmovd   xmm0,r10d
0x000001f5ca2fa2bc: vextracti128 xmm1,ymm0,1h
0x000001f5ca2fa2c6: vmovd   xmm1,r11d
0x000001f5ca2fa2cf: vmovd   r10d,xmm1
0x000001f5ca2fa2de: vextracti128 xmm0,ymm1,1h
0x000001f5ca2fa2e8: vmovd   xmm0,r10d
0x000001f5ca2fa2f1: vmovd   r11d,xmm0
0x000001f5ca2fa300: vextracti128 xmm1,ymm0,1h
0x000001f5ca2fa30a: vmovd   xmm1,r11d
0x000001f5ca2fa313: vmovd   r10d,xmm1
0x000001f5ca2fa322: vextracti128 xmm0,ymm1,1h
0x000001f5ca2fa32c: vmovd   xmm0,r10d
0x000001f5ca2fa335: vmovd   r11d,xmm0
0x000001f5ca2fa344: vextracti128 xmm1,ymm0,1h
0x000001f5ca2fa34e: vmovd   xmm1,r11d
0x000001f5ca2fa357: vmovd   r10d,xmm1
0x000001f5ca2fa366: vextracti128 xmm7,ymm1,1h
0x000001f5ca2fa370: vmovd   xmm7,r10d
0x000001f5ca2fa379: vmovd   ebx,xmm7
```

There are two lessons to be learnt here. The first is that what you see is not what you get. The second is about the correctness of asymptotic analysis. If hierarchical cache renders asymptotic analysis bullshit (linear time but cache friendly algorithms can, and do, outperform logarithmic algorithms with cache misses), optimising compilers render the field practically irrelevant.

# Zeroing Negative Values in Arrays Efficiently

Replacing negatives with zeroes in large arrays of values is a primitive function of several complex financial risk measures, including potential future exposure (PFE) and the liquidity coverage ratio (LCR). While this is not an interesting operation by any stretch of the imagination, it is useful and there is significant benefit in making it fast. It can be computed very efficiently using the instruction `VMAXPD`. On Intel Xeon processors, this instruction has a reciprocal throughput of half a cycle and a latency (how long before another instruction can use its result) of four cycles. There is currently no way to trick Java into using this instruction for this simple operation, though there is a placeholder implementation on the current `DoubleVector` prototype in Project Panama which may do so.

### C++ Intel Intrinsics

It’s possible to target instructions from different processor vendors, in my case Intel, by using intrinsic functions which expose instructions as high level functions. The code looks incredibly ugly but it works. Here is a C++ function for 256 bit ymm registers:

``````
void zero_negatives(const double* source, double* target, const size_t length) {
    for (size_t i = 0; i + 3 < length; i += 4) {
        __m256d vector = _mm256_load_pd(source + i);
        __m256d zeroed = _mm256_max_pd(vector, _mm256_setzero_pd());
        _mm256_storeu_pd(target + i, zeroed);
    }
}
``````

The function loads doubles into 256 bit vectors, within each vector replaces the negative values with zero, and writes them back into an array. It generates the following assembly code (which, incidentally, is less of a shit show to access than in Java):

``````
void zero_negatives(const double* source, double* target, const size_t length) {
00007FF746EE5110  mov         qword ptr [rsp+18h],r8
00007FF746EE5115  mov         qword ptr [rsp+10h],rdx
00007FF746EE511A  mov         qword ptr [rsp+8],rcx
00007FF746EE511F  push        r13
00007FF746EE5121  push        rbp
00007FF746EE5122  push        rdi
00007FF746EE5123  sub         rsp,250h
00007FF746EE512A  mov         r13,rsp
00007FF746EE512D  lea         rbp,[rsp+20h]
00007FF746EE5132  and         rbp,0FFFFFFFFFFFFFFE0h
00007FF746EE5136  mov         rdi,rsp
00007FF746EE5139  mov         ecx,94h
00007FF746EE513E  mov         eax,0CCCCCCCCh
00007FF746EE5143  rep stos    dword ptr [rdi]
00007FF746EE5145  mov         rcx,qword ptr [rsp+278h]
for (size_t i = 0; i + 3 < length; i += 4) {
00007FF746EE514D  mov         qword ptr [rbp+8],0
00007FF746EE5155  jmp         zero_negatives+53h (07FF746EE5163h)
00007FF746EE5157  mov         rax,qword ptr [rbp+8]
00007FF746EE515F  mov         qword ptr [rbp+8],rax
00007FF746EE5163  mov         rax,qword ptr [rbp+8]
00007FF746EE516B  cmp         rax,qword ptr [length]
00007FF746EE5172  jae         zero_negatives+0DDh (07FF746EE51EDh)
__m256d vector = _mm256_load_pd(source + i);
00007FF746EE5174  mov         rax,qword ptr [source]
00007FF746EE517B  mov         rcx,qword ptr [rbp+8]
00007FF746EE517F  lea         rax,[rax+rcx*8]
00007FF746EE5183  vmovupd     ymm0,ymmword ptr [rax]
00007FF746EE5187  vmovupd     ymmword ptr [rbp+180h],ymm0
00007FF746EE518F  vmovupd     ymm0,ymmword ptr [rbp+180h]
00007FF746EE5197  vmovupd     ymmword ptr [rbp+40h],ymm0
__m256d zeroed = _mm256_max_pd(vector, _mm256_setzero_pd());
00007FF746EE519C  vxorpd      xmm0,xmm0,xmm0
00007FF746EE51A0  vmovupd     ymmword ptr [rbp+200h],ymm0
00007FF746EE51A8  vmovupd     ymm0,ymmword ptr [rbp+40h]
00007FF746EE51B5  vmovupd     ymmword ptr [rbp+1C0h],ymm0
00007FF746EE51BD  vmovupd     ymm0,ymmword ptr [rbp+1C0h]
00007FF746EE51C5  vmovupd     ymmword ptr [rbp+80h],ymm0
_mm256_storeu_pd(target + i, zeroed);
00007FF746EE51CD  mov         rax,qword ptr [target]
00007FF746EE51D4  mov         rcx,qword ptr [rbp+8]
00007FF746EE51D8  lea         rax,[rax+rcx*8]
00007FF746EE51DC  vmovupd     ymm0,ymmword ptr [rbp+80h]
00007FF746EE51E4  vmovupd     ymmword ptr [rax],ymm0
}
00007FF746EE51E8  jmp         zero_negatives+47h (07FF746EE5157h)
}
00007FF746EE51ED  lea         rsp,[r13+250h]
00007FF746EE51F4  pop         rdi
00007FF746EE51F5  pop         rbp
00007FF746EE51F6  pop         r13
00007FF746EE51F8  ret
``````

This code is noticeably fast. I measured the throughput averaged over 1000 iterations, with an array of 100 million doubles (800MB) uniformly distributed between +/- 1E7, to quantify the throughput in GB/s and iterations/s. This code does between 4.5 and 5 iterations per second, which translates to processing approximately 4GB/s. This seems high, and since I am unaware of best practices in C++, if the measurement is flawed, I would gratefully be educated in the comments.

``````
void benchmark() {
    const size_t length = 1E8;
    double* values = new double[length];
    fill_array(values, length);
    double* zeroed = new double[length];
    auto start = std::chrono::high_resolution_clock::now();
    int iterations = 1000;
    for (int i = 0; i < iterations; ++i) {
        zero_negatives(values, zeroed, length);
    }
    auto end = std::chrono::high_resolution_clock::now();
    auto nanos = std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
    double thrpt_s = (iterations * 1E9) / nanos;
    double thrpt_gbps = (thrpt_s * sizeof(double) * length) / 1E9;
    std::cout << thrpt_s << "/s" << std::endl;
    std::cout << thrpt_gbps << "GB/s" << std::endl;
    delete[] values;
    delete[] zeroed;
}
``````

While I am sure there are various ways an expert could tweak this for performance, this code can’t get much faster unless 512 bit zmm registers are available, in which case sticking to 256 bit registers would be wasteful. Though the code looks virtually the same for AVX512 (just replace “256” with “512”), portability and efficiency are at odds. Handling the mess of detecting the best instruction set for the deployed architecture is the main reason for using Java in performance sensitive (but not critical) applications. But this is not the code the JVM generates.

### Java Auto-Vectorisation (Play Your Cards Right)

There is currently no abstraction modelling vectorisation in Java. Your only access is via intrinsics the compiler engineers have implemented, or via auto-vectorisation, which will try – and sometimes succeed admirably – to translate your code to a good vector implementation. There is currently a prototype for explicit vectorisation in Project Panama. There are a few ways to skin this cat, and it’s worth looking at the code each approach generates and the throughput it can achieve.

There is a choice between copying the array and zeroing out the negatives, and allocating a new array and only writing the non-negative values. There is another choice between an if statement and branchless code using `Math.max`. This results in the four implementations below, which I measure on comparable data to the C++ benchmark (100 million doubles, normally distributed with mean zero). To be fair to the Java code, as in the C++ benchmarks, the cost of allocation is isolated by writing into an array pre-allocated once per benchmark. This penalises the approaches where the array is copied first and then zeroed wherever the value is negative. The code is online at github.

``````
@Benchmark
@CompilerControl(CompilerControl.Mode.DONT_INLINE)
public double[] BranchyCopyAndMask(ArrayWithNegatives state) {
    double[] data = state.data;
    double[] result = state.target;
    System.arraycopy(data, 0, result, 0, data.length);
    for (int i = 0; i < result.length; ++i) {
        if (result[i] < 0D) {
            result[i] = 0D;
        }
    }
    return result;
}

@Benchmark
@CompilerControl(CompilerControl.Mode.DONT_INLINE)
public double[] BranchyNewArray(ArrayWithNegatives state) {
    double[] data = state.data;
    double[] result = state.target;
    for (int i = 0; i < result.length; ++i) {
        result[i] = data[i] < 0D ? 0D : data[i];
    }
    return result;
}

@Benchmark
@CompilerControl(CompilerControl.Mode.DONT_INLINE)
public double[] NewArray(ArrayWithNegatives state) {
    double[] data = state.data;
    double[] result = state.target;
    for (int i = 0; i < result.length; ++i) {
        result[i] = Math.max(data[i], 0D);
    }
    return result;
}

@Benchmark
@CompilerControl(CompilerControl.Mode.DONT_INLINE)
public double[] CopyAndMask(ArrayWithNegatives state) {
    double[] data = state.data;
    double[] result = state.target;
    System.arraycopy(data, 0, result, 0, data.length);
    for (int i = 0; i < result.length; ++i) {
        result[i] = Math.max(result[i], 0D);
    }
    return result;
}
``````

None of these implementations comes close to the native code above. The best implementation performs 1.8 iterations per second which equates to processing approximately 1.4GB/s, vastly inferior to the 4GB/s achieved with Intel intrinsics. The results are below:

| Benchmark | Mode | Threads | Samples | Score | Score Error (99.9%) | Unit |
|---|---|---|---|---|---|---|
| BranchyCopyAndMask | thrpt | 1 | 10 | 1.314845 | 0.061662 | ops/s |
| BranchyNewArray | thrpt | 1 | 10 | 1.802673 | 0.061835 | ops/s |
| CopyAndMask | thrpt | 1 | 10 | 1.146630 | 0.018903 | ops/s |
| NewArray | thrpt | 1 | 10 | 1.357020 | 0.116481 | ops/s |

As an aside, there is a very interesting observation to make, worthy of its own post: if the array consists only of positive values, the “branchy” implementations run very well, at speeds comparable to `zero_negatives` (which ran with 50% negatives). The ratio of branch hits to misses is an orthogonal explanatory variable, and the input data, though I often don’t think about it enough, is very important.
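A crude way to see the branch predictor at work (not JMH, so take the numbers with a pinch of salt; the class and method names here are mine) is to time the branchy kernel on mixed input against all-positive input:

```java
import java.util.Random;

public class BranchEffect {
    // same branchy kernel as BranchyNewArray above
    static double[] zeroNegatives(double[] data, double[] result) {
        for (int i = 0; i < result.length; ++i) {
            result[i] = data[i] < 0D ? 0D : data[i];
        }
        return result;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        Random random = new Random(0);
        double[] mixed = new double[n];    // ~50% negative: unpredictable branch
        double[] positive = new double[n]; // all positive: comparison always goes the same way
        for (int i = 0; i < n; ++i) {
            mixed[i] = random.nextGaussian();
            positive[i] = Math.abs(mixed[i]);
        }
        double[] target = new double[n];
        for (double[] input : new double[][] {mixed, positive}) {
            long start = System.nanoTime();
            for (int i = 0; i < 200; ++i) {
                zeroNegatives(input, target);
            }
            System.out.println((System.nanoTime() - start) / 1_000_000 + " ms");
        }
    }
}
```

On my understanding of branch prediction the second timing should typically be noticeably lower, but the kernel's output is identical either way.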

I only looked at the assembly emitted for the fastest version (`BranchyNewArray`) and it doesn’t look anything like `zero_negatives`. It appears to use some vector instructions, but – as pointed out by Daniel Lemire in the comments – the loop has probably not been vectorised at all and is probably just using SSE2-style scalar operations (indeed, only quad words are loaded into 128 bit registers):

```
  0x000002ae309c3d5c: vmovsd  xmm0,qword ptr [rdx+rax*8+10h]
0x000002ae309c3d62: vxorpd  xmm1,xmm1,xmm1
0x000002ae309c3d66: vucomisd xmm0,xmm1
```

I don’t really understand, and haven’t thought about, the intent of the emitted code, but it makes extensive use of the instruction `VUCOMISD` for comparisons with zero, which has a lower latency but lower throughput than `VMAXPD`. It would certainly be interesting to see how Project Panama does this. Perhaps this should just be made available as a fail-safe intrinsic like `Arrays.mismatch`?

# Project Panama and Population Count

Project Panama introduces a new interface `Vector`, where the specialisation for `long` looks like a promising substrate for an explicitly vectorised bit set. Bit sets are useful for representing composable predicates over data sets. One obvious omission on this interface, required for an adequate implementation of a bit set, is a bit count, otherwise known as population count. Perhaps this is because the vector API aims to generalise across primitive types, whereas population count is only meaningful for integral types. Even so, if `Vector` can be interpreted as a wider integer, then it would be consistent to add this to the interface. If the method existed, what possible implementation could it have?

On x86, the population count of a 64 bit register is computed by the `POPCNT` instruction, which is exposed in Java as an intrinsic in `Long.bitCount`. There is no SIMD equivalent in any extension set prior to `VPOPCNTD`/`VPOPCNTQ` in AVX-512. Very few processors (at the time of writing) support AVX-512, and only the Knights Mill processor supports this particular extension; there are not even Intel intrinsics exposing these instructions yet.
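For reference, the scalar baseline that any vectorised bit count has to beat is just a loop over `Long.bitCount` (illustrative code of my own, names included):

```java
public class BitSetCardinality {
    // scalar population count over the words of a bit set;
    // Long.bitCount is intrinsified to POPCNT on x86
    static int cardinality(long[] words) {
        int count = 0;
        for (long word : words) {
            count += Long.bitCount(word);
        }
        return count;
    }

    public static void main(String[] args) {
        long[] words = {0b1011L, -1L, 0L};
        System.out.println(cardinality(words)); // 3 + 64 + 0 = 67
    }
}
```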

The algorithm for vectorised population count adopted by the clang compiler is outlined in this paper, which builds on an algorithm designed for 128 bit registers and SSE instructions, presented by Wojciech Muła on his blog in 2008. This approach is shown in the paper to outperform scalar code using `POPCNT` and 64 bit registers, almost doubling throughput when 256 bit ymm registers are available. The core algorithm (taken from figure 10 in the paper), which returns a vector of four 64 bit counts that can then be added together in a variety of ways to form a population count, proceeds as follows:

``````
// The Muła Function
__m256i count(__m256i v) {
    __m256i lookup = _mm256_setr_epi8(
        0, 1, 1, 2, 1, 2, 2, 3,
        1, 2, 2, 3, 2, 3, 3, 4,
        0, 1, 1, 2, 1, 2, 2, 3,
        1, 2, 2, 3, 2, 3, 3, 4);
    __m256i low_mask = _mm256_set1_epi8(0x0f);
    __m256i lo = _mm256_and_si256(v, low_mask);
    __m256i hi = _mm256_and_si256(_mm256_srli_epi32(v, 4), low_mask);
    __m256i popcnt1 = _mm256_shuffle_epi8(lookup, lo);
    __m256i popcnt2 = _mm256_shuffle_epi8(lookup, hi);
    __m256i total = _mm256_add_epi8(popcnt1, popcnt2);
    return _mm256_sad_epu8(total, _mm256_setzero_si256());
}
``````

If you are struggling to read the code above, you are not alone. I haven’t programmed in C++ for several years – it’s amazing how nice the names in civilised languages like Java and python (and even bash) are compared to the black magic above. There is some logic to the naming though: read page 5 of the manual. You can also read an accessible description of some of the functions used in this blog post.

The basic idea starts from storing the population counts for each possible byte value in a lookup table, which can be consulted with bit level parallelism and the results ultimately added up. For efficiency’s sake, instead of bytes, 4 bit nibbles are used, which is why you only see the numbers 0-4 in the lookup table. Various, occasionally obscure, optimisations are applied, resulting in the magic numbers at the top of the function. A large chunk of the paper is devoted to their derivation: if you are interested, go and read the paper – I could not understand the intent of the code at all until reading the paper twice, especially section 2.
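Stripped of the SIMD machinery, the nibble lookup amounts to the following scalar Java (my own sketch, for intuition only – the point of the vectorised version is to perform many of these lookups at once with a shuffle):

```java
public class NibblePopcount {
    // population count of each possible 4 bit nibble: the same
    // sixteen values that appear (twice) in the vector lookup table
    private static final int[] NIBBLE_COUNTS = {
        0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4
    };

    static int popcount(long word) {
        int count = 0;
        // look up each of the sixteen nibbles of the word
        for (int shift = 0; shift < 64; shift += 4) {
            count += NIBBLE_COUNTS[(int) ((word >>> shift) & 0xFL)];
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(popcount(0xFFL));      // 8
        System.out.println(popcount(-1L));        // 64
        System.out.println(popcount(0b1010101L)); // 4
    }
}
```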

The points I find interesting are:

- This algorithm exists
- It uses instructions all modern commodity processors have
- It is fast
- It is in use

Could this be implemented in the JVM as an intrinsic and exposed on `Vector`?

# New Methods in Java 9: Math.fma and Arrays.mismatch

There are two noteworthy new methods in Java 9: `Arrays.mismatch` and `Math.fma`.

#### Arrays.mismatch

This method takes two primitive arrays, and returns the index of the first differing value, which effectively computes the length of the longest common prefix of the two arrays. This is really quite useful, mostly for text processing but also in bioinformatics (protein sequencing and so on, much more interesting than the sort of thing I work on). Having worked extensively with Apache HBase (where a vast majority of the API involves manipulating byte arrays) I can think of lots of less interesting use cases for this method.
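A quick illustration of the contract (requires Java 9 or later):

```java
import java.util.Arrays;

public class MismatchDemo {
    public static void main(String[] args) {
        byte[] a = "common prefixA".getBytes();
        byte[] b = "common prefixB".getBytes();
        // index of the first differing byte = length of the common prefix
        System.out.println(Arrays.mismatch(a, b)); // 13
        // -1 signals that no mismatch was found
        System.out.println(Arrays.mismatch(a, a)); // -1
    }
}
```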

Looking carefully, you can see that the method calls into the internal `ArraysSupport` utility class, which will try to perform a vectorised mismatch (an intrinsic candidate). Since this will use AVX instructions, this is very fast; much faster than a handwritten loop.

Let’s measure the boost versus a handwritten loop, testing across a range of common prefixes and array lengths of `byte[]`.

``````
@Benchmark
@CompilerControl(CompilerControl.Mode.DONT_INLINE)
public void Mismatch_Intrinsic(BytePrefixData data, Blackhole bh) {
    bh.consume(Arrays.mismatch(data.data1, data.data2));
}

@Benchmark
@CompilerControl(CompilerControl.Mode.DONT_INLINE)
public void Mismatch_Handwritten(BytePrefixData data, Blackhole bh) {
    byte[] data1 = data.data1;
    byte[] data2 = data.data2;
    int length = Math.min(data1.length, data2.length);
    int mismatch = -1;
    for (int i = 0; i < length; ++i) {
        if (data1[i] != data2[i]) {
            mismatch = i;
            break;
        }
    }
    bh.consume(mismatch);
}
``````

The results speak for themselves. Irritatingly, there is some duplication in output because I haven’t figured out how to make JMH use a subset of the Cartesian product of its parameters.

| Benchmark | (prefix) | (size) | Mode | Cnt | Score | Error | Units |
|---|---|---|---|---|---|---|---|
| Mismatch_Handwritten | 10 | 100 | thrpt | 10 | 22.360 | ± 0.938 | ops/us |
| Mismatch_Handwritten | 10 | 1000 | thrpt | 10 | 2.459 | ± 0.256 | ops/us |
| Mismatch_Handwritten | 10 | 10000 | thrpt | 10 | 0.255 | ± 0.009 | ops/us |
| Mismatch_Handwritten | 100 | 100 | thrpt | 10 | 22.763 | ± 0.869 | ops/us |
| Mismatch_Handwritten | 100 | 1000 | thrpt | 10 | 2.690 | ± 0.044 | ops/us |
| Mismatch_Handwritten | 100 | 10000 | thrpt | 10 | 0.273 | ± 0.008 | ops/us |
| Mismatch_Handwritten | 1000 | 100 | thrpt | 10 | 24.970 | ± 0.713 | ops/us |
| Mismatch_Handwritten | 1000 | 1000 | thrpt | 10 | 2.791 | ± 0.066 | ops/us |
| Mismatch_Handwritten | 1000 | 10000 | thrpt | 10 | 0.281 | ± 0.007 | ops/us |
| Mismatch_Intrinsic | 10 | 100 | thrpt | 10 | 89.169 | ± 2.759 | ops/us |
| Mismatch_Intrinsic | 10 | 1000 | thrpt | 10 | 26.995 | ± 0.501 | ops/us |
| Mismatch_Intrinsic | 10 | 10000 | thrpt | 10 | 3.553 | ± 0.065 | ops/us |
| Mismatch_Intrinsic | 100 | 100 | thrpt | 10 | 83.037 | ± 5.590 | ops/us |
| Mismatch_Intrinsic | 100 | 1000 | thrpt | 10 | 26.249 | ± 0.714 | ops/us |
| Mismatch_Intrinsic | 100 | 10000 | thrpt | 10 | 3.523 | ± 0.122 | ops/us |
| Mismatch_Intrinsic | 1000 | 100 | thrpt | 10 | 87.921 | ± 6.566 | ops/us |
| Mismatch_Intrinsic | 1000 | 1000 | thrpt | 10 | 25.812 | ± 0.442 | ops/us |
| Mismatch_Intrinsic | 1000 | 10000 | thrpt | 10 | 4.177 | ± 0.059 | ops/us |

Why is there such a big difference? Look at how the score decreases as a function of array length, even when the common prefix, and therefore the number of comparisons required, is small: clearly the performance of this algorithm depends on the efficiency of memory access. `Arrays.mismatch` optimises this, XORing 256 bits at a time.

``````
  3.02%    1.83%        0x00007fea85ba50a0: push   %rbp
0.14%    0.15%        0x00007fea85ba50a1: mov    %rsp,%rbp
2.84%    4.70%        0x00007fea85ba50a4: shl    %cl,%rdx
0.41%    0.38%        0x00007fea85ba50a7: xor    %rax,%rax
2.72%    4.49%        0x00007fea85ba50aa: cmp    $0x8,%rdx
0x00007fea85ba50ae: je     Stub::vectorizedMismatch+148 0x00007fea85ba5134
0.19%    0.17%        0x00007fea85ba50b4: jl     Stub::vectorizedMismatch+182 0x00007fea85ba5156
0.18%    0.16%        0x00007fea85ba50ba: cmp    $0x10,%rdx
0.00%           ╭     0x00007fea85ba50be: je     Stub::vectorizedMismatch+103 0x00007fea85ba5107
0.12%    0.10%  │     0x00007fea85ba50c4: jl     Stub::vectorizedMismatch+148 0x00007fea85ba5134
2.80%    1.69%  │     0x00007fea85ba50ca: cmp    $0x20,%rdx
│╭    0x00007fea85ba50ce: jl     Stub::vectorizedMismatch+97 0x00007fea85ba5101
0.09%    0.08%  ││    0x00007fea85ba50d0: sub    $0x20,%rdx
0.18%    0.18%  ││↗   0x00007fea85ba50d4: vmovdqu (%rdi,%rax,1),%ymm0
0.15%    0.15%  │││   0x00007fea85ba50d9: vmovdqu (%rsi,%rax,1),%ymm1
8.63%    9.44%  │││   0x00007fea85ba50de: vpxor  %ymm1,%ymm0,%ymm2
2.63%    2.96%  │││   0x00007fea85ba50e2: vptest %ymm2,%ymm2
3.49%    3.84%  │││   0x00007fea85ba50e7: jne    Stub::vectorizedMismatch+291 0x00007fea85ba51c3
12.19%   14.10%  │││   0x00007fea85ba50ed: add    $0x20,%rax
0.30%    0.32%  │││   0x00007fea85ba50f1: sub    $0x20,%rdx
││╰   0x00007fea85ba50f5: jge    Stub::vectorizedMismatch+52 0x00007fea85ba50d4
0.00%    0.00%  ││    0x00007fea85ba50f7: add    $0x20,%rdx
││    0x00007fea85ba50fb: je     Stub::vectorizedMismatch+363 0x00007fea85ba520b
0.00%    0.00%  │↘    0x00007fea85ba5101: cmp    $0x10,%rdx
0.00%           │  ╭  0x00007fea85ba5105: jl     Stub::vectorizedMismatch+142 0x00007fea85ba512e
↘  │  0x00007fea85ba5107: vmovdqu (%rdi,%rax,1),%xmm0
│  0x00007fea85ba510c: vmovdqu (%rsi,%rax,1),%xmm1
│  0x00007fea85ba5111: vpxor  %xmm1,%xmm0,%xmm2
│  0x00007fea85ba5115: vptest %xmm2,%xmm2
│  0x00007fea85ba511a: jne    Stub::vectorizedMismatch+319 0x00007fea85ba51df
│  0x00007fea85ba5124: sub    $0x10,%rdx
│  0x00007fea85ba5128: je     Stub::vectorizedMismatch+363 0x00007fea85ba520b
2.91%    2.96%     ↘  0x00007fea85ba512e: cmp    $0x8,%rdx
0x00007fea85ba5132: jl     Stub::vectorizedMismatch+182 0x00007fea85ba5156
``````
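The XOR-and-test idea scales down to plain Java with 64 bit words. The following is a hypothetical scalar sketch of the same technique (my own code, not the JDK's implementation): XOR eight bytes at a time and, on the first non-zero result, use the lowest set bit to locate the differing byte.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WordWiseMismatch {
    // compare 8 bytes at a time by XORing longs; with little-endian
    // reads, byte j of the array occupies bits 8j..8j+7 of the long,
    // so the lowest set bit of the XOR marks the first differing byte
    static int mismatch(byte[] a, byte[] b) {
        int length = Math.min(a.length, b.length);
        ByteBuffer bufA = ByteBuffer.wrap(a).order(ByteOrder.LITTLE_ENDIAN);
        ByteBuffer bufB = ByteBuffer.wrap(b).order(ByteOrder.LITTLE_ENDIAN);
        int i = 0;
        for (; i + 7 < length; i += 8) {
            long diff = bufA.getLong(i) ^ bufB.getLong(i);
            if (diff != 0) {
                return i + Long.numberOfTrailingZeros(diff) / 8;
            }
        }
        // scalar tail for the last few bytes
        for (; i < length; ++i) {
            if (a[i] != b[i]) {
                return i;
            }
        }
        // same contract as Arrays.mismatch: -1 when equal, otherwise
        // the shorter length when one array is a prefix of the other
        return a.length == b.length ? -1 : length;
    }

    public static void main(String[] args) {
        byte[] x = {1, 2, 3, 4, 5, 6, 7, 8, 9};
        byte[] y = {1, 2, 3, 4, 5, 6, 7, 8, 10};
        System.out.println(mismatch(x, y)); // 8
    }
}
```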

The code for this benchmark is at github.

#### Math.fma

In comparison to users of some languages, Java programmers are lackadaisical about floating point errors. It’s a good job that historically Java hasn’t been considered suitable for the implementation of numerical algorithms. But all of a sudden there is a revolution of data science on the JVM, albeit mostly driven by the Scala community, with JVM implementations of structures like recurrent neural networks abounding. It matters less for machine learning than for root finding, but how accurate can these implementations be without JVM level support for minimising the propagation of floating point errors? With `Math.fma` this is improving, by allowing two common operations to be performed before rounding.

`Math.fma` fuses a multiplication and an addition into a single floating point operation to compute expressions like $ab + c$. This has two key benefits:

1. There’s only one operation, and only one rounding error
2. This is supported in hardware by the `VFMADD*` instructions (the FMA extension, which ships alongside AVX2 on recent x86 processors)
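The single rounding is easy to observe with values whose exact product is not representable (a standalone snippet, not from the benchmark below):

```java
public class FmaRounding {
    public static void main(String[] args) {
        // 0.1 * 10.0 is not exactly 1.0, but rounding the product
        // to a double loses the residue, so subtracting 1.0 gives 0
        System.out.println(0.1 * 10.0 - 1.0); // 0.0
        // fma rounds only once, preserving the residue of the
        // exact product: 2^-54, roughly 5.55e-17
        System.out.println(Math.fma(0.1, 10.0, -1.0));
    }
}
```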

#### Newton’s Method

To investigate any superior suppression of floating point errors, I use a toy implementation of Newton’s method to compute the root of a quadratic equation, which any teenager could calculate analytically (the error is easy to quantify).

I compare these two implementations for $4x^2 - 12x + 9$ (there is a repeated root at 1.5) to get an idea for the error (defined by $|1.5 - x_n|$) after a large number of iterations.

I implemented this using FMA:

``````
public class NewtonsMethodFMA {

    private final double[] coefficients;

    public NewtonsMethodFMA(double[] coefficients) {
        this.coefficients = coefficients;
    }

    public double evaluateF(double x) {
        double f = 0D;
        int power = coefficients.length - 1;
        for (int i = 0; i < coefficients.length; ++i) {
            f = Math.fma(coefficients[i], Math.pow(x, power--), f);
        }
        return f;
    }

    public double evaluateDF(double x) {
        double df = 0D;
        int power = coefficients.length - 2;
        for (int i = 0; i < coefficients.length - 1; ++i) {
            df = Math.fma((power + 1) * coefficients[i], Math.pow(x, power--), df);
        }
        return df;
    }

    public double solve(double initialEstimate, int maxIterations) {
        double result = initialEstimate;
        for (int i = 0; i < maxIterations; ++i) {
            result -= evaluateF(result) / evaluateDF(result);
        }
        return result;
    }
}
``````

And an implementation with normal operations:

``````

public class NewtonsMethod {

    private final double[] coefficients;

    public NewtonsMethod(double[] coefficients) {
        this.coefficients = coefficients;
    }

    public double evaluateF(double x) {
        double f = 0D;
        int power = coefficients.length - 1;
        for (int i = 0; i < coefficients.length; ++i) {
            f += coefficients[i] * Math.pow(x, power--);
        }
        return f;
    }

    public double evaluateDF(double x) {
        double df = 0D;
        int power = coefficients.length - 2;
        for (int i = 0; i < coefficients.length - 1; ++i) {
            df += (power + 1) * coefficients[i] * Math.pow(x, power--);
        }
        return df;
    }

    public double solve(double initialEstimate, int maxIterations) {
        double result = initialEstimate;
        for (int i = 0; i < maxIterations; ++i) {
            result -= evaluateF(result) / evaluateDF(result);
        }
        return result;
    }
}
``````
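
Both implementations above call `Math.pow` for every term. As an aside, a sketch of an alternative (my own, not from the benchmark project) is Horner’s rule, which needs exactly one fused multiply-add per coefficient and no `Math.pow` at all:

```java
public class HornerSketch {
    // Evaluates c[0]*x^(n-1) + c[1]*x^(n-2) + ... + c[n-1],
    // with a single fused multiply-add (one rounding) per coefficient.
    public static double horner(double[] coefficients, double x) {
        double result = 0D;
        for (double c : coefficients) {
            result = Math.fma(result, x, c);
        }
        return result;
    }

    public static void main(String[] args) {
        // 4x^2 - 12x + 9 evaluated at the repeated root x = 1.5
        System.out.println(horner(new double[] {4, -12, 9}, 1.5)); // prints "0.0"
    }
}
```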

When I run this code for 1000 iterations, the FMA version results in 1.5000000083575202, whereas the vanilla version results in 1.500000017233207. It’s completely unscientific, but it seems plausible and confirms my prejudice, so… In fact, it’s not that simple: over a range of initial values, there is only a very small difference in FMA’s favour. There’s not even a performance improvement – clearly this method wasn’t added so you can start implementing numerical root finding algorithms – the key takeaway is that the results are slightly different because a different rounding strategy has been used.

```
Benchmark       (maxIterations)   Mode  Cnt   Score   Error  Units
NM_FMA                      100  thrpt   10  93.805 ± 5.174  ops/ms
NM_FMA                     1000  thrpt   10   9.420 ± 1.169  ops/ms
NM_FMA                    10000  thrpt   10   0.962 ± 0.044  ops/ms
NM_HandWritten              100  thrpt   10  93.457 ± 5.048  ops/ms
NM_HandWritten             1000  thrpt   10   9.274 ± 0.483  ops/ms
NM_HandWritten            10000  thrpt   10   0.928 ± 0.041  ops/ms
```

# Targeting SIMD in Java

Vectorised instruction execution can be targeted directly in C++, but in Java there are extra layers of abstraction to go through. Folklore aside, when does vectorised or SIMD execution actually happen in Java? Skeptical of old wives’ tales, I investigate when SIMD instructions are actually used in Java 9, and how to prevent their use by programming badly.

### Building a Benchmark Harness

I use JMH to write benchmarks. For the uninitiated, it is the standard Java micro-benchmarking framework, which handles various pitfalls, the most salient being that it ensures your code actually executes and that measurement is performed against a monotonic measure of time. Benchmarks produced without JMH are unlikely to be correct and should arouse suspicion, especially if used during a sales process.

Averages always lie, so minimisation of the 99.9th percentile execution time is a better objective to have in mind. As a general performance indicator I measure throughput in operations per time unit, which is useful to correlate with JVM compilation events.

To tune performance, before even worrying about achieving SIMD execution, we need to be aware of recompilation and failures to inline. These diagnostics are enabled by the arguments below, and my simple harness has a specific mode to support them:

```
-XX:+PrintCompilation
-XX:+UnlockDiagnosticVMOptions
-XX:+PrintInlining
```

The proof that code is being vectorised is the emission of AVX instructions. To see whether this has happened (and if it has, it will be correlated with astonishing performance statistics), we need to inspect the assembly code, so I run the benchmark in a mode that prints the generated assembly via the arguments:

```
-XX:+UnlockDiagnosticVMOptions
-XX:+PrintAssembly
-XX:PrintAssemblyOptions=hsdis-print-bytes
-XX:CompileCommand=print
```

However, SIMD execution only happens when SuperWord parallelism is enabled (and it is by default), so we won’t even need to look at the assembly unless we see a clear difference when running the benchmark without the UseSuperWord option:

```
-XX:-UseSuperWord
```

### What Gains Can Be Expected?

Is vectorised code execution a panacea? Assuming we can force the JVM to use SIMD, how much performance can be expected? It turns out that `java.util.Arrays.fill` can be vectorised, so we can get a taste of the difference it makes. We can observe its impact by benchmarking throughput with and without SuperWord instruction level parallelism.

``````
private void benchmark(String... jvmArgs) throws RunnerException {
    Options opt = new OptionsBuilder()
            .include(this.getClass().getName() + ".*")
            .mode(Mode.Throughput)
            .timeUnit(TimeUnit.MILLISECONDS)
            .warmupIterations(5)
            .measurementIterations(5)
            .forks(1)
            .shouldFailOnError(true)
            .shouldDoGC(true)
            .operationsPerInvocation(1_000_000)
            .jvmArgs(jvmArgs)
            .build();
    new Runner(opt).run();
}

@Benchmark
public void fillArray(Blackhole bh) {
    double[] array = new double[1 << 10];
    for (int i = 0; i < 1_000_000; ++i) {
        Arrays.fill(array, i);
        bh.consume(array);
    }
}
``````

```
# VM version: JDK 9-ea, VM 9-ea+166
...
# VM options: -XX:-UseSuperWord
...

Benchmark                 Mode  Cnt    Score     Error   Units
TestBenchmark.fillArray  thrpt    5  966.947 ± 596.705  ops/ms

# VM version: JDK 9-ea, VM 9-ea+166
...
# VM options: -XX:+UseSuperWord
...

Benchmark                 Mode  Cnt     Score      Error   Units
TestBenchmark.fillArray  thrpt    5  4230.802 ± 5311.001  ops/ms
```

The difference in throughput is palpable and astonishing.

### Intrinsics

Various intrinsics, such as `Arrays.fill` above, compile down to vectorised instructions. When SuperWord parallelism is enabled, they will usually run faster. The JIT compiler will also target simple hand-written code, but intrinsics are a safer bet.

Addition of arrays of primitives can be vectorised automatically.

``````
public double[] add(double[] left, double[] right) {
    double[] result = new double[left.length];
    for (int i = 0; i < left.length && i < right.length; ++i) {
        result[i] = left[i] + right[i];
    }
    return result;
}
``````

Benchmarking the `add` method, which has a small code size of 43B, we find that it gets inlined. Throughput is doubled and we see evidence of dynamic optimisation.

```# Run progress: 25.00% complete, ETA 00:02:42
# Fork: 1 of 1
# Warmup Iteration   1: 0.372 ops/us
# Warmup Iteration   2: 0.340 ops/us
# Warmup Iteration   3: 0.432 ops/us
# Warmup Iteration   4: 0.497 ops/us
# Warmup Iteration   5: 0.499 ops/us
Iteration   1: 0.398 ops/us
Iteration   2: 0.364 ops/us
Iteration   3: 0.408 ops/us
Iteration   4: 0.544 ops/us
Iteration   5: 0.401 ops/us
```

This code is twice as fast, and our three nines latency is better, just by virtue of keeping the code simple.
```
TestBenchmark.add:add·p0.999             sample       2.164          us/op
```

But are we getting SIMD execution? Possibly – disabling SuperWord yields noticeably worse results.
```
# Run progress: 25.00% complete, ETA 00:02:22
# Fork: 1 of 1
# Warmup Iteration   1: 0.261 ops/us
# Warmup Iteration   2: 0.343 ops/us
# Warmup Iteration   3: 0.294 ops/us
# Warmup Iteration   4: 0.320 ops/us
# Warmup Iteration   5: 0.316 ops/us
Iteration   1: 0.293 ops/us
Iteration   2: 0.276 ops/us
Iteration   3: 0.304 ops/us
Iteration   4: 0.291 ops/us
Iteration   5: 0.279 ops/us
```

It’s worth inspecting the assembly to see if we can observe the emission of AVX instructions once we re-enable UseSuperWord. Assembly is easier to read when inlining is disabled; this can be controlled in JMH with the annotation `@CompilerControl(CompilerControl.Mode.DONT_INLINE)`. With the annotation applied, the assembly is printed using the appropriate JVM args:
```
-XX:+UnlockDiagnosticVMOptions
-XX:+PrintAssembly
-XX:PrintAssemblyOptions=hsdis-print-bytes
-XX:CompileCommand=print
```

```
0x00000154edb691e0: vmovdqu ymm0,ymmword ptr [rbp+r10*8+10h]
0x00000154edb691ee: vmovdqu ymmword ptr [r8+r10*8+10h],ymm0
0x00000154edb691f9: cmp     r10d,ebx
0x00000154edb691fc: jl      154edb691e0h
```

It’s true – this code is indeed vectorised – note the AVX `vmovdqu` instructions operating on 256-bit `ymm` registers.

#### Ruining It

Having witnessed vectorisation when adding arrays together, let’s try to break it. There are a few patterns we can apply to break our vectorised code:

- Putting an OR condition as the loop condition
- Putting a non-inlined method inside the loop
- Putting an arbitrary method as the loop condition
- Manually unrolling the loop
- Using a `long` as the loop variable
- Multiple exit points
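
To make one of these patterns concrete, here is a sketch (my own, the class name is hypothetical) of the earlier array addition rewritten with a `long` induction variable, a small change that has historically been enough to defeat C2’s SuperWord pass:

```java
public class LongIndexAdd {
    // Identical arithmetic to the vectorised add, but the long induction
    // variable and the casts historically prevent SuperWord vectorisation.
    public static double[] add(double[] left, double[] right) {
        double[] result = new double[left.length];
        for (long i = 0; i < left.length && i < right.length; ++i) {
            result[(int) i] = left[(int) i] + right[(int) i];
        }
        return result;
    }

    public static void main(String[] args) {
        double[] sum = add(new double[] {1, 2, 3}, new double[] {4, 5, 6});
        System.out.println(java.util.Arrays.toString(sum)); // prints "[5.0, 7.0, 9.0]"
    }
}
```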

The list goes on, but now let’s really break it. We were using `double[]`, so let’s see what happens if we use a `DirectByteBuffer` as the backing for a homegrown vector construct instead. Instead of returning a new heap allocated array, we will write our doubles into a byte buffer, and use an offset and length to delineate the arrays. For instance, the code below will write the sum of two arrays stored in a byte buffer back into the same byte buffer. We can abstract vectors by creating small on-heap objects for each array which know the offset and length of each array in the buffer.

``````
public int add(int leftOffset, int leftLength,
               int rightOffset, int rightLength,
               int resultOffset) {
    // byteBuffer is a field referencing the backing direct ByteBuffer
    int resultIndex = resultOffset;
    for (int l = leftOffset, r = rightOffset;
         l < leftOffset + leftLength && r < rightOffset + rightLength;
         l += 8, r += 8, resultIndex += 8) {
        byteBuffer.putDouble(resultIndex, byteBuffer.getDouble(l) + byteBuffer.getDouble(r));
    }
    return resultIndex;
}
``````

Is this clever? We’ve beaten the garbage collector, and it certainly feels like a clever engineering story. But no, this is actually a huge regression.

```
# Run progress: 0.00% complete, ETA 00:00:40
# Fork: 1 of 1
# Warmup Iteration   1: 0.156 ops/us
# Warmup Iteration   2: 0.160 ops/us
# Warmup Iteration   3: 0.198 ops/us
# Warmup Iteration   4: 0.190 ops/us
# Warmup Iteration   5: 0.272 ops/us
Iteration   1: 0.220 ops/us
Iteration   2: 0.242 ops/us
Iteration   3: 0.216 ops/us
Iteration   4: 0.248 ops/us
Iteration   5: 0.351 ops/us

TestBenchmark.byteBuffer:byteBuffer·p0.999     sample       6.552           us/op
```

The obvious indicator that this is not being vectorised is that performance does not degrade when setting `-XX:-UseSuperWord`.

And to be sure, we can inspect the assembly code emitted (without disabling UseSuperWord!).
```
0x000001869216cb67: mov     edx,ecx
0x000001869216cb6c: cmp     ecx,edx
0x000001869216cb6e: jnl     1869216cb1dh
0x000001869216cb70: mov     r13d,dword ptr [r12+r11*8+8h]
0x000001869216cb75: cmp     r13d,0f8007ed8h
0x000001869216cb7c: jne     1869216d015h
0x000001869216cb82: lea     r13,[r12+r11*8]
0x000001869216cb86: movzx   eax,byte ptr [r13+29h]
0x000001869216cb8b: test    eax,eax
0x000001869216cb8d: je      1869216d015h
```

The reality is that arrays are heavily optimised, whereas `ByteBuffer`s tend to be dreadfully slow. The bottom line is: if you want to exploit instruction level parallelism, be prepared to write code even a beginner could understand instantly. Bizarre off-heap data structures and vectorised execution? Unlikely.