How much Algebra does C2 Know? Part 2: Distributivity

In part one of this series of posts, I looked at how important associativity and independence are for fast loops. C2 seems to exploit these properties to generate unrolled and pipelined machine code for loops, achieving higher throughput even in cases where the kernel of the loop is 3x slower according to vendor advertised instruction throughputs. C2 has a weird and wonderful relationship with distributivity, and hints from the programmer can both help and hinder the generation of good quality machine code.

Viability and Correctness

Distributivity is the simple notion of factoring out brackets: a(b + c) = ab + ac. Is this, in general, a viable loop rewrite strategy? If it were, it could be used to transform the method Scale into FactoredScale, both of which perform floating point arithmetic:


    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    @Benchmark
    public double Scale(DoubleData state) {
        double value = 0D;
        double[] data = state.data1;
        for (int i = 0; i < data.length; ++i) {
            value += 3.14159 * data[i];
        }
        return value;
    }

    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    @Benchmark
    public double FactoredScale(DoubleData state) {
        double value = 0D;
        double[] data = state.data1;
        for (int i = 0; i < data.length; ++i) {
            value += data[i];
        }
        return 3.14159 * value;
    }

Running the project at github with the argument --include .*scale.* shows that there may be a performance gain to be had from this rewrite, but it isn’t clear cut:

Benchmark      Mode   Threads  Samples  Score     Score Error (99.9%)  Unit    Param: size
FactoredScale  thrpt  1        10       7.011606  0.274742             ops/ms  100000
FactoredScale  thrpt  1        10       0.621515  0.026853             ops/ms  1000000
Scale          thrpt  1        10       6.962434  0.240180             ops/ms  100000
Scale          thrpt  1        10       0.671042  0.011686             ops/ms  1000000

Over the real numbers the rewrite would be completely valid, but floating point arithmetic is not associative. Joseph Darcy explains why in this deep dive on floating point semantics. Broken associativity of addition entails broken distributivity of any operation over it, so the two loops are not equivalent, and they give different outputs (e.g. 15662.513298516365 vs 15662.51329851632 for one sample input). The rewrite isn’t correct even for floating point data, so it isn’t an optimisation that could be applied in good faith, except in a very small number of cases. You have to rewrite the loop yourself and decide whether the small but inevitable differences are acceptable.
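
To make the non-equivalence concrete, here is a minimal sketch (my own example, not part of the benchmark project) comparing the two evaluation orders on a small array; the results typically differ in the last few bits:

public class DistributivityDemo {
    public static void main(String[] args) {
        double[] data = {0.1, 0.2, 0.3, 0.4, 0.5};
        double scaled = 0D;
        double sum = 0D;
        for (double x : data) {
            scaled += 3.14159 * x; // multiply inside the loop, as in Scale
            sum += x;              // defer the multiplication, as in FactoredScale
        }
        double factored = 3.14159 * sum;
        System.out.println(scaled == factored); // usually false: the roundings differ
    }
}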

Counterintuitive Performance

Integer multiplication is distributive over addition, and we can check whether C2 does this rewrite by running the same code with 32 bit integer values, for now fixing the scale factor at 10 (which seems like an innocuous value, no?)


    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    @Benchmark
    public int Scale_Int(IntData state) {
        int value = 0;
        int[] data = state.data1;
        for (int i = 0; i < data.length; ++i) {
            value += 10 * data[i];
        }
        return value;
    }

    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    @Benchmark
    public int FactoredScale_Int(IntData state) {
        int value = 0;
        int[] data = state.data1;
        for (int i = 0; i < data.length; ++i) {
            value += data[i];
        }
        return 10 * value;
    }

The results are fascinating:

Benchmark          Mode   Threads  Samples  Score      Score Error (99.9%)  Unit    Param: size
FactoredScale_Int  thrpt  1        10       28.339699  0.608075             ops/ms  100000
FactoredScale_Int  thrpt  1        10       2.392579   0.506413             ops/ms  1000000
Scale_Int          thrpt  1        10       33.335721  0.295334             ops/ms  100000
Scale_Int          thrpt  1        10       2.838242   0.448213             ops/ms  1000000

The code is doing thousands more multiplications in less time when the multiplication is not factored out of the loop. So what the devil is going on? Inspecting the assembly for the faster loop is revealing:

  0x000001c89e499400: vmovdqu ymm8,ymmword ptr [rbp+r13*4+10h]
  0x000001c89e499407: movsxd  r10,r13d       
  0x000001c89e49940a: vmovdqu ymm9,ymmword ptr [rbp+r10*4+30h]
  0x000001c89e499411: vmovdqu ymm13,ymmword ptr [rbp+r10*4+0f0h]
  0x000001c89e49941b: vmovdqu ymm12,ymmword ptr [rbp+r10*4+50h]
  0x000001c89e499422: vmovdqu ymm4,ymmword ptr [rbp+r10*4+70h]
  0x000001c89e499429: vmovdqu ymm3,ymmword ptr [rbp+r10*4+90h]
  0x000001c89e499433: vmovdqu ymm2,ymmword ptr [rbp+r10*4+0b0h]
  0x000001c89e49943d: vmovdqu ymm0,ymmword ptr [rbp+r10*4+0d0h]
  0x000001c89e499447: vpslld  ymm11,ymm8,1h  
  0x000001c89e49944d: vpslld  ymm1,ymm0,1h   
  0x000001c89e499452: vpslld  ymm0,ymm0,3h   
  0x000001c89e499457: vpaddd  ymm5,ymm0,ymm1 
  0x000001c89e49945b: vpslld  ymm0,ymm2,3h   
  0x000001c89e499460: vpslld  ymm7,ymm3,3h   
  0x000001c89e499465: vpslld  ymm10,ymm4,3h 
  0x000001c89e49946a: vpslld  ymm15,ymm12,3h
  0x000001c89e499470: vpslld  ymm14,ymm13,3h
  0x000001c89e499476: vpslld  ymm1,ymm9,3h  
  0x000001c89e49947c: vpslld  ymm2,ymm2,1h  
  0x000001c89e499481: vpaddd  ymm6,ymm0,ymm2   
  0x000001c89e499485: vpslld  ymm0,ymm3,1h     
  0x000001c89e49948a: vpaddd  ymm7,ymm7,ymm0   
  0x000001c89e49948e: vpslld  ymm0,ymm4,1h     
  0x000001c89e499493: vpaddd  ymm10,ymm10,ymm0
  0x000001c89e499497: vpslld  ymm0,ymm12,1h   
  0x000001c89e49949d: vpaddd  ymm12,ymm15,ymm0
  0x000001c89e4994a1: vpslld  ymm0,ymm13,1h   
  0x000001c89e4994a7: vpaddd  ymm4,ymm14,ymm0 
  0x000001c89e4994ab: vpslld  ymm0,ymm9,1h    
  0x000001c89e4994b1: vpaddd  ymm2,ymm1,ymm0  
  0x000001c89e4994b5: vpslld  ymm0,ymm8,3h    
  0x000001c89e4994bb: vpaddd  ymm8,ymm0,ymm11 
  0x000001c89e4994c0: vphaddd ymm0,ymm8,ymm8  
  0x000001c89e4994c5: vphaddd ymm0,ymm0,ymm3  
  0x000001c89e4994ca: vextracti128 xmm3,ymm0,1h
  0x000001c89e4994d0: vpaddd  xmm0,xmm0,xmm3   
  0x000001c89e4994d4: vmovd   xmm3,ebx         
  0x000001c89e4994d8: vpaddd  xmm3,xmm3,xmm0   
  0x000001c89e4994dc: vmovd   r10d,xmm3        
  0x000001c89e4994e1: vphaddd ymm0,ymm2,ymm2   
  0x000001c89e4994e6: vphaddd ymm0,ymm0,ymm3   
  0x000001c89e4994eb: vextracti128 xmm3,ymm0,1h
  0x000001c89e4994f1: vpaddd  xmm0,xmm0,xmm3   
  0x000001c89e4994f5: vmovd   xmm3,r10d        
  0x000001c89e4994fa: vpaddd  xmm3,xmm3,xmm0   
  0x000001c89e4994fe: vmovd   r11d,xmm3        
  0x000001c89e499503: vphaddd ymm2,ymm12,ymm12  
  0x000001c89e499508: vphaddd ymm2,ymm2,ymm0    
  0x000001c89e49950d: vextracti128 xmm0,ymm2,1h 
  0x000001c89e499513: vpaddd  xmm2,xmm2,xmm0    
  0x000001c89e499517: vmovd   xmm0,r11d         
  0x000001c89e49951c: vpaddd  xmm0,xmm0,xmm2    
  0x000001c89e499520: vmovd   r10d,xmm0         
  0x000001c89e499525: vphaddd ymm0,ymm10,ymm10  
  0x000001c89e49952a: vphaddd ymm0,ymm0,ymm3   
  0x000001c89e49952f: vextracti128 xmm3,ymm0,1h
  0x000001c89e499535: vpaddd  xmm0,xmm0,xmm3
  0x000001c89e499539: vmovd   xmm3,r10d   
  0x000001c89e49953e: vpaddd  xmm3,xmm3,xmm0   
  0x000001c89e499542: vmovd   r11d,xmm3        
  0x000001c89e499547: vphaddd ymm2,ymm7,ymm7   
  0x000001c89e49954c: vphaddd ymm2,ymm2,ymm0   
  0x000001c89e499551: vextracti128 xmm0,ymm2,1h
  0x000001c89e499557: vpaddd  xmm2,xmm2,xmm0 
  0x000001c89e49955b: vmovd   xmm0,r11d      
  0x000001c89e499560: vpaddd  xmm0,xmm0,xmm2 
  0x000001c89e499564: vmovd   r10d,xmm0      
  0x000001c89e499569: vphaddd ymm0,ymm6,ymm6   
  0x000001c89e49956e: vphaddd ymm0,ymm0,ymm3   
  0x000001c89e499573: vextracti128 xmm3,ymm0,1h
  0x000001c89e499579: vpaddd  xmm0,xmm0,xmm3   
  0x000001c89e49957d: vmovd   xmm3,r10d        
  0x000001c89e499582: vpaddd  xmm3,xmm3,xmm0   
  0x000001c89e499586: vmovd   r11d,xmm3        
  0x000001c89e49958b: vphaddd ymm2,ymm5,ymm5   
  0x000001c89e499590: vphaddd ymm2,ymm2,ymm0   
  0x000001c89e499595: vextracti128 xmm0,ymm2,1h
  0x000001c89e49959b: vpaddd  xmm2,xmm2,xmm0
  0x000001c89e49959f: vmovd   xmm0,r11d     
  0x000001c89e4995a4: vpaddd  xmm0,xmm0,xmm2
  0x000001c89e4995a8: vmovd   r10d,xmm0
  0x000001c89e4995ad: vphaddd ymm2,ymm4,ymm4 
  0x000001c89e4995b2: vphaddd ymm2,ymm2,ymm1
  0x000001c89e4995b7: vextracti128 xmm1,ymm2,1h
  0x000001c89e4995bd: vpaddd  xmm2,xmm2,xmm1
  0x000001c89e4995c1: vmovd   xmm1,r10d  
  0x000001c89e4995c6: vpaddd  xmm1,xmm1,xmm2    
  0x000001c89e4995ca: vmovd   ebx,xmm1          

The loop is aggressively unrolled, pipelined, and vectorised. Moreover, the multiplication by ten results not in a multiplication but in two left shifts (see VPSLLD) and an addition: note that (x << 1) + (x << 3) == x * 10 (the brackets matter, because + binds tighter than <<), and C2 seems to know it. This rewrite can be applied because it can be proven statically that the factor is always 10 (see the sketch after the next listing). The “optimised” loop doesn’t vectorise at all (and I have no idea why not – isn’t this a bug? Yes it is.)

  0x000002bbebeda3c8: add     ebx,dword ptr [rbp+r8*4+14h]
  0x000002bbebeda3cd: add     ebx,dword ptr [rbp+r8*4+18h]
  0x000002bbebeda3d2: add     ebx,dword ptr [rbp+r8*4+1ch]
  0x000002bbebeda3d7: add     ebx,dword ptr [rbp+r8*4+20h]
  0x000002bbebeda3dc: add     ebx,dword ptr [rbp+r8*4+24h]
  0x000002bbebeda3e1: add     ebx,dword ptr [rbp+r8*4+28h]
  0x000002bbebeda3e6: add     ebx,dword ptr [rbp+r8*4+2ch]
  0x000002bbebeda3eb: add     r13d,8h           
  0x000002bbebeda3ef: cmp     r13d,r11d         
  0x000002bbebeda3f2: jl      2bbebeda3c0h      
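
As promised, a quick sanity check of the identity C2 is exploiting (my own snippet, not from the benchmark project): multiplication by 10 decomposes into shifts because 10 = 2 + 8, and the identity survives overflow because both sides wrap modulo 2^32.

public class StrengthReduction {
    public static void main(String[] args) {
        for (int x : new int[] {0, 1, -7, 12345, Integer.MAX_VALUE}) {
            if ((x << 1) + (x << 3) != 10 * x) {
                throw new AssertionError("identity failed for " + x);
            }
        }
        System.out.println("(x << 1) + (x << 3) == x * 10 for all sampled values");
    }
}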
  

This is a special case: data is usually dynamic and variable, so the loop cannot always be proven to be equivalent to a linear combination of bit shifts. The routine is compiled for all possible parameters, not just statically contrived cases like the one above, so you may never see this assembly in the wild. However, even with random factors, the slow looking loop is aggressively optimised in a way the hand “optimised” code is not:


    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    @Benchmark
    public int Scale_Int_Dynamic(ScaleState state) {
        int value = 0;
        int[] data = state.data;
        int factor = state.randomFactor();
        for (int i = 0; i < data.length; ++i) {
            value += factor * data[i];
        }
        return value;
    }

    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    @Benchmark
    public int FactoredScale_Int_Dynamic(ScaleState state) {
        int value = 0;
        int[] data = state.data;
        int factor = state.randomFactor();
        for (int i = 0; i < data.length; ++i) {
            value += data[i];
        }
        return factor * value;
    }

Benchmark                  Mode   Threads  Samples  Score      Score Error (99.9%)  Unit    Param: size
FactoredScale_Int_Dynamic  thrpt  1        10       26.100439  0.340069             ops/ms  100000
FactoredScale_Int_Dynamic  thrpt  1        10       1.918011   0.297925             ops/ms  1000000
Scale_Int_Dynamic          thrpt  1        10       30.219809  2.977389             ops/ms  100000
Scale_Int_Dynamic          thrpt  1        10       2.314159   0.378442             ops/ms  1000000

Far from seeking to exploit distributivity to reduce the number of multiplication instructions, C2 seems almost to embrace the extraneous operations as metadata to drive optimisations. The assembly for Scale_Int_Dynamic confirms this (vectorised multiplications, not shifts, within the loop):


  0x000001f5ca2fa200: vmovdqu ymm0,ymmword ptr [r13+r14*4+10h]
  0x000001f5ca2fa207: vpmulld ymm11,ymm0,ymm2   
  0x000001f5ca2fa20c: movsxd  r10,r14d          
  0x000001f5ca2fa20f: vmovdqu ymm0,ymmword ptr [r13+r10*4+30h]
  0x000001f5ca2fa216: vmovdqu ymm1,ymmword ptr [r13+r10*4+0f0h]
  0x000001f5ca2fa220: vmovdqu ymm3,ymmword ptr [r13+r10*4+50h]
  0x000001f5ca2fa227: vmovdqu ymm7,ymmword ptr [r13+r10*4+70h]
  0x000001f5ca2fa22e: vmovdqu ymm6,ymmword ptr [r13+r10*4+90h]
  0x000001f5ca2fa238: vmovdqu ymm5,ymmword ptr [r13+r10*4+0b0h]
  0x000001f5ca2fa242: vmovdqu ymm4,ymmword ptr [r13+r10*4+0d0h]
  0x000001f5ca2fa24c: vpmulld ymm9,ymm0,ymm2    
  0x000001f5ca2fa251: vpmulld ymm4,ymm4,ymm2    
  0x000001f5ca2fa256: vpmulld ymm5,ymm5,ymm2    
  0x000001f5ca2fa25b: vpmulld ymm6,ymm6,ymm2    
  0x000001f5ca2fa260: vpmulld ymm8,ymm7,ymm2    
  0x000001f5ca2fa265: vpmulld ymm10,ymm3,ymm2   
  0x000001f5ca2fa26a: vpmulld ymm3,ymm1,ymm2    
  0x000001f5ca2fa26f: vphaddd ymm1,ymm11,ymm11  
  0x000001f5ca2fa274: vphaddd ymm1,ymm1,ymm0    
  0x000001f5ca2fa279: vextracti128 xmm0,ymm1,1h 
  0x000001f5ca2fa27f: vpaddd  xmm1,xmm1,xmm0    
  0x000001f5ca2fa283: vmovd   xmm0,ebx          
  0x000001f5ca2fa287: vpaddd  xmm0,xmm0,xmm1    
  0x000001f5ca2fa28b: vmovd   r10d,xmm0         
  0x000001f5ca2fa290: vphaddd ymm1,ymm9,ymm9    
  0x000001f5ca2fa295: vphaddd ymm1,ymm1,ymm0    
  0x000001f5ca2fa29a: vextracti128 xmm0,ymm1,1h 
  0x000001f5ca2fa2a0: vpaddd  xmm1,xmm1,xmm0    
  0x000001f5ca2fa2a4: vmovd   xmm0,r10d         
  0x000001f5ca2fa2a9: vpaddd  xmm0,xmm0,xmm1    
  0x000001f5ca2fa2ad: vmovd   r11d,xmm0         
  0x000001f5ca2fa2b2: vphaddd ymm0,ymm10,ymm10  
  0x000001f5ca2fa2b7: vphaddd ymm0,ymm0,ymm1    
  0x000001f5ca2fa2bc: vextracti128 xmm1,ymm0,1h 
  0x000001f5ca2fa2c2: vpaddd  xmm0,xmm0,xmm1    
  0x000001f5ca2fa2c6: vmovd   xmm1,r11d         
  0x000001f5ca2fa2cb: vpaddd  xmm1,xmm1,xmm0    
  0x000001f5ca2fa2cf: vmovd   r10d,xmm1         
  0x000001f5ca2fa2d4: vphaddd ymm1,ymm8,ymm8    
  0x000001f5ca2fa2d9: vphaddd ymm1,ymm1,ymm0    
  0x000001f5ca2fa2de: vextracti128 xmm0,ymm1,1h 
  0x000001f5ca2fa2e4: vpaddd  xmm1,xmm1,xmm0    
  0x000001f5ca2fa2e8: vmovd   xmm0,r10d         
  0x000001f5ca2fa2ed: vpaddd  xmm0,xmm0,xmm1    
  0x000001f5ca2fa2f1: vmovd   r11d,xmm0         
  0x000001f5ca2fa2f6: vphaddd ymm0,ymm6,ymm6    
  0x000001f5ca2fa2fb: vphaddd ymm0,ymm0,ymm1    
  0x000001f5ca2fa300: vextracti128 xmm1,ymm0,1h 
  0x000001f5ca2fa306: vpaddd  xmm0,xmm0,xmm1    
  0x000001f5ca2fa30a: vmovd   xmm1,r11d         
  0x000001f5ca2fa30f: vpaddd  xmm1,xmm1,xmm0    
  0x000001f5ca2fa313: vmovd   r10d,xmm1         
  0x000001f5ca2fa318: vphaddd ymm1,ymm5,ymm5    
  0x000001f5ca2fa31d: vphaddd ymm1,ymm1,ymm0    
  0x000001f5ca2fa322: vextracti128 xmm0,ymm1,1h 
  0x000001f5ca2fa328: vpaddd  xmm1,xmm1,xmm0    
  0x000001f5ca2fa32c: vmovd   xmm0,r10d         
  0x000001f5ca2fa331: vpaddd  xmm0,xmm0,xmm1    
  0x000001f5ca2fa335: vmovd   r11d,xmm0         
  0x000001f5ca2fa33a: vphaddd ymm0,ymm4,ymm4    
  0x000001f5ca2fa33f: vphaddd ymm0,ymm0,ymm1    
  0x000001f5ca2fa344: vextracti128 xmm1,ymm0,1h 
  0x000001f5ca2fa34a: vpaddd  xmm0,xmm0,xmm1    
  0x000001f5ca2fa34e: vmovd   xmm1,r11d         
  0x000001f5ca2fa353: vpaddd  xmm1,xmm1,xmm0    
  0x000001f5ca2fa357: vmovd   r10d,xmm1         
  0x000001f5ca2fa35c: vphaddd ymm1,ymm3,ymm3    
  0x000001f5ca2fa361: vphaddd ymm1,ymm1,ymm7    
  0x000001f5ca2fa366: vextracti128 xmm7,ymm1,1h 
  0x000001f5ca2fa36c: vpaddd  xmm1,xmm1,xmm7   
  0x000001f5ca2fa370: vmovd   xmm7,r10d        
  0x000001f5ca2fa375: vpaddd  xmm7,xmm7,xmm1   
  0x000001f5ca2fa379: vmovd   ebx,xmm7         

There are two lessons to be learnt here. The first is that what you see is not what you get. The second is about the relevance of asymptotic analysis. If hierarchical cache renders asymptotic analysis bullshit (linear time but cache friendly algorithms can, and do, outperform logarithmic algorithms that incur cache misses), optimising compilers render the field practically irrelevant.

Zeroing Negative Values in Arrays Efficiently

Replacing negatives with zeroes in large arrays of values is a primitive function of several complex financial risk measures, including potential future exposure (PFE) and the liquidity coverage ratio (LCR). While this is not an interesting operation by any stretch of the imagination, it is useful, and there is significant benefit in making it fast. It can be computed very efficiently using the instruction VMAXPD. On Intel Xeon processors this instruction has a reciprocal throughput of half a cycle and a latency (how long before another instruction can use its result) of four cycles. There is currently no way to trick Java into using this instruction for this simple operation, though there is a placeholder implementation on the current DoubleVector prototype in Project Panama which may do so.

C++ Intel Intrinsics

It’s possible to target instructions from different processor vendors, in my case Intel, by using intrinsic functions which expose instructions as high level functions. The code looks incredibly ugly but it works. Here is a C++ function for 256 bit ymm registers:


#include <immintrin.h>

void zero_negatives(const double* source, double* target, const size_t length) {
  // four doubles per iteration; any remainder (length % 4) is left untouched
  for (size_t i = 0; i + 3 < length; i += 4) {
    __m256d vector = _mm256_load_pd(source + i);
    __m256d zeroed = _mm256_max_pd(vector, _mm256_setzero_pd());
    _mm256_storeu_pd(target + i, zeroed);
  }
}

The function loads doubles into 256 bit ymm vectors, replaces the negative values in each vector with zeroes by taking the maximum of the vector and zero, and writes the results back into an array. It generates the following assembly code (which, incidentally, is less of a shit show to access than in Java):


void zero_negatives(const double* source, double* target, const size_t length) {
00007FF746EE5110  mov         qword ptr [rsp+18h],r8  
00007FF746EE5115  mov         qword ptr [rsp+10h],rdx  
00007FF746EE511A  mov         qword ptr [rsp+8],rcx  
00007FF746EE511F  push        r13  
00007FF746EE5121  push        rbp  
00007FF746EE5122  push        rdi  
00007FF746EE5123  sub         rsp,250h  
00007FF746EE512A  mov         r13,rsp  
00007FF746EE512D  lea         rbp,[rsp+20h]  
00007FF746EE5132  and         rbp,0FFFFFFFFFFFFFFE0h  
00007FF746EE5136  mov         rdi,rsp  
00007FF746EE5139  mov         ecx,94h  
00007FF746EE513E  mov         eax,0CCCCCCCCh  
00007FF746EE5143  rep stos    dword ptr [rdi]  
00007FF746EE5145  mov         rcx,qword ptr [rsp+278h]  
  for (size_t i = 0; i + 3 < length; i += 4) {
00007FF746EE514D  mov         qword ptr [rbp+8],0  
00007FF746EE5155  jmp         zero_negatives+53h (07FF746EE5163h)  
00007FF746EE5157  mov         rax,qword ptr [rbp+8]  
00007FF746EE515B  add         rax,4  
00007FF746EE515F  mov         qword ptr [rbp+8],rax  
00007FF746EE5163  mov         rax,qword ptr [rbp+8]  
00007FF746EE5167  add         rax,3  
00007FF746EE516B  cmp         rax,qword ptr [length]  
00007FF746EE5172  jae         zero_negatives+0DDh (07FF746EE51EDh)  
    __m256d vector = _mm256_load_pd(source + i);
00007FF746EE5174  mov         rax,qword ptr [source]  
00007FF746EE517B  mov         rcx,qword ptr [rbp+8]  
00007FF746EE517F  lea         rax,[rax+rcx*8]  
00007FF746EE5183  vmovupd     ymm0,ymmword ptr [rax]  
00007FF746EE5187  vmovupd     ymmword ptr [rbp+180h],ymm0  
00007FF746EE518F  vmovupd     ymm0,ymmword ptr [rbp+180h]  
00007FF746EE5197  vmovupd     ymmword ptr [rbp+40h],ymm0  
    __m256d zeroed = _mm256_max_pd(vector, _mm256_setzero_pd());
00007FF746EE519C  vxorpd      xmm0,xmm0,xmm0  
00007FF746EE51A0  vmovupd     ymmword ptr [rbp+200h],ymm0  
00007FF746EE51A8  vmovupd     ymm0,ymmword ptr [rbp+40h]  
00007FF746EE51AD  vmaxpd      ymm0,ymm0,ymmword ptr [rbp+200h]  
00007FF746EE51B5  vmovupd     ymmword ptr [rbp+1C0h],ymm0  
00007FF746EE51BD  vmovupd     ymm0,ymmword ptr [rbp+1C0h]  
00007FF746EE51C5  vmovupd     ymmword ptr [rbp+80h],ymm0  
    _mm256_storeu_pd(target + i, zeroed);
00007FF746EE51CD  mov         rax,qword ptr [target]  
00007FF746EE51D4  mov         rcx,qword ptr [rbp+8]  
00007FF746EE51D8  lea         rax,[rax+rcx*8]  
00007FF746EE51DC  vmovupd     ymm0,ymmword ptr [rbp+80h]  
00007FF746EE51E4  vmovupd     ymmword ptr [rax],ymm0  
  }
00007FF746EE51E8  jmp         zero_negatives+47h (07FF746EE5157h)  
}
00007FF746EE51ED  lea         rsp,[r13+250h]  
00007FF746EE51F4  pop         rdi  
00007FF746EE51F5  pop         rbp  
00007FF746EE51F6  pop         r13  
00007FF746EE51F8  ret    

This code is noticeably fast. I measured the throughput averaged over 1000 iterations, with an array of 100 million doubles (800MB) uniformly distributed between +/- 1E7, to quantify the throughput in GB/s and iterations/s. This code does between 4.5 and 5 iterations per second, which translates to processing approximately 4GB/s. This seems high, and since I am unaware of best practices in C++, if the measurement is flawed, I would gratefully be educated in the comments.


#include <chrono>
#include <iostream>

void benchmark() {
  const size_t length = 1E8; // 100 million doubles = 800MB
  double* values = new double[length];
  fill_array(values, length); // populates the array with random values (defined elsewhere)
  double* zeroed = new double[length];
  auto start = std::chrono::high_resolution_clock::now();
  int iterations = 1000;
  for (int i = 0; i < iterations; ++i) {
    zero_negatives(values, zeroed, length);
  }
  auto end = std::chrono::high_resolution_clock::now();
  auto nanos = std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
  double thrpt_s = (iterations * 1E9) / nanos;
  double thrpt_gbps = (thrpt_s * sizeof(double) * length) / 1E9;
  std::cout << thrpt_s << "/s" << std::endl;
  std::cout << thrpt_gbps << "GB/s" << std::endl;
  delete[] values;
  delete[] zeroed;
}

While I am sure there are various ways an expert could tweak this for performance, this code can’t get much faster unless 512 bit zmm registers are available, in which case using only 256 bit ymm registers would be wasteful. While the code looks virtually the same for AVX512 (just replace “256” with “512”), portability and efficiency are at odds. Handling the mess of detecting the best instruction set for the deployed architecture is the main reason for using Java in performance sensitive (but not critical) applications. But this is not the code the JVM generates.

Java Auto-Vectorisation (Play Your Cards Right)

There is currently no abstraction modelling vectorisation in Java. Your only access to vector hardware is through intrinsics implemented by the compiler engineers, or through auto-vectorisation, which will try, and sometimes succeed admirably, to translate your code into a good vector implementation. There is a prototype for explicit vectorisation in Project Panama. There are a few ways to skin this cat, and it’s worth looking at the code each approach generates and the throughput it achieves.

There is a choice between copying the array and zeroing out the negatives, and allocating a new array and writing only the non-negative values. There is another choice between an if statement and branchless code using Math.max. This yields the following four implementations, which I measure on data comparable to the C++ benchmark (100 million doubles, normally distributed with mean zero). To be fair to the Java code, as in the C++ benchmarks, the cost of allocation is isolated by writing into an array pre-allocated once per benchmark. This penalises the approaches where the array is copied first and then zeroed wherever the value is negative. The code is online at github.


    @Benchmark
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    public double[] BranchyCopyAndMask(ArrayWithNegatives state) {
        double[] data = state.data;
        double[] result = state.target;
        System.arraycopy(data, 0, result, 0, data.length);
        for (int i = 0; i < result.length; ++i) {
            if (result[i] < 0D) {
                result[i] = 0D;
            }
        }
        return result;
    }

    @Benchmark
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    public double[] BranchyNewArray(ArrayWithNegatives state) {
        double[] data = state.data;
        double[] result = state.target;
        for (int i = 0; i < result.length; ++i) {
            result[i] = data[i] < 0D ? 0D : data[i];
        }
        return result;
    }

    @Benchmark
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    public double[] NewArray(ArrayWithNegatives state) {
        double[] data = state.data;
        double[] result = state.target;
        for (int i = 0; i < result.length; ++i) {
            result[i] = Math.max(data[i], 0D);
        }
        return result;
    }

    @Benchmark
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    public double[] CopyAndMask(ArrayWithNegatives state) {
        double[] data = state.data;
        double[] result = state.target;
        System.arraycopy(data, 0, result, 0, data.length);
        for (int i = 0; i < result.length; ++i) {
            result[i] = Math.max(result[i], 0D);
        }
        return result;
    }

None of these implementations comes close to the native code above. The best implementation performs 1.8 iterations per second, which equates to processing approximately 1.4GB/s: vastly inferior to the 4GB/s achieved with Intel intrinsics. The results are below:

Benchmark           Mode   Threads  Samples  Score     Score Error (99.9%)  Unit
BranchyCopyAndMask  thrpt  1        10       1.314845  0.061662             ops/s
BranchyNewArray     thrpt  1        10       1.802673  0.061835             ops/s
CopyAndMask         thrpt  1        10       1.146630  0.018903             ops/s
NewArray            thrpt  1        10       1.357020  0.116481             ops/s

As an aside, there is a very interesting observation to make, worthy of its own post: if the array consists only of positive values, the “branchy” implementations run very well, at speeds comparable to zero_negatives (which ran on data with 50% negatives). The ratio of branch hits to misses is an orthogonal explanatory variable, and the input data, which I often don’t think about enough, is very important.

I only looked at the assembly emitted for the fastest version (BranchyNewArray) and it doesn’t look anything like zero_negatives: as pointed out by Daniel Lemire in the comments, this code has probably not been vectorised at all and is just using SSE2 encodings of scalar operations (indeed, only single quad words are loaded into 128 bit registers):

  0x000002ae309c3d5c: vmovsd  xmm0,qword ptr [rdx+rax*8+10h]
  0x000002ae309c3d62: vxorpd  xmm1,xmm1,xmm1    
  0x000002ae309c3d66: vucomisd xmm0,xmm1        

I don’t really understand, and haven’t thought about, the intent of the emitted code, but it makes extensive use of the instruction VUCOMISD for comparisons with zero, which has lower latency but also lower throughput than VMAXPD. It would certainly be interesting to see how Project Panama does this. Perhaps this should just be made available as a fail-safe intrinsic, like Arrays.mismatch?

Project Panama and Population Count

Project Panama introduces a new interface Vector, where the specialisation for long looks like a promising substrate for an explicitly vectorised bit set. Bit sets are useful for representing composable predicates over data sets. One obvious omission from this interface, required for an adequate implementation of a bit set, is a bit count, otherwise known as population count. Perhaps this is because the vector API aims to generalise across primitive types, whereas population count is only meaningful for integral types. Even so, if a Vector can be interpreted as a wider integer, it would be consistent to add this to the interface. If the method existed, what possible implementation could it have?

On x86, the population count of a 64 bit register is computed by the POPCNT instruction, which is exposed in Java as the intrinsic Long.bitCount. There is no SIMD equivalent in any extension set prior to VPOPCNTD/VPOPCNTQ in AVX-512. Very few processors (at the time of writing) support AVX-512 at all, and only the Knights Mill processor supports this particular extension; there are not even Intel intrinsics exposing these instructions yet.
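
To give the vectorisation target some context, here is a scalar sketch of my own (not JDK or Panama code): predicates composed as bit sets with a word-by-word AND, and the cardinality computed one POPCNT at a time via Long.bitCount.

public class BitSetOps {

    // intersect two predicates represented as bit sets
    public static long[] and(long[] left, long[] right) {
        long[] result = new long[Math.min(left.length, right.length)];
        for (int i = 0; i < result.length; ++i) {
            result[i] = left[i] & right[i];
        }
        return result;
    }

    // population count of the bit set: one scalar POPCNT per 64 bit word
    public static int cardinality(long[] bitSet) {
        int count = 0;
        for (long word : bitSet) {
            count += Long.bitCount(word);
        }
        return count;
    }
}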

The algorithm for vectorised population count adopted by the clang compiler is outlined in this paper, which builds on an algorithm designed for 128 bit registers and SSE instructions, presented by Wojciech Muła on his blog in 2008. The paper shows this approach outperforming scalar code using POPCNT and 64 bit registers, almost doubling throughput when 256 bit ymm registers are available. The core algorithm (taken from figure 10 in the paper) returns a vector of four 64 bit counts, which can then be added together in a variety of ways to form a population count. It proceeds as follows:


// The Muła Function
__m256i count(__m256i v) {
    // population counts of the 16 possible nibble values, repeated in each 128 bit lane
    __m256i lookup = _mm256_setr_epi8(
                 0, 1, 1, 2, 1, 2, 2, 3,
                 1, 2, 2, 3, 2, 3, 3, 4,
                 0, 1, 1, 2, 1, 2, 2, 3,
                 1, 2, 2, 3, 2, 3, 3, 4);
    __m256i low_mask = _mm256_set1_epi8(0x0f);
    // split each byte into its low and high nibbles
    __m256i lo = _mm256_and_si256(v, low_mask);
    __m256i hi = _mm256_and_si256(_mm256_srli_epi32(v, 4), low_mask);
    // VPSHUFB performs 32 parallel table lookups per instruction
    __m256i popcnt1 = _mm256_shuffle_epi8(lookup, lo);
    __m256i popcnt2 = _mm256_shuffle_epi8(lookup, hi);
    __m256i total = _mm256_add_epi8(popcnt1, popcnt2);
    // VPSADBW sums the byte counts within each 64 bit lane
    return _mm256_sad_epu8(total, _mm256_setzero_si256());
}

If you are struggling to read the code above, you are not alone. I haven’t programmed in C++ for several years – it’s amazing how nice the names in civilised languages like Java and Python (and even bash) are compared to the black magic above. There is some logic to the naming though: read page 5 of the manual. You can also read an accessible description of some of the functions used in this blog post.

The basic idea starts from storing the population counts for each possible byte value in a lookup table, which can be looked up using bit level parallelism and ultimately added up. For efficiency’s sake, instead of bytes, 4 bit nibbles are used, which is why you only see the numbers 0-4 in the lookup table. Various, occasionally obscure, optimisations are applied, resulting in the magic numbers at the top of the function. A large chunk of the paper is devoted to their derivation: if you are interested, go and read the paper – I could not understand the intent of the code at all until reading the paper twice, especially section 2.
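
To make the lookup idea concrete, here is a scalar Java rendition of my own (not from the paper): the same 16-entry nibble table, applied 4 bits at a time; each VPSHUFB in the vectorised version does 32 of these lookups in a single instruction.

public class NibblePopCount {

    // population counts of the 16 possible nibble values
    private static final int[] NIBBLE_COUNTS = {
        0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4
    };

    public static int popcnt(long word) {
        int count = 0;
        for (int i = 0; i < 16; ++i) {
            // look up the population count of each 4 bit nibble
            count += NIBBLE_COUNTS[(int) (word >>> (i * 4)) & 0xF];
        }
        return count;
    }
}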

The points I find interesting are:

  • This algorithm exists
  • It uses instructions all modern commodity processors have
  • It is fast
  • It is in use

Could this be implemented in the JVM as an intrinsic and exposed on Vector?

Explicit Intent and Even Faster Hash Codes

I wrote a post recently about how disappointed I was that the optimiser couldn’t outsmart some clever Java code for computing hash codes. Well, here’s a faster hash code along the same lines.

The hash code implemented in Arrays.hashCode is a polynomial hash; it applies to any data type with a positional interpretation. It takes the general form \sum_{i=0}^{n}x_{i}31^{n - i} where x_0 = 1. In other words, it’s a dot product of the elements of the array and some powers of 31. Daniel Lemire’s implementation makes it explicit to the optimiser, in a way it won’t otherwise infer, that this operation is data parallel. If it’s really just a dot product, it can be made even more obvious, at the cost of a loss of flexibility.
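
For reference, the scalar computation specified in the Arrays.hashCode Javadoc evaluates this polynomial by Horner’s rule, carrying a dependency through result in every iteration:

public static int hashCode(int[] a) {
    if (a == null)
        return 0;
    int result = 1;
    for (int element : a)
        result = 31 * result + element;
    return result;
}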

Imagine you are processing fixed or limited length strings (VARCHAR(255) or a URL, for instance) or coordinates in a space of fixed dimension. Then you could pre-compute the coefficients in an array and write the hash code explicitly as a dot product. Java 9 uses AVX instructions for dot products, so it should be very fast.


public class FixedLengthHashCode {

    private final int[] coefficients;

    public FixedLengthHashCode(int maxLength) {
        this.coefficients = new int[maxLength + 1];
        coefficients[maxLength] = 1;
        for (int i = maxLength - 1; i >= 0; --i) {
            coefficients[i] = 31 * coefficients[i + 1];
        }
    }

    public int hashCode(int[] value) {
        int result = coefficients[0];
        for (int i = 0; i < value.length && i < coefficients.length - 1; ++i) {
            result += coefficients[i + 1] * value[i];
        }
        return result;
    }
}
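
For what it’s worth, a hypothetical usage sketch (the names here are mine); note that the result only agrees with Arrays.hashCode when the input length equals maxLength, because the leading coefficient is 31^maxLength:

int[] payload = new int[255];
// ... populate payload ...
FixedLengthHashCode hasher = new FixedLengthHashCode(255); // precompute the coefficients once
int hash = hasher.hashCode(payload); // equal to Arrays.hashCode(payload) here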

This is really explicit, unambiguously parallelisable, and the results are remarkable.

Benchmark             Mode   Threads  Samples  Score      Score Error (99.9%)  Unit    Param: size
HashCode.BuiltIn      thrpt  1        10       10.323026  0.223614             ops/us  100
HashCode.BuiltIn      thrpt  1        10       0.959246   0.038900             ops/us  1000
HashCode.BuiltIn      thrpt  1        10       0.096005   0.001836             ops/us  10000
HashCode.FixedLength  thrpt  1        10       20.186800  0.297590             ops/us  100
HashCode.FixedLength  thrpt  1        10       2.314187   0.082867             ops/us  1000
HashCode.FixedLength  thrpt  1        10       0.227090   0.005377             ops/us  10000
HashCode.Unrolled     thrpt  1        10       13.250821  0.752609             ops/us  100
HashCode.Unrolled     thrpt  1        10       1.503368   0.058200             ops/us  1000
HashCode.Unrolled     thrpt  1        10       0.152179   0.003541             ops/us  10000

Modifying the algorithm slightly to support arrays of variable (but bounded) length degrades performance a little, but there are seemingly equivalent implementations which do much worse.


public class FixedLengthHashCode {

    private final int[] coefficients;

    public FixedLengthHashCode(int maxLength) {
        this.coefficients = new int[maxLength + 1];
        coefficients[0] = 1;
        for (int i = 1; i <= maxLength; ++i) {
            coefficients[i] = 31 * coefficients[i - 1];
        }
    }

    public int hashCode(int[] value) {
        final int max = value.length;
        int result = coefficients[max];
        for (int i = 0; i < value.length && i < coefficients.length - 1; ++i) {
            result += coefficients[max - i - 1] * value[i];
        }
        return result;
    }
}

Benchmark    Mode   Threads  Samples  Score      Score Error (99.9%)  Unit    Param: size
FixedLength  thrpt  1        10       19.172574  0.742637             ops/us  100
FixedLength  thrpt  1        10       2.233006   0.115285             ops/us  1000
FixedLength  thrpt  1        10       0.227451   0.012231             ops/us  10000

The benchmark code is at github.

New Methods in Java 9: Math.fma and Arrays.mismatch

There are two noteworthy new methods in Java 9: Arrays.mismatch and Math.fma.

Arrays.mismatch

This method takes two primitive arrays and returns the index of the first pair of values that differ, which is the length of the longest common prefix of the two arrays. This is really quite useful, mostly for text processing but also for bioinformatics (protein sequencing and so on: much more interesting than the sort of thing I work on). Having worked extensively with Apache HBase (where a vast majority of the API involves manipulating byte arrays) I can think of lots of less interesting use cases for this method.
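
For example (toy data of my own):

byte[] left  = {1, 2, 3, 4};
byte[] right = {1, 2, 9, 4};
Arrays.mismatch(left, right); // 2: the arrays share a common prefix of length 2
Arrays.mismatch(left, left);  // -1: no mismatch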

Looking carefully, you can see that the method calls into the internal ArraysSupport utility class, which will try to perform a vectorised mismatch (an intrinsic candidate). Since this will use AVX instructions, this is very fast; much faster than a handwritten loop.

Let’s measure the boost versus a handwritten loop, testing across a range of common prefixes and array lengths of byte[].


    @Benchmark
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    public void Mismatch_Intrinsic(BytePrefixData data, Blackhole bh) {
        bh.consume(Arrays.mismatch(data.data1, data.data2));
    }

    @Benchmark
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    public void Mismatch_Handwritten(BytePrefixData data, Blackhole bh) {
        byte[] data1 = data.data1;
        byte[] data2 = data.data2;
        int length = Math.min(data1.length, data2.length);
        int mismatch = -1;
        for (int i = 0; i < length; ++i) {
            if (data1[i] != data2[i]) {
                mismatch = i;
                break;
            }
        }
        bh.consume(mismatch);
    }

The results speak for themselves. Irritatingly, there is some duplication in output because I haven’t figured out how to make JMH use a subset of the Cartesian product of its parameters.

Benchmark             (prefix)  (size)  Mode   Cnt  Score   Error    Units
Mismatch_Handwritten  10        100     thrpt  10   22.360  ± 0.938  ops/us
Mismatch_Handwritten  10        1000    thrpt  10   2.459   ± 0.256  ops/us
Mismatch_Handwritten  10        10000   thrpt  10   0.255   ± 0.009  ops/us
Mismatch_Handwritten  100       100     thrpt  10   22.763  ± 0.869  ops/us
Mismatch_Handwritten  100       1000    thrpt  10   2.690   ± 0.044  ops/us
Mismatch_Handwritten  100       10000   thrpt  10   0.273   ± 0.008  ops/us
Mismatch_Handwritten  1000      100     thrpt  10   24.970  ± 0.713  ops/us
Mismatch_Handwritten  1000      1000    thrpt  10   2.791   ± 0.066  ops/us
Mismatch_Handwritten  1000      10000   thrpt  10   0.281   ± 0.007  ops/us
Mismatch_Intrinsic    10        100     thrpt  10   89.169  ± 2.759  ops/us
Mismatch_Intrinsic    10        1000    thrpt  10   26.995  ± 0.501  ops/us
Mismatch_Intrinsic    10        10000   thrpt  10   3.553   ± 0.065  ops/us
Mismatch_Intrinsic    100       100     thrpt  10   83.037  ± 5.590  ops/us
Mismatch_Intrinsic    100       1000    thrpt  10   26.249  ± 0.714  ops/us
Mismatch_Intrinsic    100       10000   thrpt  10   3.523   ± 0.122  ops/us
Mismatch_Intrinsic    1000      100     thrpt  10   87.921  ± 6.566  ops/us
Mismatch_Intrinsic    1000      1000    thrpt  10   25.812  ± 0.442  ops/us
Mismatch_Intrinsic    1000      10000   thrpt  10   4.177   ± 0.059  ops/us

Why is there such a big difference? Look at how the score decreases as a function of array length, even when the common prefix, and therefore the number of comparisons required, is small: clearly the performance of this algorithm depends on the efficiency of memory access. Arrays.mismatch optimises this, reading qwords of the array into SIMD registers. Working one long at a time, it can compute an XOR in a single instruction to determine whether it’s even necessary to look at each byte.
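
The kernel of that idea can be sketched in plain Java (my illustration, not the actual ArraysSupport code, which uses Unsafe and wider SIMD registers):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class LongAtATimeMismatch {

    static int mismatch(byte[] a, byte[] b) {
        int length = Math.min(a.length, b.length);
        ByteBuffer left = ByteBuffer.wrap(a).order(ByteOrder.LITTLE_ENDIAN);
        ByteBuffer right = ByteBuffer.wrap(b).order(ByteOrder.LITTLE_ENDIAN);
        int i = 0;
        for (; i + Long.BYTES <= length; i += Long.BYTES) {
            // one XOR decides whether any of these eight bytes differ
            long diff = left.getLong(i) ^ right.getLong(i);
            if (diff != 0L) {
                // little-endian: the lowest set bit lies in the first differing byte
                return i + (Long.numberOfTrailingZeros(diff) >>> 3);
            }
        }
        for (; i < length; ++i) { // scalar tail
            if (a[i] != b[i]) {
                return i;
            }
        }
        return -1; // mirrors the handwritten benchmark above, not Arrays.mismatch's full contract
    }
}

The entry point of the intrinsic itself looks like this: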

java/util/ArraysSupport.vectorizedMismatch(Ljava/lang/Object;JLjava/lang/Object;JII)I  [0x000002bd9215a820, 0x000002bd9215aa78]  600 bytes
Argument 0 is unknown.
RIP: 0x2bd9215a820  Code size: 0x00000258
[Entry Point]
[Verified Entry Point]
[Constants]
  # {method} {0x000002bda79cbf68} 'vectorizedMismatch' '(Ljava/lang/Object;JLjava/lang/Object;JII)I' in 'java/util/ArraysSupport'
  # parm0:    rdx:rdx   = 'java/lang/Object'
  # parm1:    r8:r8     = long
  # parm2:    r9:r9     = 'java/lang/Object'
  # parm3:    rdi:rdi   = long
  # parm4:    rsi       = int
  # parm5:    rcx       = int
  #           [sp+0x60]  (sp of caller)
  0x000002bd9215a820: mov     dword ptr [rsp+0ffffffffffff9000h],eax
  0x000002bd9215a827: push    rbp
  0x000002bd9215a828: sub     rsp,50h           ;*synchronization entry
                                                ; - java.util.ArraysSupport::vectorizedMismatch@-1 (line 120)
  0x000002bd9215a82c: mov     r10,rdi
  0x000002bd9215a82f: vmovq   xmm2,r9
  0x000002bd9215a834: vmovq   xmm1,rdx
  0x000002bd9215a839: mov     r14d,ecx
  0x000002bd9215a83c: vmovd   xmm0,esi
  0x000002bd9215a840: mov     r9d,3h
  0x000002bd9215a846: sub     r9d,ecx           ;*isub {reexecute=0 rethrow=0 return_oop=0}
                                                ; - java.util.ArraysSupport::vectorizedMismatch@5 (line 120)
  0x000002bd9215a849: mov     edx,esi
  0x000002bd9215a84b: mov     ecx,r9d
  0x000002bd9215a84e: sar     edx,cl            ;*ishr {reexecute=0 rethrow=0 return_oop=0}
                                                ; - java.util.ArraysSupport::vectorizedMismatch@17 (line 122)
  0x000002bd9215a850: mov     eax,1h
  0x000002bd9215a855: xor     edi,edi
  0x000002bd9215a857: test    edx,edx
  0x000002bd9215a859: jle     2bd9215a97ah
The code for this benchmark is at github.

Math.fma

In comparison to users of some languages, Java programmers are lackadaisical about floating point errors. It’s a good job that, historically, Java hasn’t been considered suitable for implementing numerical algorithms. But all of a sudden there is a revolution of data science on the JVM, albeit mostly driven by the Scala community, with JVM implementations of structures like recurrent neural networks abounding. It matters less for machine learning than for root finding, but how accurate can these implementations be without JVM level support for minimising the propagation of floating point errors? With Math.fma this is improving: two common operations can now be performed before rounding.

Math.fma fuses a multiplication and an addition into a single floating point operation to compute expressions like ab + c. This has two key benefits:

  1. There’s only one operation, and only one rounding error
  2. It maps directly to hardware: the VFMADD* instructions (the x86 FMA extension, shipped alongside AVX2 on recent processors) – see the demonstration below
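
A tiny demonstration of the single rounding (the example values are mine):

public class FmaDemo {
    public static void main(String[] args) {
        double separate = 0.1 * 10.0 - 1.0;       // rounds twice: the product rounds to exactly 1.0
        double fused = Math.fma(0.1, 10.0, -1.0); // rounds once, keeping the residual of 2^-54
        System.out.println(separate);             // 0.0
        System.out.println(fused);                // 5.551115123125783E-17
    }
}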

Newton’s Method

To investigate any superior suppression of floating point errors, I use a toy implementation of Newton’s method to compute a root of a quadratic equation, which any teenager could calculate analytically (so the error is easy to quantify).
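
For reference, both implementations below iterate the standard update rule x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, evaluating the polynomial f and its derivative at each step.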

I compare the two implementations below for 4x^2 - 12x + 9 (there is a repeated root at x = 1.5) to get an idea of the error (defined by |1.5 - x_n|) after a large number of iterations.

I implemented this using FMA:


public class NewtonsMethodFMA {

    private final double[] coefficients;

    public NewtonsMethodFMA(double[] coefficients) {
        this.coefficients = coefficients;
    }

    public double evaluateF(double x) {
        double f = 0D;
        int power = coefficients.length - 1;
        for (int i = 0; i < coefficients.length; ++i) {
            f = Math.fma(coefficients[i], Math.pow(x, power--), f);
        }
        return f;
    }

    public double evaluateDF(double x) {
        double df = 0D;
        int power = coefficients.length - 2;
        for (int i = 0; i < coefficients.length - 1; ++i) {
            df = Math.fma((power + 1) * coefficients[i],  Math.pow(x, power--), df);
        }
        return df;
    }

    public double solve(double initialEstimate, int maxIterations) {
        double result = initialEstimate;
        for (int i = 0; i < maxIterations; ++i) {
            result -= evaluateF(result)/evaluateDF(result);
        }
        return result;
    }
}

And an implementation with normal operations:



public class NewtonsMethod {

    private final double[] coefficients;

    public NewtonsMethod(double[] coefficients) {
        this.coefficients = coefficients;
    }

    public double evaluateF(double x) {
        double f = 0D;
        int power = coefficients.length - 1;
        for (int i = 0; i < coefficients.length; ++i) {
            f += coefficients[i] * Math.pow(x, power--);
        }
        return f;
    }

    public double evaluateDF(double x) {
        double df = 0D;
        int power = coefficients.length - 2;
        for (int i = 0; i < coefficients.length - 1; ++i) {
            df += (power + 1) * coefficients[i] * Math.pow(x, power--);
        }
        return df;
    }

    public double solve(double initialEstimate, int maxIterations) {
        double result = initialEstimate;
        for (int i = 0; i < maxIterations; ++i) {
            result -= evaluateF(result)/evaluateDF(result);
        }
        return result;
    }
}

When I run this code for 1000 iterations, the FMA version results in 1.5000000083575202, whereas the vanilla version results in 1.500000017233207. It’s completely unscientific, but it seems plausible and confirms my prejudice, so… In fact, it’s not that simple: over a range of initial values, there is only a very small difference in FMA’s favour. There’s not even a performance improvement – clearly this method wasn’t added so you can start implementing numerical root finding algorithms – the key takeaway is that the results are slightly different because a different rounding strategy was used.

Benchmark       (maxIterations)  Mode   Cnt  Score   Error    Units
NM_FMA          100              thrpt  10   93.805  ± 5.174  ops/ms
NM_FMA          1000             thrpt  10   9.420   ± 1.169  ops/ms
NM_FMA          10000            thrpt  10   0.962   ± 0.044  ops/ms
NM_HandWritten  100              thrpt  10   93.457  ± 5.048  ops/ms
NM_HandWritten  1000             thrpt  10   9.274   ± 0.483  ops/ms
NM_HandWritten  10000            thrpt  10   0.928   ± 0.041  ops/ms