Zeroing Negative Values in Arrays Efficiently

Replacing negatives with zeroes in large arrays of values is a primitive function of several complex financial risk measures, including potential future exposure (PFE) and the liquidity coverage ratio (LCR). While this is not an interesting operation by any stretch of the imagination, it is useful, and there is significant benefit in making it fast. It can be computed very efficiently using the instruction VMAXPD. On Intel Xeon processors, this instruction has a reciprocal throughput of half a cycle and a latency (how long before another instruction can use its result) of four cycles. There is currently no way to trick Java into using this instruction for this simple operation, though there is a placeholder implementation on the current DoubleVector prototype in Project Panama which may eventually do so.
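
For a flavour of what explicit vectorisation could look like from Java, here is a minimal sketch written against the jdk.incubator.vector API that eventually grew out of Project Panama; the shape of this API is an assumption on my part and is not the placeholder DoubleVector prototype referred to above.

import jdk.incubator.vector.DoubleVector;
import jdk.incubator.vector.VectorSpecies;

public class ZeroNegatives {

    // 256-bit lanes so that max() can map to VMAXPD on AVX2 hardware
    private static final VectorSpecies<Double> SPECIES = DoubleVector.SPECIES_256;

    public static void zeroNegatives(double[] source, double[] target) {
        DoubleVector zero = DoubleVector.zero(SPECIES);
        int i = 0;
        for (; i < SPECIES.loopBound(source.length); i += SPECIES.length()) {
            DoubleVector vector = DoubleVector.fromArray(SPECIES, source, i);
            vector.max(zero).intoArray(target, i);
        }
        for (; i < source.length; ++i) { // scalar tail
            target[i] = Math.max(source[i], 0D);
        }
    }
}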

C++ Intel Intrinsics

It’s possible to target instructions from a specific processor vendor, in my case Intel, by using intrinsic functions, which expose individual instructions as high-level functions. The code looks incredibly ugly, but it works. Here is a C++ function targeting 256-bit ymm registers:


#include <immintrin.h>

void zero_negatives(const double* source, double* target, const size_t length) {
  // _mm256_load_pd requires source + i to be 32-byte aligned; _mm256_storeu_pd
  // makes no such assumption about target. Any tail of length % 4 elements is left untouched.
  for (size_t i = 0; i + 3 < length; i += 4) {
    __m256d vector = _mm256_load_pd(source + i);
    __m256d zeroed = _mm256_max_pd(vector, _mm256_setzero_pd());
    _mm256_storeu_pd(target + i, zeroed);
  }
}

The function loads doubles into 256-bit vectors, replaces the negative values in each vector with zero, and writes the results back into the target array. It generates the following assembly code (which, incidentally, is less of a shit show to access than in Java):


void zero_negatives(const double* source, double* target, const size_t length) {
00007FF746EE5110  mov         qword ptr [rsp+18h],r8  
00007FF746EE5115  mov         qword ptr [rsp+10h],rdx  
00007FF746EE511A  mov         qword ptr [rsp+8],rcx  
00007FF746EE511F  push        r13  
00007FF746EE5121  push        rbp  
00007FF746EE5122  push        rdi  
00007FF746EE5123  sub         rsp,250h  
00007FF746EE512A  mov         r13,rsp  
00007FF746EE512D  lea         rbp,[rsp+20h]  
00007FF746EE5132  and         rbp,0FFFFFFFFFFFFFFE0h  
00007FF746EE5136  mov         rdi,rsp  
00007FF746EE5139  mov         ecx,94h  
00007FF746EE513E  mov         eax,0CCCCCCCCh  
00007FF746EE5143  rep stos    dword ptr [rdi]  
00007FF746EE5145  mov         rcx,qword ptr [rsp+278h]  
  for (size_t i = 0; i + 3 < length; i += 4) {
00007FF746EE514D  mov         qword ptr [rbp+8],0  
00007FF746EE5155  jmp         zero_negatives+53h (07FF746EE5163h)  
00007FF746EE5157  mov         rax,qword ptr [rbp+8]  
00007FF746EE515B  add         rax,4  
00007FF746EE515F  mov         qword ptr [rbp+8],rax  
00007FF746EE5163  mov         rax,qword ptr [rbp+8]  
00007FF746EE5167  add         rax,3  
00007FF746EE516B  cmp         rax,qword ptr [length]  
00007FF746EE5172  jae         zero_negatives+0DDh (07FF746EE51EDh)  
    __m256d vector = _mm256_load_pd(source + i);
00007FF746EE5174  mov         rax,qword ptr [source]  
00007FF746EE517B  mov         rcx,qword ptr [rbp+8]  
00007FF746EE517F  lea         rax,[rax+rcx*8]  
00007FF746EE5183  vmovupd     ymm0,ymmword ptr [rax]  
00007FF746EE5187  vmovupd     ymmword ptr [rbp+180h],ymm0  
00007FF746EE518F  vmovupd     ymm0,ymmword ptr [rbp+180h]  
00007FF746EE5197  vmovupd     ymmword ptr [rbp+40h],ymm0  
    __m256d zeroed = _mm256_max_pd(vector, _mm256_setzero_pd());
00007FF746EE519C  vxorpd      xmm0,xmm0,xmm0  
00007FF746EE51A0  vmovupd     ymmword ptr [rbp+200h],ymm0  
00007FF746EE51A8  vmovupd     ymm0,ymmword ptr [rbp+40h]  
00007FF746EE51AD  vmaxpd      ymm0,ymm0,ymmword ptr [rbp+200h]  
00007FF746EE51B5  vmovupd     ymmword ptr [rbp+1C0h],ymm0  
00007FF746EE51BD  vmovupd     ymm0,ymmword ptr [rbp+1C0h]  
00007FF746EE51C5  vmovupd     ymmword ptr [rbp+80h],ymm0  
    _mm256_storeu_pd(target + i, zeroed);
00007FF746EE51CD  mov         rax,qword ptr [target]  
00007FF746EE51D4  mov         rcx,qword ptr [rbp+8]  
00007FF746EE51D8  lea         rax,[rax+rcx*8]  
00007FF746EE51DC  vmovupd     ymm0,ymmword ptr [rbp+80h]  
00007FF746EE51E4  vmovupd     ymmword ptr [rax],ymm0  
  }
00007FF746EE51E8  jmp         zero_negatives+47h (07FF746EE5157h)  
}
00007FF746EE51ED  lea         rsp,[r13+250h]  
00007FF746EE51F4  pop         rdi  
00007FF746EE51F5  pop         rbp  
00007FF746EE51F6  pop         r13  
00007FF746EE51F8  ret    

This code is noticeably fast. I measured the throughput averaged over 1000 iterations, with an array of 100 million doubles (800MB) uniformly distributed between +/- 1E7, to quantify the throughput in GB/s and iterations/s. This code completes between 4.5 and 5 iterations per second, which translates to processing approximately 4GB/s (roughly 5 iterations/s × 800MB). This seems high, and since I am not aware of best practices for benchmarking C++, if the measurement is flawed I would be grateful to be educated in the comments.


#include <chrono>
#include <cstddef>
#include <iostream>

void benchmark() {
  const size_t length = 1E8; // 100 million doubles = 800MB per array
  double* values = new double[length];
  fill_array(values, length); // populates values with the test distribution (defined elsewhere)
  double* zeroed = new double[length];
  auto start = std::chrono::high_resolution_clock::now();
  int iterations = 1000;
  for (int i = 0; i < iterations; ++i) {
    zero_negatives(values, zeroed, length);
  }
  auto end = std::chrono::high_resolution_clock::now();
  auto nanos = std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
  double thrpt_s = (iterations * 1E9) / nanos;                   // iterations per second
  double thrpt_gbps = (thrpt_s * sizeof(double) * length) / 1E9; // GB/s
  std::cout << thrpt_s << "/s" << std::endl;
  std::cout << thrpt_gbps << "GB/s" << std::endl;
  delete[] values;
  delete[] zeroed;
}

While I am sure an expert could tweak this for even more performance, this code can’t get much faster unless 512-bit zmm registers are available, in which case running the 256-bit version would be wasteful. The AVX-512 code looks virtually the same (just replace “256” with “512” in the intrinsic names), but portability and efficiency are at odds: not having to handle the mess of detecting the best instruction set for the deployed architecture is one of the main reasons for using Java in performance-sensitive (but not performance-critical) applications. But this is not the code the JVM generates.

Java Auto-Vectorisation (Play Your Cards Right)

There is currently no abstraction modelling vectorisation in Java. The only access you get is indirect: either the compiler engineers implement an intrinsic, or auto-vectorisation tries, and sometimes succeeds admirably, to translate your code into a good vector implementation. (There is a prototype for explicit vectorisation in Project Panama, mentioned above.) There are a few ways to skin this cat, and it’s worth looking at the code each approach generates and the throughput it achieves.

There is a choice between copying the array and zeroing out the negatives, and allocating a new array and only writing the non-negative values. There is another choice between an if statement and branchless code using Math.max. This gives the following four implementations, which I measure on data comparable to the C++ benchmark (100 million doubles, normally distributed with mean zero). To be fair to the Java code, as in the C++ benchmark, the cost of allocation is isolated by writing into an array pre-allocated once per benchmark; this penalises the approaches where the array is copied first and then zeroed wherever the value is negative. The code is online on GitHub.


    @Benchmark
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    public double[] BranchyCopyAndMask(ArrayWithNegatives state) {
        double[] data = state.data;
        double[] result = state.target;
        System.arraycopy(data, 0, result, 0, data.length);
        for (int i = 0; i < result.length; ++i) {
            if (result[i] < 0D) {
                result[i] = 0D;
            }
        }
        return result;
    }

    @Benchmark
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    public double[] BranchyNewArray(ArrayWithNegatives state) {
        double[] data = state.data;
        double[] result = state.target;
        for (int i = 0; i < result.length; ++i) {
            result[i] = data[i] < 0D ? 0D : data[i];
        }
        return result;
    }

    @Benchmark
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    public double[] NewArray(ArrayWithNegatives state) {
        double[] data = state.data;
        double[] result = state.target;
        for (int i = 0; i < result.length; ++i) {
            result[i] = Math.max(data[i], 0D);
        }
        return result;
    }

    @Benchmark
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    public double[] CopyAndMask(ArrayWithNegatives state) {
        double[] data = state.data;
        double[] result = state.target;
        System.arraycopy(data, 0, result, 0, data.length);
        for (int i = 0; i < result.length; ++i) {
            result[i] = Math.max(result[i], 0D);
        }
        return result;
    }

None of these implementations comes close to the native code above. The best performs 1.8 iterations per second, which equates to processing approximately 1.4GB/s (1.8 iterations/s × 800MB), vastly inferior to the 4GB/s achieved with Intel intrinsics. The results are below:

Benchmark           Mode   Threads  Samples  Score     Score Error (99.9%)  Unit
BranchyCopyAndMask  thrpt  1        10       1.314845  0.061662             ops/s
BranchyNewArray     thrpt  1        10       1.802673  0.061835             ops/s
CopyAndMask         thrpt  1        10       1.146630  0.018903             ops/s
NewArray            thrpt  1        10       1.357020  0.116481             ops/s

As an aside, there is a very interesting observation here, worthy of its own post: if the array consists only of positive values, the “branchy” implementations run very well, at speeds comparable to zero_negatives (which ran against data with 50% negatives). The ratio of branch hits to misses is an explanatory variable in its own right, and the input data, which I often don’t think about enough, matters a great deal.
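
One way to probe this would be to parameterise the state class by the fraction of negative values; a minimal sketch with hypothetical parameters (JMH annotation and java.util.SplittableRandom imports omitted, as in the other snippets, and the real state class in the repository may differ):

    @State(Scope.Benchmark)
    public static class ArrayWithNegatives {

        @Param({"0.0", "0.5", "1.0"})
        double negativeFraction; // proportion of values made negative

        double[] data;
        double[] target;

        @Setup(Level.Trial)
        public void setup() {
            SplittableRandom random = new SplittableRandom(0);
            data = new double[100_000_000];
            target = new double[data.length];
            for (int i = 0; i < data.length; ++i) {
                double magnitude = random.nextDouble() * 1E7;
                data[i] = random.nextDouble() < negativeFraction ? -magnitude : magnitude;
            }
        }
    }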

I only looked at the assembly emitted for the fastest version (BranchyNewArray), and it doesn’t look anything like zero_negatives. Though at first glance it appears to use vector instructions, as pointed out by Daniel Lemire in the comments, this code has probably not been vectorised at all: only quad words (single doubles) are loaded into the 128-bit registers:

  0x000002ae309c3d5c: vmovsd  xmm0,qword ptr [rdx+rax*8+10h]
  0x000002ae309c3d62: vxorpd  xmm1,xmm1,xmm1    
  0x000002ae309c3d66: vucomisd xmm0,xmm1        

I haven’t fully worked through the intent of the emitted code, but it makes extensive use of the instruction VUCOMISD for comparisons with zero, which has a lower latency but also a lower throughput than VMAXPD, and only compares a single double at a time. It would certainly be interesting to see how Project Panama handles this. Perhaps this operation should just be made available as a fail-safe intrinsic, like Arrays.mismatch?

Targeting SIMD in Java

Vectorised instruction execution can be targeted directly in C++, but in Java there are extra layers of abstraction to go through. Folklore aside, when does vectorisation or SIMD execution actually happen in Java? Sceptical of old wives’ tales, I investigate when SIMD instructions are actually used in Java 9, and how to defeat vectorisation by programming badly.

Building a Benchmark Harness

I use JMH to write benchmarks. For the uninitiated, it is the standard Java micro-benchmarking framework and handles various pitfalls, the most salient being that it ensures your code actually executes (rather than being eliminated as dead code) and that measurement is performed against a monotonic measure of time. Benchmarks produced without JMH are unlikely to be correct and should arouse suspicion, especially if used during a sales process.

Averages always lie, so minimising the 99.9th percentile execution time is a better objective to have in mind. As a general performance indicator, I also measure throughput in operations per unit of time, which is useful to correlate with JVM compilation events.
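
JMH will report both views if asked; a minimal sketch of the annotations involved (the method body is a placeholder, and this is illustrative rather than the exact configuration of my harness):

    // Throughput gives ops per unit of time; SampleTime reports percentile latencies (p0.999 etc.).
    @BenchmarkMode({Mode.Throughput, Mode.SampleTime})
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    @Benchmark
    public void example(Blackhole bh) {
        bh.consume(new double[1 << 10]); // any benchmark method can be annotated like this
    }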

To tune performance, before even worrying about achieving SIMD execution, we need to be aware of recompilation and of failures to inline. These diagnostics are enabled by the arguments below, and my simple harness has a specific mode to support them:

-XX:+PrintCompilation 
-XX:+UnlockDiagnosticVMOptions 
-XX:+PrintInlining

The proof that code is actually being vectorised is the emission of AVX instructions. To confirm this has happened (and when it does, it is correlated with astonishing performance statistics), we need to see the generated assembly, so I run the benchmark in a mode that prints it out via the arguments:

-XX:+UnlockDiagnosticVMOptions
-XX:+PrintAssembly
-XX:PrintAssemblyOptions=hsdis-print-bytes 
-XX:CompileCommand=print

However, auto-vectorised SIMD execution only happens when SuperWord parallelism is enabled (and, by default, it is), so we won’t even need to look at the assembly unless we see a clear difference when running the benchmark with the option disabled:

-XX:-UseSuperWord

What Gains Can Be Expected?

Is vectorised code execution a panacea? Assuming we can get the JVM to use SIMD, how much performance can be expected? It turns out that java.util.Arrays.fill can be vectorised, so we can get a taste of the difference it makes by benchmarking throughput with and without SuperWord level parallelism.


    private void benchmark(String... jvmArgs) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(this.getClass().getName() + ".*")
                .mode(Mode.Throughput)
                .timeUnit(TimeUnit.MILLISECONDS)
                .warmupIterations(5)
                .measurementIterations(5)
                .forks(1)
                .shouldFailOnError(true)
                .shouldDoGC(true)
                .operationsPerInvocation(1_000_000)
                .jvmArgs(jvmArgs)
                .build();
        new Runner(opt).run();
    }

    @Benchmark
    public void fillArray(Blackhole bh) {
        double[] array = new double[1 << 10];
        for (int i = 0; i < 1_000_000; ++i) {
            Arrays.fill(array, i);
            bh.consume(array);
        }
    }
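
The two runs below were produced by invoking that helper once per flag; a sketch of how the comparison might be driven from the same class (the main method is my own addition, not part of the harness):

    public static void main(String... args) throws RunnerException {
        TestBenchmark benchmark = new TestBenchmark();
        benchmark.benchmark("-XX:-UseSuperWord"); // scalar baseline
        benchmark.benchmark("-XX:+UseSuperWord"); // default: SuperWord auto-vectorisation enabled
    }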

# VM version: JDK 9-ea, VM 9-ea+166
...
# VM options: -XX:-UseSuperWord
...

Benchmark                 Mode  Cnt    Score     Error   Units
TestBenchmark.fillArray  thrpt    5  966.947 ± 596.705  ops/ms

# VM version: JDK 9-ea, VM 9-ea+166
...
# VM options: -XX:+UseSuperWord
...

Benchmark                 Mode  Cnt     Score      Error   Units
TestBenchmark.fillArray  thrpt    5  4230.802 ± 5311.001  ops/ms

The difference in throughput is palpable and astonishing.

Intrinsics

Various intrinsics, such as the Arrays.fill call above, compile down to vectorised instructions, and when SuperWord parallelism is enabled they run much faster. The JIT compiler will also auto-vectorise simple hand-written loops, but intrinsics are a safer bet.
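
To make that distinction concrete, compare a hand-rolled fill loop, which depends on C2’s SuperWord pass recognising the loop shape, with the intrinsified library call (a sketch, not taken from the benchmark code):

    // Relies on auto-vectorisation: C2 must recognise and vectorise this loop itself.
    static void manualFill(double[] array, double value) {
        for (int i = 0; i < array.length; ++i) {
            array[i] = value;
        }
    }

    // Relies on an intrinsic: the JIT substitutes an optimised stub for the library call.
    static void intrinsicFill(double[] array, double value) {
        java.util.Arrays.fill(array, value);
    }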

Adding Arrays Together

Addition of arrays of primitives can be vectorised automatically.


    public double[] add(double[] left, double[] right) {
        double[] result = new double[left.length];
        for(int i = 0; i < left.length && i < right.length; ++i) {
            result[i] = left[i] + right[i];
        }
        return result;
    }

Benchmarking the add method, which has a small code size of 43 bytes, the method is inlined; throughput is doubled, and we see evidence of dynamic optimisation during warmup.

# Run progress: 25.00% complete, ETA 00:02:42
# Fork: 1 of 1
# Warmup Iteration   1: 0.372 ops/us
# Warmup Iteration   2: 0.340 ops/us
# Warmup Iteration   3: 0.432 ops/us
# Warmup Iteration   4: 0.497 ops/us
# Warmup Iteration   5: 0.499 ops/us
Iteration   1: 0.398 ops/us
Iteration   2: 0.364 ops/us
Iteration   3: 0.408 ops/us
Iteration   4: 0.544 ops/us
Iteration   5: 0.401 ops/us

This code is twice as fast, and our three nines (the 99.9th percentile latency) is better, just by virtue of keeping the code simple.
TestBenchmark.add:add·p0.999             sample       2.164          us/op

But are we getting SIMD execution? Possibly – disabling SuperWord yields noticeably worse results.
# Run progress: 25.00% complete, ETA 00:02:22
# Fork: 1 of 1
# Warmup Iteration   1: 0.261 ops/us
# Warmup Iteration   2: 0.343 ops/us
# Warmup Iteration   3: 0.294 ops/us
# Warmup Iteration   4: 0.320 ops/us
# Warmup Iteration   5: 0.316 ops/us
Iteration   1: 0.293 ops/us
Iteration   2: 0.276 ops/us
Iteration   3: 0.304 ops/us
Iteration   4: 0.291 ops/us
Iteration   5: 0.279 ops/us

It’s worth inspecting the assembly to see whether we can observe the emission of AVX instructions once UseSuperWord is re-enabled. The assembly is easier to read if inlining is disabled, which can be controlled in JMH with the annotation @CompilerControl(CompilerControl.Mode.DONT_INLINE). With the annotation applied, the assembly is printed using the appropriate JVM args:
-XX:+UnlockDiagnosticVMOptions 
-XX:+PrintAssembly
-XX:PrintAssemblyOptions=hsdis-print-bytes 
-XX:CompileCommand=print

0x00000154edb691e0: vmovdqu ymm0,ymmword ptr [rbp+r10*8+10h]
0x00000154edb691e7: vaddpd  ymm0,ymm0,ymmword ptr [rdx+r10*8+10h]
0x00000154edb691ee: vmovdqu ymmword ptr [r8+r10*8+10h],ymm0
0x00000154edb691f5: add     r10d,4h
0x00000154edb691f9: cmp     r10d,ebx
0x00000154edb691fc: jl      154edb691e0h

It’s true – this code is indeed vectorised: the vmovdqu and vaddpd instructions above operate on 256-bit ymm registers.

Ruining It

Having witnessed vectorisation when adding arrays together, let’s try to break it. There are a few patterns we can apply to break our vectorised code (two of which are sketched after the list):

  • Putting an OR condition as the loop condition
  • Putting a non-inlined method inside the loop
  • Putting an arbitrary method as the loop condition
  • Manually unrolling the loop
  • Using a long as the loop variable
  • Multiple exit points
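
Two of those patterns applied to the array addition above, purely for illustration (these are my own sketches, not taken from the benchmark code):

    // Using a long as the loop variable: per the list above, this defeats auto-vectorisation.
    public double[] addLongIndex(double[] left, double[] right) {
        double[] result = new double[left.length];
        for (long i = 0; i < left.length; ++i) {
            result[(int) i] = left[(int) i] + right[(int) i];
        }
        return result;
    }

    // An arbitrary method call as the loop condition.
    public double[] addOpaqueCondition(double[] left, double[] right) {
        double[] result = new double[left.length];
        for (int i = 0; keepGoing(i, left.length, right.length); ++i) {
            result[i] = left[i] + right[i];
        }
        return result;
    }

    private boolean keepGoing(int i, int leftLength, int rightLength) {
        return i < leftLength && i < rightLength;
    }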

The list goes on, but now let’s really fuck it up. We were using double[], so let’s see what happens if we use a DirectByteBuffer as the backing for a homegrown vector construct instead. Rather than returning a new heap-allocated array, we will write our doubles into a byte buffer, and use an offset and a length to delineate each array. For instance, the code below writes the sum of two arrays stored in a byte buffer back into the same byte buffer. We can abstract vectors by creating small on-heap objects which know the offset and length of each array in the buffer.


    public int add(ByteBuffer byteBuffer,
                   int leftOffset, int leftLength,
                   int rightOffset, int rightLength,
                   int resultOffset) {
        int resultIndex = resultOffset;
        for (int l = leftOffset, r = rightOffset;
             l < leftOffset + leftLength && r < rightOffset + rightLength;
             l += 8, r += 8, resultIndex += 8) {
            byteBuffer.putDouble(resultIndex, byteBuffer.getDouble(l) + byteBuffer.getDouble(r));
        }
        return resultIndex;
    }
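
The small on-heap wrapper objects alluded to above might look something like this (a sketch with hypothetical names; this is not the code that was benchmarked):

    // A lightweight on-heap view over a slice of the backing ByteBuffer.
    public static final class BufferVector {

        private final ByteBuffer buffer;
        private final int offset; // byte offset of the first element in the buffer
        private final int length; // number of doubles in this vector

        BufferVector(ByteBuffer buffer, int offset, int length) {
            this.buffer = buffer;
            this.offset = offset;
            this.length = length;
        }

        double get(int index) {
            return buffer.getDouble(offset + index * Double.BYTES);
        }

        void set(int index, double value) {
            buffer.putDouble(offset + index * Double.BYTES, value);
        }

        int length() {
            return length;
        }
    }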

Is this clever? We’ve beaten the garbage collector. It certainly feels like a clever engineering story. No, this is actually a huge regression.

# Run progress: 0.00% complete, ETA 00:00:40
# Fork: 1 of 1
# Warmup Iteration   1: 0.156 ops/us
# Warmup Iteration   2: 0.160 ops/us
# Warmup Iteration   3: 0.198 ops/us
# Warmup Iteration   4: 0.190 ops/us
# Warmup Iteration   5: 0.272 ops/us
Iteration   1: 0.220 ops/us
Iteration   2: 0.242 ops/us
Iteration   3: 0.216 ops/us
Iteration   4: 0.248 ops/us
Iteration   5: 0.351 ops/us

TestBenchmark.byteBuffer:byteBuffer·p0.999     sample       6.552           us/op

The obvious indicator that this is not being vectorised is that performance does not degrade when setting
-XX:-UseSuperWord

To be sure, we can inspect the assembly code emitted (without disabling UseSuperWord!):
  0x000001869216cb67: mov     edx,ecx
  0x000001869216cb69: add     edx,r9d
  0x000001869216cb6c: cmp     ecx,edx
  0x000001869216cb6e: jnl     1869216cb1dh
  0x000001869216cb70: mov     r13d,dword ptr [r12+r11*8+8h]
  0x000001869216cb75: cmp     r13d,0f8007ed8h
  0x000001869216cb7c: jne     1869216d015h
  0x000001869216cb82: lea     r13,[r12+r11*8]
  0x000001869216cb86: movzx   eax,byte ptr [r13+29h]
  0x000001869216cb8b: test    eax,eax
  0x000001869216cb8d: je      1869216d015h

The reality is that arrays are heavily optimised, whereas ByteBuffers tend to be dreadfully slow. The bottom line is that if you want to exploit instruction-level parallelism, be prepared to write code even a beginner could understand instantly. Bizarre off-heap data structures and vectorised execution? Unlikely.