#### Matrix Multiplication Revisited

In a recent post, I took a look at matrix multiplication in pure Java, to see if it could go faster than reported in SIMD Intrinsics on Managed Language Runtimes. I found faster implementations than the paper’s benchmarks implied were possible. Nevertheless, I found limitations in HotSpot’s autovectoriser that I didn’t expect to see, even in JDK10. Were these limitations somehow fundamental, or can other compilers do better with essentially the same input?

I took a look at the code generated by GCC’s autovectoriser to see what’s possible in C/C++ without resorting to complicated intrinsics. For a bit of fun, I went over to the dark side to squeeze out a lot of extra performance, which inspired a simple vectorised Java implementation that can maintain intensity as matrix size increases.

### Background

The paper reports a 5x improvement in matrix multiplication throughput as a result of using LMS generated intrinsics. Using GCC as LMS’s backend, I easily reproduced very good throughput, but I found two Java implementations better than the paper’s baseline. The best performing Java implementation proposed in the paper was `blocked`. This post is not about the LMS benchmarks, but that code is this post’s inspiration.

``````
public void blocked(float[] a, float[] b, float[] c, int n) {
  int BLOCK_SIZE = 8;
  // GOOD: attempts to do as much work as possible in submatrices
  // GOOD: tries to avoid bringing data through cache multiple times
  for (int kk = 0; kk < n; kk += BLOCK_SIZE) {
    for (int jj = 0; jj < n; jj += BLOCK_SIZE) {
      for (int i = 0; i < n; i++) {
        for (int j = jj; j < jj + BLOCK_SIZE; ++j) {
          // BAD: manual unrolling, bypasses optimisations
          float sum = c[i * n + j];
          for (int k = kk; k < kk + BLOCK_SIZE; ++k) {
            // BAD: horizontal sums are inefficient
            sum += a[i * n + k] * b[k * n + j];
          }
          c[i * n + j] = sum;
        }
      }
    }
  }
}
``````

I proposed the following implementation for improved cache efficiency and expected it to vectorise automatically.

``````
public void fast(float[] a, float[] b, float[] c, int n) {
  // GOOD: 2x faster than "blocked" - why?
  int in = 0;
  for (int i = 0; i < n; ++i) {
    int kn = 0;
    for (int k = 0; k < n; ++k) {
      float aik = a[in + k];
      // MIXED: passes over c[in:in+n] multiple times per k-value, "free" if n is small
      // MIXED: reloads b[kn:kn+n] repeatedly for each i, bad if n is large, "free" if n is small
      // BAD: doesn't vectorise but should
      for (int j = 0; j < n; ++j) {
        c[in + j] += aik * b[kn + j]; // sequential writes and reads, cache and vectoriser friendly
      }
      kn += n;
    }
    in += n;
  }
}
``````

My code actually doesn’t vectorise, even in JDK10, which really surprised me because the inner loop vectorises if the offsets are always zero. In any case, there is a simple hack involving the use of buffers, which unfortunately thrashes the cache, but narrows the field significantly.

``````
public void fastBuffered(float[] a, float[] b, float[] c, int n) {
  float[] bBuffer = new float[n];
  float[] cBuffer = new float[n];
  int in = 0;
  for (int i = 0; i < n; ++i) {
    int kn = 0;
    for (int k = 0; k < n; ++k) {
      float aik = a[in + k];
      System.arraycopy(b, kn, bBuffer, 0, n);
      saxpy(n, aik, bBuffer, cBuffer);
      kn += n;
    }
    System.arraycopy(cBuffer, 0, c, in, n);
    Arrays.fill(cBuffer, 0f);
    in += n;
  }
}
``````

I left the problem there, with the “JDKX vectorised” series in that post’s results using the algorithm above with the buffer hack.

### GCC Autovectorisation

The Java code is very easy to translate into C/C++. Before looking at performance, I want to get an idea of what GCC’s autovectoriser does at optimisation level 3, with unrolled loops, FMA, and AVX2 enabled:

`g++ -mavx2 -mfma -march=native -funroll-loops -O3 -S mmul.cpp`

The generated assembly code can be seen in full context here. Let’s look at the `mmul_saxpy` routine first:

``````
static void mmul_saxpy(const int n, const float* left, const float* right, float* result) {
  int in = 0;
  for (int i = 0; i < n; ++i) {
    int kn = 0;
    for (int k = 0; k < n; ++k) {
      float aik = left[in + k];
      for (int j = 0; j < n; ++j) {
        result[in + j] += aik * right[kn + j];
      }
      kn += n;
    }
    in += n;
  }
}
``````

This routine uses SIMD instructions, which means in principle any other compiler could do this too. The inner loop has been unrolled, but this is only by virtue of the `-funroll-loops` flag. C2 does this sort of thing as standard, but only for hot loops. In general you might not want to unroll loops because of the impact on code size, and it’s great that a JIT compiler can decide only to do this when it’s profitable.

``````
.L9:
vmovups  (%rdx,%rax), %ymm4
vmovaps  %ymm4, (%r11,%rax)
vmovups  32(%rdx,%rax), %ymm5
vmovaps  %ymm5, 32(%r11,%rax)
vmovups  64(%rdx,%rax), %ymm1
vmovaps  %ymm1, 64(%r11,%rax)
vmovups  96(%rdx,%rax), %ymm2
vmovaps  %ymm2, 96(%r11,%rax)
vmovups  128(%rdx,%rax), %ymm4
vmovaps  %ymm4, 128(%r11,%rax)
vmovups  160(%rdx,%rax), %ymm5
vmovaps  %ymm5, 160(%r11,%rax)
vmovups  192(%rdx,%rax), %ymm1
vmovaps  %ymm1, 192(%r11,%rax)
vmovups  224(%rdx,%rax), %ymm2
vmovaps  %ymm2, 224(%r11,%rax)
cmpl  %r10d, 24(%rsp)
ja  .L9
``````

The `mmul_blocked` routine is compiled to quite convoluted assembly. It has a huge problem with the expression `right[k * n + j]`, which strides down a column, requires a gather to vectorise, and is almost guaranteed to cause 8 cache misses per block for large matrices. Moreover, this inefficiency gets much worse with problem size.
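To see the scale of the problem: consecutive values of k in `right[k * n + j]` are n floats (4n bytes) apart, so once 4n reaches the cache line size, every one of the eight reads in a block lands on a different line. A small sketch of this arithmetic; the helper names are mine, and the 64 byte line size is an assumption matching my hardware:

```java
// Illustrates why reading down a column of right[] is cache hostile:
// successive k values are n floats apart, so for large n each of the
// 8 reads in a block touches a distinct cache line.
public class GatherStride {
    static final int FLOAT_BYTES = 4;
    static final int CACHE_LINE_BYTES = 64; // assumption: typical x86 line size

    // byte distance between right[k * n + j] and right[(k + 1) * n + j]
    static int strideBytes(int n) {
        return n * FLOAT_BYTES;
    }

    // distinct cache lines touched by an 8-deep column read
    static int linesTouched(int n) {
        int stride = strideBytes(n);
        return stride >= CACHE_LINE_BYTES ? 8 : Math.max(1, (8 * stride) / CACHE_LINE_BYTES);
    }

    public static void main(String[] args) {
        System.out.println(linesTouched(8));    // tiny matrix: the column spans few lines
        System.out.println(linesTouched(1024)); // 8 distinct lines per block column
    }
}
```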

``````
static void mmul_blocked(const int n, const float* left, const float* right, float* result) {
  int BLOCK_SIZE = 8;
  for (int kk = 0; kk < n; kk += BLOCK_SIZE) {
    for (int jj = 0; jj < n; jj += BLOCK_SIZE) {
      for (int i = 0; i < n; i++) {
        for (int j = jj; j < jj + BLOCK_SIZE; ++j) {
          float sum = result[i * n + j];
          for (int k = kk; k < kk + BLOCK_SIZE; ++k) {
            sum += left[i * n + k] * right[k * n + j]; // second read here requires a gather
          }
          result[i * n + j] = sum;
        }
      }
    }
  }
}
``````

This compiles to assembly with the unrolled vectorised loop below:

``````
.L114:
cmpq  %r10, %r9
setbe  %cl
cmpq  56(%rsp), %r8
setnb  %dl
orl  %ecx, %edx
cmpq  %r14, %r9
setbe  %cl
cmpq  64(%rsp), %r8
setnb  %r15b
orl  %ecx, %r15d
andl  %edx, %r15d
cmpq  %r11, %r9
setbe  %cl
cmpq  48(%rsp), %r8
setnb  %dl
orl  %ecx, %edx
andl  %r15d, %edx
cmpq  %rbx, %r9
setbe  %cl
cmpq  40(%rsp), %r8
setnb  %r15b
orl  %ecx, %r15d
andl  %edx, %r15d
cmpq  %rsi, %r9
setbe  %cl
cmpq  32(%rsp), %r8
setnb  %dl
orl  %ecx, %edx
andl  %r15d, %edx
cmpq  %rdi, %r9
setbe  %cl
cmpq  24(%rsp), %r8
setnb  %r15b
orl  %ecx, %r15d
andl  %edx, %r15d
cmpq  %rbp, %r9
setbe  %cl
cmpq  16(%rsp), %r8
setnb  %dl
orl  %ecx, %edx
andl  %r15d, %edx
cmpq  %r12, %r9
setbe  %cl
cmpq  8(%rsp), %r8
setnb  %r15b
orl  %r15d, %ecx
testb  %cl, %dl
je  .L111
leaq  32(%rax), %rdx
cmpq  %rdx, %r8
setnb  %cl
cmpq  %rax, %r9
setbe  %r15b
orb  %r15b, %cl
je  .L111
vmovups  (%r8), %ymm2
vmovups  %ymm0, (%r8)
``````

### Benchmarks

I implemented a suite of benchmarks to compare the implementations. You can run them, but since they measure throughput and intensity averaged over hundreds of iterations per matrix size, the full run will take several hours.

`g++ -mavx2 -mfma -march=native -funroll-loops -O3 mmul.cpp -o mmul.exe && ./mmul.exe > results.csv`

The `saxpy` routine wins, with `blocked` fading fast after a middling start.

| name | size | throughput (ops/s) | flops/cycle |
|---|---|---|---|
| blocked | 64 | 22770.2 | 4.5916 |
| saxpy | 64 | 25638.4 | 5.16997 |
| blocked | 128 | 2736.9 | 4.41515 |
| saxpy | 128 | 4108.52 | 6.62783 |
| blocked | 192 | 788.132 | 4.29101 |
| saxpy | 192 | 1262.45 | 6.87346 |
| blocked | 256 | 291.728 | 3.76492 |
| saxpy | 256 | 521.515 | 6.73044 |
| blocked | 320 | 147.979 | 3.72997 |
| saxpy | 320 | 244.528 | 6.16362 |
| blocked | 384 | 76.986 | 3.35322 |
| saxpy | 384 | 150.441 | 6.55264 |
| blocked | 448 | 50.4686 | 3.4907 |
| saxpy | 448 | 95.0752 | 6.57594 |
| blocked | 512 | 30.0085 | 3.09821 |
| saxpy | 512 | 65.1842 | 6.72991 |
| blocked | 576 | 22.8301 | 3.35608 |
| saxpy | 576 | 44.871 | 6.59614 |
| blocked | 640 | 15.5007 | 3.12571 |
| saxpy | 640 | 32.3709 | 6.52757 |
| blocked | 704 | 12.2478 | 3.28726 |
| saxpy | 704 | 25.3047 | 6.79166 |
| blocked | 768 | 8.69277 | 3.02899 |
| saxpy | 768 | 19.8011 | 6.8997 |
| blocked | 832 | 7.29356 | 3.23122 |
| saxpy | 832 | 15.3437 | 6.7976 |
| blocked | 896 | 4.95207 | 2.74011 |
| saxpy | 896 | 11.9611 | 6.61836 |
| blocked | 960 | 3.4467 | 2.34571 |
| saxpy | 960 | 9.25535 | 6.29888 |
| blocked | 1024 | 2.02289 | 1.67082 |
| saxpy | 1024 | 6.87039 | 5.67463 |

With GCC autovectorisation, `saxpy` performs well, maintaining intensity as size increases, albeit well below the theoretical capacity. It would be nice if similar code could be JIT compiled in Java.
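For reference, the flops/cycle column can be recovered from the throughput column: one multiplication of two n x n matrices performs 2n³ flops. A sketch of the conversion, assuming the ~2.6GHz sustained clock that the table’s figures imply (the clock value is my inference, not a measurement quoted in this post):

```java
// Converts multiplications-per-second into flops-per-cycle:
// a full n x n multiplication performs 2 * n^3 flops.
public class Intensity {
    static double flopsPerCycle(double opsPerSecond, int n, double clockHz) {
        double flopsPerOp = 2.0 * n * n * n; // one multiply and one add per inner step
        return opsPerSecond * flopsPerOp / clockHz;
    }

    public static void main(String[] args) {
        // saxpy at n = 64 from the table above, assuming a 2.6GHz clock
        System.out.println(flopsPerCycle(25638.4, 64, 2.6e9)); // ~5.17, matching the table
    }
}
```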

### Intel Intrinsics

To understand the problem space a bit better, I found out how fast matrix multiplication can get without domain expertise by handcrafting an algorithm with intrinsics. My laptop’s Skylake chip (turbo boost and hyperthreading disabled) is capable of 32 SP flops per cycle per core – Java and the LMS implementation previously fell a long way short of that. It was difficult to get beyond 4f/c with Java, and LMS peaked at almost 6f/c before quickly tailing off. GCC autovectorisation achieved and maintained 7f/c.
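The 32 flops/cycle figure is simple accounting: Skylake has two 256 bit FMA ports, each FMA operates on 8 single-precision lanes, and each lane’s fused multiply-add counts as two flops. A sketch of that arithmetic:

```java
// Peak single-precision throughput of one AVX2 Skylake core:
// 2 FMA ports x 8 float lanes per 256-bit register x 2 flops per FMA.
public class PeakFlops {
    static int peakFlopsPerCycle(int fmaPorts, int lanesPerRegister) {
        return fmaPorts * lanesPerRegister * 2; // each FMA is a multiply and an add
    }

    public static void main(String[] args) {
        System.out.println(peakFlopsPerCycle(2, 8)); // 32
    }
}
```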

To start, I’ll take full advantage of the facility to align the matrices on 64 byte boundaries, since I have 64B cache lines, though this might just be voodoo. I take the `saxpy` routine and replace its kernel with intrinsics. Because of the `-funroll-loops` option, this will get unrolled without effort.

``````
static void mmul_saxpy_avx(const int n, const float* left, const float* right, float* result) {
  int in = 0;
  for (int i = 0; i < n; ++i) {
    int kn = 0;
    for (int k = 0; k < n; ++k) {
      __m256 aik = _mm256_set1_ps(left[in + k]);
      int j = 0;
      for (; j + 8 <= n; j += 8) {
        // vectorised kernel: result[in + j .. in + j + 7] += aik * right[kn + j .. kn + j + 7]
        _mm256_store_ps(result + in + j,
            _mm256_fmadd_ps(aik, _mm256_load_ps(right + kn + j), _mm256_load_ps(result + in + j)));
      }
      for (; j < n; ++j) { // scalar tail for any remainder
        result[in + j] += left[in + k] * right[kn + j];
      }
      kn += n;
    }
    in += n;
  }
}
``````

This code is actually not a lot faster, if at all, than the basic `saxpy` above: a lot of aggressive optimisations have already been applied.

### Combining Blocked and SAXPY

What makes `blocked` so poor is the gather and the cache miss, not the concept of blocking itself. A limiting factor for `saxpy` performance is that the ratio of loads to floating point operations is too high. With this in mind, I tried combining the blocking idea with `saxpy`, by implementing `saxpy` multiplications for smaller sub-matrices. This results in a different algorithm with fewer loads per floating point operation, and the inner two loops are swapped. It avoids the gather and the cache miss in `blocked`. Because the matrices are in row major format, I make the width of the blocks much larger than the height. Also, different heights and widths make sense depending on the size of the matrix, so I choose them dynamically. The design constraints are to avoid gathers and horizontal reduction.
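To quantify the load-to-flop ratio: the `saxpy` kernel issues two vector loads (one of the right operand, one of the accumulator) per vector FMA, whereas a tiled kernel with eight 8-wide accumulators and block height h loads the accumulators once and amortises them over h FMAs each. A rough model of loads per FMA; the counting is my simplification (stores are ignored and the broadcast is counted as a load):

```java
// Rough model of 256-bit loads issued per vector FMA in the two kernels.
public class LoadsPerFma {
    // saxpy: every 8-wide FMA reloads one lane-group of c and one of b
    static double saxpy() {
        return 2.0;
    }

    // tiled: per row of a block, 8 accumulators are loaded once, then each of
    // the h k-steps issues 1 broadcast and 8 loads of b to feed 8 FMAs
    static double tiled(int h) {
        return (8.0 + 8.0 * h + h) / (8.0 * h);
    }

    public static void main(String[] args) {
        System.out.println(saxpy());  // 2.0 loads per FMA
        System.out.println(tiled(8)); // 1.25 loads per FMA with block height 8
    }
}
```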

``````
static void mmul_tiled_avx(const int n, const float *left, const float *right, float *result) {
  const int block_width = n >= 256 ? 512 : 256;
  const int block_height = n >= 512 ? 8 : n >= 256 ? 16 : 32;
  for (int row_offset = 0; row_offset < n; row_offset += block_height) {
    for (int column_offset = 0; column_offset < n; column_offset += block_width) {
      for (int i = 0; i < n; ++i) {
        for (int j = column_offset; j < column_offset + block_width && j < n; j += 8) {
          __m256 sum = _mm256_load_ps(result + i * n + j);
          for (int k = row_offset; k < row_offset + block_height && k < n; ++k) {
            sum = _mm256_fmadd_ps(_mm256_set1_ps(left[i * n + k]), _mm256_load_ps(right + k * n + j), sum);
          }
          _mm256_store_ps(result + i * n + j, sum);
        }
      }
    }
  }
}
``````

You will see in the benchmark results that this routine really doesn’t do very well compared to `saxpy`. Finally, I unroll it, which is profitable despite setting `-funroll-loops` because there is slightly more to this than an unroll. This is a sequence of vertical reductions which have no data dependencies.

``````
static void mmul_tiled_avx_unrolled(const int n, const float *left, const float *right, float *result) {
  const int block_width = n >= 256 ? 512 : 256;
  const int block_height = n >= 512 ? 8 : n >= 256 ? 16 : 32;
  for (int column_offset = 0; column_offset < n; column_offset += block_width) {
    for (int row_offset = 0; row_offset < n; row_offset += block_height) {
      for (int i = 0; i < n; ++i) {
        for (int j = column_offset; j < column_offset + block_width && j < n; j += 64) {
          __m256 sum1 = _mm256_load_ps(result + i * n + j);
          __m256 sum2 = _mm256_load_ps(result + i * n + j + 8);
          __m256 sum3 = _mm256_load_ps(result + i * n + j + 16);
          __m256 sum4 = _mm256_load_ps(result + i * n + j + 24);
          __m256 sum5 = _mm256_load_ps(result + i * n + j + 32);
          __m256 sum6 = _mm256_load_ps(result + i * n + j + 40);
          __m256 sum7 = _mm256_load_ps(result + i * n + j + 48);
          __m256 sum8 = _mm256_load_ps(result + i * n + j + 56);
          for (int k = row_offset; k < row_offset + block_height && k < n; ++k) {
            __m256 multiplier = _mm256_set1_ps(left[i * n + k]);
            sum1 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j), sum1);
            sum2 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 8), sum2);
            sum3 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 16), sum3);
            sum4 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 24), sum4);
            sum5 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 32), sum5);
            sum6 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 40), sum6);
            sum7 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 48), sum7);
            sum8 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 56), sum8);
          }
          _mm256_store_ps(result + i * n + j, sum1);
          _mm256_store_ps(result + i * n + j + 8, sum2);
          _mm256_store_ps(result + i * n + j + 16, sum3);
          _mm256_store_ps(result + i * n + j + 24, sum4);
          _mm256_store_ps(result + i * n + j + 32, sum5);
          _mm256_store_ps(result + i * n + j + 40, sum6);
          _mm256_store_ps(result + i * n + j + 48, sum7);
          _mm256_store_ps(result + i * n + j + 56, sum8);
        }
      }
    }
  }
}
``````

This final implementation is fast, and is probably as good as I am going to manage without reading papers. This should be a CPU bound problem: the algorithm does O(n^3) work on O(n^2) data. Yet flops/cycle decreases with problem size in all of these implementations. It’s possible this could be ameliorated by a better dynamic tiling policy, though I’m unlikely to be the one to fix that.
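The arithmetic intensity of the algorithm actually grows with n: 2n³ flops over three n x n float matrices of 4n² bytes each is n/6 flops per byte of matrix data, so the decay in flops/cycle points at cache capacity rather than a shortage of arithmetic. A sketch of the ratio:

```java
// Arithmetic intensity of n x n single-precision matrix multiplication:
// 2 * n^3 flops over three matrices of 4 * n^2 bytes each, i.e. n / 6 flops per byte.
public class ArithmeticIntensity {
    static double flopsPerByte(int n) {
        double flops = 2.0 * n * n * n;
        double bytes = 3.0 * 4.0 * n * n; // a, b, c at 4 bytes per float
        return flops / bytes;
    }

    public static void main(String[] args) {
        System.out.println(flopsPerByte(64));   // ~10.7 flops per byte
        System.out.println(flopsPerByte(1024)); // ~170.7 flops per byte
    }
}
```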

It does make a huge difference being able to go very low level – handwritten intrinsics with GCC unlock awesome throughput – but it’s quite hard to actually get to the point where you can beat a good optimising compiler. Mind you, there are harder problems to solve than this, and you may well be a domain expert.

The benchmark results summarise this best:

| name | size | throughput (ops/s) | flops/cycle |
|---|---|---|---|
| saxpy_avx | 64 | 49225.7 | 9.92632 |
| tiled_avx | 64 | 33680.5 | 6.79165 |
| tiled_avx_unrolled | 64 | 127936 | 25.7981 |
| saxpy_avx | 128 | 5871.02 | 9.47109 |
| tiled_avx | 128 | 4210.07 | 6.79166 |
| tiled_avx_unrolled | 128 | 15997.6 | 25.8072 |
| saxpy_avx | 192 | 1603.84 | 8.73214 |
| tiled_avx | 192 | 1203.33 | 6.55159 |
| tiled_avx_unrolled | 192 | 4383.09 | 23.8638 |
| saxpy_avx | 256 | 633.595 | 8.17689 |
| tiled_avx | 256 | 626.157 | 8.0809 |
| tiled_avx_unrolled | 256 | 1792.52 | 23.1335 |
| saxpy_avx | 320 | 284.161 | 7.1626 |
| tiled_avx | 320 | 323.197 | 8.14656 |
| tiled_avx_unrolled | 320 | 935.571 | 23.5822 |
| saxpy_avx | 384 | 161.517 | 7.03508 |
| tiled_avx | 384 | 188.215 | 8.19794 |
| tiled_avx_unrolled | 384 | 543.235 | 23.6613 |
| saxpy_avx | 448 | 99.1987 | 6.86115 |
| tiled_avx | 448 | 118.588 | 8.2022 |
| tiled_avx_unrolled | 448 | 314 | 21.718 |
| saxpy_avx | 512 | 70.0296 | 7.23017 |
| tiled_avx | 512 | 73.2019 | 7.55769 |
| tiled_avx_unrolled | 512 | 197.815 | 20.4233 |
| saxpy_avx | 576 | 46.1944 | 6.79068 |
| tiled_avx | 576 | 50.6315 | 7.44294 |
| tiled_avx_unrolled | 576 | 126.045 | 18.5289 |
| saxpy_avx | 640 | 33.8209 | 6.81996 |
| tiled_avx | 640 | 37.0288 | 7.46682 |
| tiled_avx_unrolled | 640 | 92.784 | 18.7098 |
| saxpy_avx | 704 | 24.9096 | 6.68561 |
| tiled_avx | 704 | 27.7543 | 7.44912 |
| tiled_avx_unrolled | 704 | 69.0399 | 18.53 |
| saxpy_avx | 768 | 19.5158 | 6.80027 |
| tiled_avx | 768 | 21.532 | 7.50282 |
| tiled_avx_unrolled | 768 | 54.1763 | 18.8777 |
| saxpy_avx | 832 | 12.8635 | 5.69882 |
| tiled_avx | 832 | 14.6666 | 6.49766 |
| tiled_avx_unrolled | 832 | 37.9592 | 16.8168 |
| saxpy_avx | 896 | 12.0526 | 6.66899 |
| tiled_avx | 896 | 13.3799 | 7.40346 |
| tiled_avx_unrolled | 896 | 34.0838 | 18.8595 |
| saxpy_avx | 960 | 8.97193 | 6.10599 |
| tiled_avx | 960 | 10.1052 | 6.87725 |
| tiled_avx_unrolled | 960 | 21.0263 | 14.3098 |
| saxpy_avx | 1024 | 6.73081 | 5.55935 |
| tiled_avx | 1024 | 7.21214 | 5.9569 |
| tiled_avx_unrolled | 1024 | 12.7768 | 10.5531 |

### Can we do better in Java?

Writing genuinely fast code gives an indication of how little of the processor Java actually utilises, but is it possible to bring this knowledge over to Java? The `saxpy` based implementations in my previous post performed well for small to medium sized matrices. Once the matrices grow, however, they become too big to be allowed to pass through cache multiple times: we need hot, small cached data to be replenished from the larger matrix. Ideally we wouldn’t need to make any copies, but it seems that the autovectoriser doesn’t like offsets: `System.arraycopy` is a reasonably fast compromise. The basic sequential read pattern is validated: even native code requiring a gather does not perform well for this problem. The best effort C++ code translates almost verbatim into this Java code, which is quite fast for large matrices.

``````
public void tiled(float[] a, float[] b, float[] c, int n) {
  final int bufferSize = 512;
  final int width = Math.min(n, bufferSize);
  final int height = Math.min(n, n >= 512 ? 8 : n >= 256 ? 16 : 32);
  float[] sum = new float[bufferSize];
  float[] vector = new float[bufferSize];
  for (int rowOffset = 0; rowOffset < n; rowOffset += height) {
    for (int columnOffset = 0; columnOffset < n; columnOffset += width) {
      for (int i = 0; i < n; ++i) {
        for (int j = columnOffset; j < columnOffset + width && j < n; j += width) {
          int stride = Math.min(n - columnOffset, bufferSize);
          // copy to give autovectorisation a hint
          System.arraycopy(c, i * n + j, sum, 0, stride);
          for (int k = rowOffset; k < rowOffset + height && k < n; ++k) {
            float multiplier = a[i * n + k];
            System.arraycopy(b, k * n + j, vector, 0, stride);
            for (int l = 0; l < stride; ++l) {
              sum[l] = Math.fma(multiplier, vector[l], sum[l]);
            }
          }
          System.arraycopy(sum, 0, c, i * n + j, stride);
        }
      }
    }
  }
}
``````

Benchmarking it using the same harness used in the previous post, the performance is ~10% higher for large arrays than my previous best effort. Still, the reality is that this is too slow to be useful. If you need to do linear algebra, use C/C++ for the time being!

| Benchmark | Mode | Threads | Samples | Score | Score Error (99.9%) | Unit | Param: size | flops/cycle |
|---|---|---|---|---|---|---|---|---|
| fastBuffered | thrpt | 1 | 10 | 53.331195 | 0.270526 | ops/s | 448 | 3.688688696 |
| fastBuffered | thrpt | 1 | 10 | 34.365765 | 0.16641 | ops/s | 512 | 3.548072999 |
| fastBuffered | thrpt | 1 | 10 | 26.128264 | 0.239719 | ops/s | 576 | 3.840914622 |
| fastBuffered | thrpt | 1 | 10 | 19.044509 | 0.139197 | ops/s | 640 | 3.84031059 |
| fastBuffered | thrpt | 1 | 10 | 14.312154 | 1.045093 | ops/s | 704 | 3.841312378 |
| fastBuffered | thrpt | 1 | 10 | 7.772745 | 0.074598 | ops/s | 768 | 2.708411991 |
| fastBuffered | thrpt | 1 | 10 | 6.276182 | 0.067338 | ops/s | 832 | 2.780495238 |
| fastBuffered | thrpt | 1 | 10 | 4.8784 | 0.067368 | ops/s | 896 | 2.699343067 |
| fastBuffered | thrpt | 1 | 10 | 4.068907 | 0.038677 | ops/s | 960 | 2.769160387 |
| fastBuffered | thrpt | 1 | 10 | 2.568101 | 0.108612 | ops/s | 1024 | 2.121136502 |
| tiled | thrpt | 1 | 10 | 56.495366 | 0.584872 | ops/s | 448 | 3.907540754 |
| tiled | thrpt | 1 | 10 | 30.884954 | 3.221017 | ops/s | 512 | 3.188698735 |
| tiled | thrpt | 1 | 10 | 15.580581 | 0.412654 | ops/s | 576 | 2.290381075 |
| tiled | thrpt | 1 | 10 | 9.178969 | 0.841178 | ops/s | 640 | 1.850932038 |
| tiled | thrpt | 1 | 10 | 12.229763 | 0.350233 | ops/s | 704 | 3.282408783 |
| tiled | thrpt | 1 | 10 | 9.371032 | 0.330742 | ops/s | 768 | 3.265334889 |
| tiled | thrpt | 1 | 10 | 7.727068 | 0.277969 | ops/s | 832 | 3.423271628 |
| tiled | thrpt | 1 | 10 | 6.076451 | 0.30305 | ops/s | 896 | 3.362255222 |
| tiled | thrpt | 1 | 10 | 4.916811 | 0.2823 | ops/s | 960 | 3.346215151 |
| tiled | thrpt | 1 | 10 | 3.722623 | 0.26486 | ops/s | 1024 | 3.074720008 |


#### Zeroing Negative Values in Arrays Efficiently

Replacing negatives with zeroes in large arrays of values is a primitive function of several complex financial risk measures, including potential future exposure (PFE) and the liquidity coverage ratio (LCR). While this is not an interesting operation by any stretch of the imagination, it is useful and there is significant benefit in making it fast. It can be computed very efficiently using the instruction `VMAXPD`. On Intel Xeon processors, this instruction has a reciprocal throughput of half a cycle and a latency (how long before another instruction can use its result) of four cycles. There is currently no way to trick Java into using this instruction for this simple operation, though there is a placeholder implementation on the current `DoubleVector` prototype in Project Panama which may do so.

### C++ Intel Intrinsics

It’s possible to target instructions from different processor vendors, in my case Intel, by using intrinsic functions which expose instructions as high level functions. The code looks incredibly ugly but it works. Here is a C++ function for 256 bit ymm registers:

``````
void zero_negatives(const double* source, double* target, const size_t length) {
  for (size_t i = 0; i + 3 < length; i += 4) {
    __m256d vector = _mm256_load_pd(source + i);
    __m256d zeroed = _mm256_max_pd(vector, _mm256_setzero_pd());
    _mm256_storeu_pd(target + i, zeroed);
  }
}
``````

The function loads doubles into 256 bit vectors, within each vector replaces the negative values with zero, and writes them back into an array. It generates the following assembly code (which, incidentally, is less of a shit show to access than in Java):

``````
void zero_negatives(const double* source, double* target, const size_t length) {
00007FF746EE5110  mov         qword ptr [rsp+18h],r8
00007FF746EE5115  mov         qword ptr [rsp+10h],rdx
00007FF746EE511A  mov         qword ptr [rsp+8],rcx
00007FF746EE511F  push        r13
00007FF746EE5121  push        rbp
00007FF746EE5122  push        rdi
00007FF746EE5123  sub         rsp,250h
00007FF746EE512A  mov         r13,rsp
00007FF746EE512D  lea         rbp,[rsp+20h]
00007FF746EE5132  and         rbp,0FFFFFFFFFFFFFFE0h
00007FF746EE5136  mov         rdi,rsp
00007FF746EE5139  mov         ecx,94h
00007FF746EE513E  mov         eax,0CCCCCCCCh
00007FF746EE5143  rep stos    dword ptr [rdi]
00007FF746EE5145  mov         rcx,qword ptr [rsp+278h]
for (size_t i = 0; i + 3 < length; i += 4) {
00007FF746EE514D  mov         qword ptr [rbp+8],0
00007FF746EE5155  jmp         zero_negatives+53h (07FF746EE5163h)
00007FF746EE5157  mov         rax,qword ptr [rbp+8]
00007FF746EE515F  mov         qword ptr [rbp+8],rax
00007FF746EE5163  mov         rax,qword ptr [rbp+8]
00007FF746EE516B  cmp         rax,qword ptr [length]
00007FF746EE5172  jae         zero_negatives+0DDh (07FF746EE51EDh)
__m256d vector = _mm256_load_pd(source + i);
00007FF746EE5174  mov         rax,qword ptr [source]
00007FF746EE517B  mov         rcx,qword ptr [rbp+8]
00007FF746EE517F  lea         rax,[rax+rcx*8]
00007FF746EE5183  vmovupd     ymm0,ymmword ptr [rax]
00007FF746EE5187  vmovupd     ymmword ptr [rbp+180h],ymm0
00007FF746EE518F  vmovupd     ymm0,ymmword ptr [rbp+180h]
00007FF746EE5197  vmovupd     ymmword ptr [rbp+40h],ymm0
__m256d zeroed = _mm256_max_pd(vector, _mm256_setzero_pd());
00007FF746EE519C  vxorpd      xmm0,xmm0,xmm0
00007FF746EE51A0  vmovupd     ymmword ptr [rbp+200h],ymm0
00007FF746EE51A8  vmovupd     ymm0,ymmword ptr [rbp+40h]
00007FF746EE51B5  vmovupd     ymmword ptr [rbp+1C0h],ymm0
00007FF746EE51BD  vmovupd     ymm0,ymmword ptr [rbp+1C0h]
00007FF746EE51C5  vmovupd     ymmword ptr [rbp+80h],ymm0
_mm256_storeu_pd(target + i, zeroed);
00007FF746EE51CD  mov         rax,qword ptr [target]
00007FF746EE51D4  mov         rcx,qword ptr [rbp+8]
00007FF746EE51D8  lea         rax,[rax+rcx*8]
00007FF746EE51DC  vmovupd     ymm0,ymmword ptr [rbp+80h]
00007FF746EE51E4  vmovupd     ymmword ptr [rax],ymm0
}
00007FF746EE51E8  jmp         zero_negatives+47h (07FF746EE5157h)
}
00007FF746EE51ED  lea         rsp,[r13+250h]
00007FF746EE51F4  pop         rdi
00007FF746EE51F5  pop         rbp
00007FF746EE51F6  pop         r13
00007FF746EE51F8  ret
``````

This code is noticeably fast. I measured the throughput averaged over 1000 iterations, with an array of 100 million doubles (800MB) uniformly distributed between +/- 1E7, to quantify the throughput in GB/s and iterations/s. This code does between 4.5 and 5 iterations per second, which translates to processing approximately 4GB/s. This seems high, and since I am unaware of best practices in C++, if the measurement is flawed, I would gratefully be educated in the comments.

``````
void benchmark() {
  const size_t length = 1E8;
  double* values = new double[length];
  fill_array(values, length);
  double* zeroed = new double[length];
  auto start = std::chrono::high_resolution_clock::now();
  int iterations = 1000;
  for (int i = 0; i < iterations; ++i) {
    zero_negatives(values, zeroed, length);
  }
  auto end = std::chrono::high_resolution_clock::now();
  auto nanos = std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
  double thrpt_s = (iterations * 1E9) / nanos;
  double thrpt_gbps = (thrpt_s * sizeof(double) * length) / 1E9;
  std::cout << thrpt_s << "/s" << std::endl;
  std::cout << thrpt_gbps << "GB/s" << std::endl;
  delete[] values;
  delete[] zeroed;
}
``````
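The quoted figures are internally consistent: each iteration processes 1E8 doubles at 8 bytes each, i.e. 0.8GB, so five iterations per second is 4GB/s by the same formula the harness uses. A sketch of that check:

```java
// Converts an iteration rate over a double array into GB/s, mirroring the
// thrpt_gbps calculation in the C++ benchmark above.
public class ThroughputCheck {
    static double gbPerSecond(double iterationsPerSecond, long length) {
        return iterationsPerSecond * 8.0 * length / 1e9; // 8 bytes per double
    }

    public static void main(String[] args) {
        System.out.println(gbPerSecond(5.0, 100_000_000L)); // 4.0, matching ~4GB/s
    }
}
```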

While I am sure there are various ways an expert could tweak this for performance, this code can’t get much faster unless there are 512 bit zmm registers available, in which case it would be wasteful. While the code looks virtually the same for AVX512 (just replace “256” with “512”), portability and efficiency are at odds. Handling the mess of detecting the best instruction set for the deployed architecture is the main reason for using Java in performance sensitive (but not critical) applications. But this is not the code the JVM generates.

### Java Auto-Vectorisation (Play Your Cards Right)

There is currently no abstraction modelling vectorisation in Java. The only access available is if the compiler engineers implement an intrinsic, or auto-vectorisation, which will try, and sometimes succeed admirably, to translate your code to a good vector implementation. There is currently a prototype project for explicit vectorisation in Project Panama. There are a few ways to skin this cat, and it’s worth looking at the code they generate and the throughput available from each approach.

There is a choice between copying the array and zeroing out the negatives, and allocating a new array and only writing the non-negative values. There is another choice between an if statement and branchless code using `Math.max`. This results in the following four implementations, which I measure on comparable data to the C++ benchmark (100 million doubles, normally distributed with mean zero). To be fair to the Java code, as in the C++ benchmarks, the cost of allocation is isolated by writing into an array pre-allocated once per benchmark. This penalises the approaches where the array is copied first and then zeroed wherever the value is negative. The code is online at github.

``````
@Benchmark
@CompilerControl(CompilerControl.Mode.DONT_INLINE)
public double[] BranchyCopyAndMask(ArrayWithNegatives state) {
  double[] data = state.data;
  double[] result = state.target;
  System.arraycopy(data, 0, result, 0, data.length);
  for (int i = 0; i < result.length; ++i) {
    if (result[i] < 0D) {
      result[i] = 0D;
    }
  }
  return result;
}

@Benchmark
@CompilerControl(CompilerControl.Mode.DONT_INLINE)
public double[] BranchyNewArray(ArrayWithNegatives state) {
  double[] data = state.data;
  double[] result = state.target;
  for (int i = 0; i < result.length; ++i) {
    result[i] = data[i] < 0D ? 0D : data[i];
  }
  return result;
}

@Benchmark
@CompilerControl(CompilerControl.Mode.DONT_INLINE)
public double[] NewArray(ArrayWithNegatives state) {
  double[] data = state.data;
  double[] result = state.target;
  for (int i = 0; i < result.length; ++i) {
    result[i] = Math.max(data[i], 0D);
  }
  return result;
}

@Benchmark
@CompilerControl(CompilerControl.Mode.DONT_INLINE)
public double[] CopyAndMask(ArrayWithNegatives state) {
  double[] data = state.data;
  double[] result = state.target;
  System.arraycopy(data, 0, result, 0, data.length);
  for (int i = 0; i < result.length; ++i) {
    result[i] = Math.max(result[i], 0D);
  }
  return result;
}
``````

None of these implementations comes close to the native code above. The best implementation performs 1.8 iterations per second which equates to processing approximately 1.4GB/s, vastly inferior to the 4GB/s achieved with Intel intrinsics. The results are below:

| Benchmark | Mode | Threads | Samples | Score | Score Error (99.9%) | Unit |
|---|---|---|---|---|---|---|
| BranchyCopyAndMask | thrpt | 1 | 10 | 1.314845 | 0.061662 | ops/s |
| BranchyNewArray | thrpt | 1 | 10 | 1.802673 | 0.061835 | ops/s |
| CopyAndMask | thrpt | 1 | 10 | 1.146630 | 0.018903 | ops/s |
| NewArray | thrpt | 1 | 10 | 1.357020 | 0.116481 | ops/s |

As an aside, there is a very interesting observation to make, worthy of its own post: if the array consists only of positive values, the “branchy” implementations run very well, at speeds comparable to `zero_negatives` (which ran with 50% negatives). The ratio of branch hits to misses is an orthogonal explanatory variable, and the input data, though I often don’t think about it enough, matters a great deal.

I only looked at the assembly emitted for the fastest version (BranchyNewArray) and it doesn’t look anything like `zero_negatives`, though it does use some vectorisation – as pointed out by Daniel Lemire in the comments, this code has probably not been vectorised and is probably using SSE2 (indeed only quad words are loaded into 128 bit registers):

``````
com/openkappa/simd/positive/PositiveValues.BranchyNewArray(Lcom/openkappa/simd/positive/ArrayWithNegatives;)[D  [0x000002ae309c3ce0, 0x000002ae309c3ff8]  792 bytes
Argument 0 is unknown.RIP: 0x2ae309c3ce0 Code size: 0x00000318
[Entry Point]
[Constants]
# {method} {0x000002ae4d13ec70} 'BranchyNewArray' '(Lcom/openkappa/simd/positive/ArrayWithNegatives;)[D' in 'com/openkappa/simd/positive/PositiveValues'
0x000002ae309c3ce0: mov     r10d,dword ptr [rdx+8h]
0x000002ae309c3ce4: shl     r10,3h
0x000002ae309c3ce8: cmp     r10,rax
0x000002ae309c3ceb: jne     2ae3042c200h      ;   {runtime_call ic_miss_stub}
0x000002ae309c3cf1: nop     word ptr [rax+rax+0h]
0x000002ae309c3cfc: nop
[Verified Entry Point]
0x000002ae309c3d00: mov     dword ptr [rsp+0ffffffffffff9000h],eax
0x000002ae309c3d07: push    rbp
0x000002ae309c3d08: sub     rsp,60h
0x000002ae309c3d0c: mov     rcx,2ae4d163880h  ;   {metadata(method data for 'BranchyNewArray' in 'com/openkappa/simd/positive/PositiveValues')}
0x000002ae309c3d16: mov     esi,dword ptr [rcx+0fch]
0x000002ae309c3d1f: mov     dword ptr [rcx+0fch],esi
0x000002ae309c3d25: and     esi,1ff8h
0x000002ae309c3d2b: cmp     esi,0h
0x000002ae309c3d2e: je      2ae309c3ec1h      ; - com.openkappa.simd.positive.PositiveValues::BranchyNewArray@0 (line 29)
0x000002ae309c3d34: mov     edx,dword ptr [r8+0ch]  ; implicit exception: dispatches to 0x000002ae309c3ee2
0x000002ae309c3d38: shl     rdx,3h            ;*getfield data ; - com.openkappa.simd.positive.PositiveValues::BranchyNewArray@1 (line 29)
0x000002ae309c3d3c: mov     ecx,dword ptr [r8+10h]
0x000002ae309c3d40: shl     rcx,3h            ;*getfield target ; - com.openkappa.simd.positive.PositiveValues::BranchyNewArray@6 (line 30)
0x000002ae309c3d44: mov     esi,0h
0x000002ae309c3d49: jmp     2ae309c3e27h      ; - com.openkappa.simd.positive.PositiveValues::BranchyNewArray@13 (line 31)
0x000002ae309c3d4e: nop
0x000002ae309c3d50: movsxd  rax,esi
0x000002ae309c3d53: cmp     esi,dword ptr [rdx+0ch]  ; implicit exception: dispatches to 0x000002ae309c3ee7
0x000002ae309c3d56: jnb     2ae309c3ef1h
0x000002ae309c3d5c: vmovsd  xmm0,qword ptr [rdx+rax*8+10h]  ; - com.openkappa.simd.positive.PositiveValues::BranchyNewArray@26 (line 32)
0x000002ae309c3d62: vxorpd  xmm1,xmm1,xmm1
0x000002ae309c3d66: vucomisd xmm0,xmm1
0x000002ae309c3d6a: mov     eax,1h
0x000002ae309c3d6f: jp      2ae309c3d88h
0x000002ae309c3d75: jnbe    2ae309c3d88h
0x000002ae309c3d7b: mov     eax,0h
0x000002ae309c3d80: je      2ae309c3d88h
0x000002ae309c3d86: dec     eax               ;*dcmpg ; - com.openkappa.simd.positive.PositiveValues::BranchyNewArray@28 (line 32)
0x000002ae309c3d88: cmp     eax,0h
0x000002ae309c3d8b: mov     rax,2ae4d163880h  ;   {metadata(method data for 'BranchyNewArray' in 'com/openkappa/simd/positive/PositiveValues')}
0x000002ae309c3d95: mov     rdi,158h
0x000002ae309c3d9f: jnl     2ae309c3dafh
0x000002ae309c3da5: mov     rdi,168h
0x000002ae309c3daf: mov     rbx,qword ptr [rax+rdi]
``````
;...8b
;...1c
;...38

0x000002ae309c3db3: lea     rbx,[rbx+1h]      ;...48
;...8d
;...5b
;...01

0x000002ae309c3db7: mov     qword ptr [rax+rdi],rbx  ;...48
;...89
;...1c
;...38

0x000002ae309c3dbb: jnl     2ae309c3dd5h      ;...0f
;...8d
;...14
;...00
;...00
;...00
;*ifge {reexecute=0 rethrow=0 return_oop=0}
; - com.openkappa.simd.positive.PositiveValues::BranchyNewArray@29 (line 32)

0x000002ae309c3dc1: mov     rax,2ae4d163880h  ;...48
;...b8
;...80
;...38
;...16
;...4d
;...ae
;...02
;...00
;...00
;   {metadata(method data for {method} {0x000002ae4d13ec70} 'BranchyNewArray' '(Lcom/openkappa/simd/positive/ArrayWithNegatives;)[D' in 'com/openkappa/simd/positive/PositiveValues')}
0x000002ae309c3dcb: inc     dword ptr [rax+178h]  ;...ff
;...80
;...78
;...01
;...00
;...00

0x000002ae309c3dd1: vxorpd  xmm0,xmm0,xmm0    ;...c5
;...f9
;...57
;...c0
;*goto {reexecute=0 rethrow=0 return_oop=0}
; - com.openkappa.simd.positive.PositiveValues::BranchyNewArray@33 (line 32)

0x000002ae309c3dd5: movsxd  rax,esi           ;...48
;...63
;...c6

0x000002ae309c3dd8: cmp     esi,dword ptr [rcx+0ch]  ;...3b
;...71
;...0c

0x000002ae309c3ddb: jnb     2ae309c3efah      ;...0f
;...83
;...19
;...01
;...00
;...00

0x000002ae309c3de1: vmovsd  qword ptr [rcx+rax*8+10h],xmm0
;...c5
;...fb
;...11
;...44
;...c1
;...10
;*dastore {reexecute=0 rethrow=0 return_oop=0}
; - com.openkappa.simd.positive.PositiveValues::BranchyNewArray@40 (line 32)

0x000002ae309c3de7: inc     esi               ;...ff
;...c6

0x000002ae309c3de9: mov     rax,2ae4d163880h  ;...48
;...b8
;...80
;...38
;...16
;...4d
;...ae
;...02
;...00
;...00
;   {metadata(method data for {method} {0x000002ae4d13ec70} 'BranchyNewArray' '(Lcom/openkappa/simd/positive/ArrayWithNegatives;)[D' in 'com/openkappa/simd/positive/PositiveValues')}
0x000002ae309c3df3: mov     edi,dword ptr [rax+100h]
;...8b
;...b8
;...00
;...01
;...00
;...00

;...c7
;...08

0x000002ae309c3dfc: mov     dword ptr [rax+100h],edi
;...89
;...b8
;...00
;...01
;...00
;...00

0x000002ae309c3e02: and     edi,0fff8h        ;...81
;...e7
;...f8
;...ff
;...00
;...00

0x000002ae309c3e08: cmp     edi,0h            ;...83
;...ff
;...00

0x000002ae309c3e0b: je      2ae309c3f03h      ;...0f
;...84
;...f2
;...00
;...00
;...00
; ImmutableOopMap{rcx=Oop rdx=Oop }
;*goto {reexecute=1 rethrow=0 return_oop=0}
; - com.openkappa.simd.positive.PositiveValues::BranchyNewArray@44 (line 31)

0x000002ae309c3e11: test    dword ptr [2ae24520000h],eax
;...85
;...05
;...e9
;...c1
;...b5
;...f3
;   {poll}
0x000002ae309c3e17: mov     rax,2ae4d163880h  ;...48
;...b8
;...80
;...38
;...16
;...4d
;...ae
;...02
;...00
;...00
;   {metadata(method data for {method} {0x000002ae4d13ec70} 'BranchyNewArray' '(Lcom/openkappa/simd/positive/ArrayWithNegatives;)[D' in 'com/openkappa/simd/positive/PositiveValues')}
0x000002ae309c3e21: inc     dword ptr [rax+190h]  ;...ff
;...80
;...90
;...01
;...00
;...00
;*goto {reexecute=0 rethrow=0 return_oop=0}
; - com.openkappa.simd.positive.PositiveValues::BranchyNewArray@44 (line 31)

0x000002ae309c3e27: mov     eax,dword ptr [rcx+0ch]  ;...8b
;...41
;...0c
;*arraylength {reexecute=0 rethrow=0 return_oop=0}
; - com.openkappa.simd.positive.PositiveValues::BranchyNewArray@16 (line 31)
; implicit exception: dispatches to 0x000002ae309c3f24
0x000002ae309c3e2a: cmp     esi,eax           ;...3b
;...f0

0x000002ae309c3e2c: mov     rax,2ae4d163880h  ;...48
;...b8
;...80
;...38
;...16
;...4d
;...ae
;...02
;...00
;...00
;   {metadata(method data for {method} {0x000002ae4d13ec70} 'BranchyNewArray' '(Lcom/openkappa/simd/positive/ArrayWithNegatives;)[D' in 'com/openkappa/simd/positive/PositiveValues')}
0x000002ae309c3e36: mov     rdi,148h          ;...48
;...bf
;...48
;...01
;...00
;...00
;...00
;...00
;...00
;...00

0x000002ae309c3e40: jl      2ae309c3e50h      ;...0f
;...8c
;...0a
;...00
;...00
;...00

0x000002ae309c3e46: mov     rdi,138h          ;...48
;...bf
;...38
;...01
;...00
;...00
;...00
;...00
;...00
;...00

0x000002ae309c3e50: mov     rbx,qword ptr [rax+rdi]  ;...48
;...8b
;...1c
;...38

0x000002ae309c3e54: lea     rbx,[rbx+1h]      ;...48
;...8d
;...5b
;...01

0x000002ae309c3e58: mov     qword ptr [rax+rdi],rbx  ;...48
;...89
;...1c
;...38

0x000002ae309c3e5c: jl      2ae309c3d50h      ;...0f
;...8c
;...ee
;...fe
;...ff
;...ff
;*if_icmpge {reexecute=0 rethrow=0 return_oop=0}
; - com.openkappa.simd.positive.PositiveValues::BranchyNewArray@17 (line 31)

0x000002ae309c3e62: mov     rax,rcx           ;...48
;...8b
;...c1

;...83
;...c4
;...60

0x000002ae309c3e69: pop     rbp               ;...5d

0x000002ae309c3e6a: test    dword ptr [2ae24520000h],eax
;...85
;...05
;...90
;...c1
;...b5
;...f3
;   {poll_return}
0x000002ae309c3e70: ret                       ;...c3
;*areturn {reexecute=0 rethrow=0 return_oop=0}
; - com.openkappa.simd.positive.PositiveValues::BranchyNewArray@48 (line 34)
```

I haven’t worked through the full intent of the emitted code, but it makes extensive use of the `VUCOMISD` instruction for comparisons with zero, which has lower latency but also lower throughput than `VMAXPD`. It would certainly be interesting to see how Project Panama does this. Perhaps this should just be made available as a fail-safe intrinsic, like `Arrays.mismatch`?
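The Java source for `BranchyNewArray` isn’t shown above, but its rough shape can be inferred from the bytecode annotations in the disassembly (`getfield data`, `getfield target`, `dcmpg`, `ifge`, `dastore`). The sketch below is an assumption, not the benchmark’s actual code: a branchy per-element comparison of the kind that lowers to `VUCOMISD` plus conditional jumps, next to a branch-free `Math.max` formulation that maps naturally onto `VMAXPD`.

```java
// Sketch only: inferred from the disassembly's bytecode annotations;
// the class and method shapes are assumptions, not the post's source.
public class PositiveValuesSketch {

    // Branchy form: dcmpg + ifge per element, lowered to VUCOMISD and
    // conditional jumps, which blocks autovectorisation.
    static double[] branchy(double[] data, double[] target) {
        for (int i = 0; i < target.length; ++i) {
            target[i] = data[i] < 0d ? 0d : data[i];
        }
        return target;
    }

    // Branch-free form: Math.max(x, 0) maps onto (V)MAXPD, so the loop
    // body is the same straight-line code for every element and can be
    // vectorised to process several doubles per instruction.
    static double[] branchFree(double[] data, double[] target) {
        for (int i = 0; i < target.length; ++i) {
            target[i] = Math.max(data[i], 0d);
        }
        return target;
    }
}
```

Both forms compute the same result; the difference is purely in how the comparison with zero is expressed to the compiler.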
