Base64 Encoding

Base64 encoding is the standard mechanism for converting binary data to text, and is used extensively in web technologies. You will encounter it wherever binary data such as authentication tokens or compressed BLOBs meets JSON.

There are two common Base64 encoding formats: a standard format and a URL-safe format which replaces '/' with '_' and '+' with '-', as illustrated below. Base64 output consists of ASCII characters, which require just a byte each, but each output byte encodes only six bits of input, so the output is always 33% larger than the input. Encoding a sequence of bytes as text is a simple two stage process:

  1. For each three byte sequence, shift each 6-bit subword onto an 8-bit boundary to produce four bytes
  2. For each of the four intermediate bytes, locate a character in a 64 element lookup table corresponding to the format being used.
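
Both formats are available out of the box via java.util.Base64; a quick illustration of the difference between the two alphabets (the input bytes here are chosen by me so that every 6-bit subword maps to one of the characters that differ between the formats):

  byte[] input = new byte[] {(byte) 0xFB, (byte) 0xEF, (byte) 0xFF}; // 24 bits, all four 6-bit subwords are 62 or 63
  String standard = java.util.Base64.getEncoder().encodeToString(input);    // "++//"
  String urlSafe  = java.util.Base64.getUrlEncoder().encodeToString(input); // "--__"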

This two stage process is easy to implement by hand:


    int i = 0;
    int j = 0;
    // 3 bytes in, 4 bytes out
    for (; j + 3 < out.length && i + 2 < in.length; i += 3, j += 4) {
      // map 3 bytes into the lower 24 bits of an integer
      int word = (in[i + 0] & 0xFF) << 16 | (in[i + 1] & 0xFF) << 8 | (in[i + 2] & 0xFF);
      // for each 6-bit subword, find the appropriate character in the lookup table
      out[j + 0] = ENCODING[(word >>> 18) & 0x3F];
      out[j + 1] = ENCODING[(word >>> 12) & 0x3F];
      out[j + 2] = ENCODING[(word >>> 6) & 0x3F];
      out[j + 3] = ENCODING[word & 0x3F];
    }
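
The loop above assumes in and out are byte arrays sized appropriately, along with an ENCODING lookup table which, for the standard alphabet, might look something like this sketch (the JDK encoder has its own table):

    // a sketch of the 64 element lookup table assumed by the loop above (standard alphabet);
    // the URL-safe table differs only in the last two entries
    private static final byte[] ENCODING =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
            .getBytes(java.nio.charset.StandardCharsets.US_ASCII);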

Since JDK 8 there has been java.util.Base64.Encoder, which benchmarks favourably against older library implementations. How does it work? The best way to find out is always JMH perfasm. Here's the hottest part of the encoding loop for a byte[] (JDK 10 on Ubuntu), annotated to explain what is going on.


                  ╭        0x00007f02993ad1e9: jge    0x00007f02993ad3ae  
                  │        0x00007f02993ad1ef: mov    %r13d,%edx
                  │        0x00007f02993ad1f2: xor    %r10d,%r10d
  0.01%    0.00%  │╭       0x00007f02993ad1f5: jmpq   0x00007f02993ad305
                  ││       0x00007f02993ad1fa: nopw   0x0(%rax,%rax,1)
  2.36%    2.08%  ││ ↗     0x00007f02993ad200: add    $0x4,%r10d         
  2.33%    2.11%  ││ │     0x00007f02993ad204: mov    %r8d,%esi          
  2.38%    2.17%  ││ │  ↗  0x00007f02993ad207: vmovq  %xmm0,%r11              ; overflow into FPU has occurred, move byte from SSE register just in time
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  2.34%    2.21%  ││ │  │  0x00007f02993ad20c: movzbl 0x10(%r11,%rsi,1),%r11d ; load the first input byte
  2.34%    2.21%  ││ │  │  0x00007f02993ad212: shl    $0x10,%r11d             ; shift the first input byte into the third byte of the intermediate dword (r11d)  
  2.43%    2.17%  ││ │  │  0x00007f02993ad216: mov    %esi,%r8d
  2.35%    2.34%  ││ │  │  0x00007f02993ad219: add    $0x3,%r8d          
  2.37%    2.20%  ││ │  │  0x00007f02993ad21d: movslq %esi,%rax
  2.36%    2.18%  ││ │  │  0x00007f02993ad220: mov    %r10d,%edi
  2.38%    2.69%  ││ │  │  0x00007f02993ad223: inc    %edi               
  2.38%    2.77%  ││ │  │  0x00007f02993ad225: vmovq  %xmm0,%rsi              ; floating point spill, move to rsi just in time to load the byte
  2.31%    2.71%  ││ │  │  0x00007f02993ad22a: movzbl 0x12(%rsi,%rax,1),%esi  ; load the third input byte 
  2.41%    2.77%  ││ │  │  0x00007f02993ad22f: vmovq  %xmm0,%rbp
  2.37%    2.26%  ││ │  │  0x00007f02993ad234: movzbl 0x11(%rbp,%rax,1),%eax  ; load the second input byte
  2.34%    2.19%  ││ │  │  0x00007f02993ad239: shl    $0x8,%eax               ; shift the second input byte into the second byte of r11d
  2.28%    2.17%  ││ │  │  0x00007f02993ad23c: or     %eax,%r11d              ; combine the second byte with the intermediate dword (disjoint union, byte 2 now in second byte of dword)
  2.39%    2.14%  ││ │  │  0x00007f02993ad23f: or     %esi,%r11d              ; combine the third byte with the intermediate word (disjoint union, byte 3 now in lower order bits of dword)
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  2.28%    2.72%  ││ │  │  0x00007f02993ad242: mov    %r11d,%esi              ; encoding starts here, copies the intermediate word in esi to break dependency on r11d
  2.29%    2.67%  ││ │  │  0x00007f02993ad245: shr    $0x12,%esi              ; shift the copy of the dword right by 18 bits ($0x12 is 18, not 12)
  2.27%    2.78%  ││ │  │  0x00007f02993ad248: and    $0x3f,%esi              ; mask to get 6 bits (one position in lookup table computed and stored in esi now)   
  2.37%    2.79%  ││ │  │  0x00007f02993ad24b: movzwl 0x10(%r14,%rsi,2),%eax  ; load the character encoding the first byte
  2.54%    2.29%  ││ │  │  0x00007f02993ad251: cmp    %edx,%r10d
                  ││╭│  │  0x00007f02993ad254: jae    0x00007f02993ad469
  2.29%    2.22%  ││││  │  0x00007f02993ad25a: mov    %al,0x10(%r9,%r10,1)  
  2.43%    2.29%  ││││  │  0x00007f02993ad25f: mov    %r10d,%eax
  2.33%    2.16%  ││││  │  0x00007f02993ad262: add    $0x3,%eax
  2.34%    2.14%  ││││  │  0x00007f02993ad265: mov    %r11d,%esi
  2.37%    2.24%  ││││  │  0x00007f02993ad268: shr    $0xc,%esi               ; shift a copy of the dword right by 12 bits  
  2.36%    2.26%  ││││  │  0x00007f02993ad26b: and    $0x3f,%esi              ; mask it to get 6 bits
  2.26%    2.83%  ││││  │  0x00007f02993ad26e: movzwl 0x10(%r14,%rsi,2),%esi  ; load the character encoding for the second output byte
  2.37%    2.72%  ││││  │  0x00007f02993ad274: cmp    %edx,%eax
                  ││││  │  0x00007f02993ad276: jae    0x00007f02993ad4bd
  2.27%    2.80%  ││││  │  0x00007f02993ad27c: mov    %r11d,%edi              ; copy the intermediate word
  2.43%    2.81%  ││││  │  0x00007f02993ad27f: and    $0x3f,%edi              ; mask the lower 6 bits
  2.29%    2.12%  ││││  │  0x00007f02993ad282: movzwl 0x10(%r14,%rdi,2),%eax  ; load the character encoding for the fourth output byte
  2.36%    2.22%  ││││  │  0x00007f02993ad288: shr    $0x6,%r11d              ; shift the word 6 bits right
  2.31%    2.15%  ││││  │  0x00007f02993ad28c: and    $0x3f,%r11d             ; mask the lower 6 bits
  2.40%    2.26%  ││││  │  0x00007f02993ad290: movzwl 0x10(%r14,%r11,2),%r11d ; load the character encoding for third output byte 

The first thing to notice is there are several XMM registers used, but no vectorisation: these are floating point spills and their presence indicates register pressure. Hotspot stores variables in XMM registers to avoid storing the variable somewhere costlier to fetch from, but instructions for manipulating bytes and integers can’t take an XMM register as an operand, so the variable is always moved to an appropriate register just in time. Note that in the perfasm output, the target of the move from the XMM register is always used as an operand immediately after the move.

The middle section above is spent collecting the three input bytes into an integer; roughly one third of the method's total runtime goes here. Parts of it can execute in parallel: the three input byte loads are independent, but combining them is sequential. The intermediate integer is then copied several times before the encoding lookups, so the four lookups can happen independently; roughly 45% of the time is spent on this encoding section.

Can this code be vectorised? Yes: Wojciech Mula has a blog post on this topic, and wrote a paper with Daniel Lemire, both of which I read recently. Their approach is roughly ten times faster than their scalar baseline, which is virtually identical to the JDK implementation.

For AVX2, their algorithm starts by loading 32 bytes into a YMM register, making sure that the 24 bytes of interest land in the middle of the register: the 4 bytes at either end are rubbish and will be ignored. This is achieved by loading at an offset of -4, which is quite problematic in a safe language like Java. The 24 bytes in the middle of the register are then permuted so that a permutation (with duplication) of each 3 byte sequence is contained within a 4 byte lane.

load 24 bytes centred in a 256-bit register
|**-**-**-**|A1-A2-A3-B1|B2-B3-C1-C2|C3-D1-D2-D3|E1-E2-E3-F1|F2-F3-G1-G2|G3-H1-H2-H3|**-**-**-**|
permute the bytes of the LHS
|10-11- 9-10| 7- 8- 6- 7| 4- 5- 3- 4| 1- 2- 0- 1|
|**-**-**-**|A1-A2-A3-B1|B2-B3-C1-C2|C3-D1-D2-D3|E1-E2-E3-F1|F2-F3-G1-G2|G3-H1-H2-H3|**-**-**-**|
|A2-A3-A1-A2|B2-B3-B1-B2|C2-C3-C1-C2|D2-D3-D1-D2|E1-E2-E3-F1|F2-F3-G1-G2|G3-H1-H2-H3|**-**-**-**|
permute the bytes of the RHS 
                                                |14-15-13-14|11-12-10-11| 8- 9- 7- 8| 5- 6- 4- 5|
|A2-A3-A1-A2|B2-B3-B1-B2|C2-C3-C1-C2|D2-D3-D1-D2|E1-E2-E3-F1|F2-F3-G1-G2|G3-H1-H2-H3|**-**-**-**|
|A2-A3-A1-A2|B2-B3-B1-B2|C2-C3-C1-C2|D2-D3-D1-D2|E2-E3-E1-E2|F2-F3-F1-F2|G2-G3-G1-G2|H2-H3-H1-H2|

It looks like the ordering is wrong and the second byte is always duplicated, but this is intentional. Now that the bytes are in the correct integer lanes, the four 6-bit subwords need to be extracted by masking and shifting into the right positions before they can be used as indexes into the encoding lookup table.

Each 24-bit sequence has undergone the following transformation, where the letter indicates the 6-bit subword and the number indicates the bit position, shown in 8 bit lanes:

|         PREVIOUS      |           A1          |           A2          |          A3           |
|**-**-**-**-**-**-**-**|a8-a7-a6-a5-a4-a3-b2-b1|b8-b7-b6-b5-c4-c3-c2-c1|c8-c7-d6-d5-d4-d3-d2-d1|
|           A2          |           A3          |           A1          |          A2           |
|b8-b7-b6-b5-c4-c3-c2-c1|c8-c7-d6-d5-d4-d3-d2-d1|a8-a7-a6-a5-a4-a3-b2-b1|b8-b7-b6-b5-c4-c3-c2-c1|

Based on the input integer and its shuffled form above, the target structure of each integer prior to encoding is:

|**-**-**-**-**-**-**-**|a8-a7-a6-a5-a4-a3-b2-b1|b8-b7-b6-b5-c4-c3-c2-c1|c8-c7-d6-d5-d4-d3-d2-d1|
|b8-b7-b6-b5-c4-c3-c2-c1|c8-c7-d6-d5-d4-d3-d2-d1|a8-a7-a6-a5-a4-a3-b2-b1|b8-b7-b6-b5-c4-c3-c2-c1|
|00-00-d6-d5-d4-d3-d2-d1|00-00-c4-c3-c2-c1-c8-c7|00-00-b2-b1-b8-b7-b6-b5|00-00-a8-a7-a6-a5-a4-a3|

So sequence c needs to move right by 6 bits and sequence a needs to move right by 10 bits. AVX2 can't do both of these shifts at once, so the following operations could be performed:

mask a
| 0- 0- 0- 0- 0- 0- 0- 0| 0- 0- 0- 0- 0- 0- 0- 0| 1- 1- 1- 1- 1- 1- 0- 0| 0- 0- 0- 0- 0- 0- 0- 0|
|b8-b7-b6-b5-c4-c3-c2-c1|c8-c7-d6-d5-d4-d3-d2-d1|a8-a7-a6-a5-a4-a3-b2-b1|b8-b7-b6-b5-c4-c3-c2-c1|
|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|a8-a7-a6-a5-a4-a3-00-00|00-00-00-00-00-00-00-00|
shift right 10 bits
|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|00-00-a8-a7-a6-a5-a4-a3|
mask c
| 0- 0- 0- 0- 1- 1- 1- 1| 1- 1- 0- 0- 0- 0- 0- 0| 0- 0- 0- 0- 0- 0- 0- 0| 0- 0- 0- 0- 0- 0- 0- 0|
|b8-b7-b6-b5-c4-c3-c2-c1|c8-c7-d6-d5-d4-d3-d2-d1|a8-a7-a6-a5-a4-a3-b2-b1|b8-b7-b6-b5-c4-c3-c2-c1|
|00-00-00-00-c4-c3-c2-c1|c8-c7-00-00-00-00-00-00|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|
shift right 6 bits
|00-00-00-00-00-00-00-00|00-00-c4-c3-c2-c1-c8-c7|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|
union
|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|00-00-a8-a7-a6-a5-a4-a3|
|00-00-00-00-00-00-00-00|00-00-c4-c3-c2-c1-c8-c7|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|
|00-00-00-00-00-00-00-00|00-00-c4-c3-c2-c1-c8-c7|00-00-00-00-00-00-00-00|00-00-a8-a7-a6-a5-a4-a3|

However, that's two masks, two shifts and a union, and it needs several registers for temporary results. A single mask can be created by broadcasting 0x0fc0fc00, and the two independent 16 bit shifts can be emulated in a single instruction by a special multiplication: the semantic snowflake vpmulhuw, which does an unsigned 16-bit multiplication and stores the upper 16 bits of each product.

Sequence b needs to shift left by 4 bits, and sequence d needs to shift left by 8 bits. Rather than use two separate masks and shifts, a single vpmullw, a multiplication outputting the lower 16 bits of each product, achieves both shifts after masking with 0x003f03f0. This result is united with the result of the first multiplication to get the correct output:

|00-00-d6-d5-d4-d3-d2-d1|00-00-c4-c3-c2-c1-c8-c7|00-00-b2-b1-b8-b7-b6-b5|00-00-a8-a7-a6-a5-a4-a3|

Now for the encoding itself! One approach would be to dump the content of the vector into a byte array and use scalar code to do the lookups just like the JDK implementation does, but this stage accounted for 45% of the execution time in the scalar implementation, so the encoding needs to use vector instructions too.

If base 64 used only characters in the range [0, 64), the lookup would be expressible as a closed permutation of the indices, albeit a permutation too large for AVX2. Performing the lookup as a permutation isn't possible because most of the characters in the Base64 alphabet have ASCII codes of 64 or above. However, the encoding character can be computed by adding an offset to the index, and the alphabet consists of only five contiguous ranges of characters. Within each range the offset is the same, so there are only five possible offsets: if a small value identifying the offset can be computed quickly from each 6-bit index, an offset vector can be looked up and simply added to the index vector. Then it's just a question of finding and tuning a reduction from index to offset key.

This should be obvious from the table below.

Position Character ASCII Offset Reduced-Nibble
0 A 65 65 13
1 B 66 65 13
2 C 67 65 13
3 D 68 65 13
4 E 69 65 13
5 F 70 65 13
6 G 71 65 13
7 H 72 65 13
8 I 73 65 13
9 J 74 65 13
10 K 75 65 13
11 L 76 65 13
12 M 77 65 13
13 N 78 65 13
14 O 79 65 13
15 P 80 65 13
16 Q 81 65 13
17 R 82 65 13
18 S 83 65 13
19 T 84 65 13
20 U 85 65 13
21 V 86 65 13
22 W 87 65 13
23 X 88 65 13
24 Y 89 65 13
25 Z 90 65 13
26 a 97 71 0
27 b 98 71 0
28 c 99 71 0
29 d 100 71 0
30 e 101 71 0
31 f 102 71 0
32 g 103 71 0
33 h 104 71 0
34 i 105 71 0
35 j 106 71 0
36 k 107 71 0
37 l 108 71 0
38 m 109 71 0
39 n 110 71 0
40 o 111 71 0
41 p 112 71 0
42 q 113 71 0
43 r 114 71 0
44 s 115 71 0
45 t 116 71 0
46 u 117 71 0
47 v 118 71 0
48 w 119 71 0
49 x 120 71 0
50 y 121 71 0
51 z 122 71 0
52 0 48 -4 1
53 1 49 -4 2
54 2 50 -4 3
55 3 51 -4 4
56 4 52 -4 5
57 5 53 -4 6
58 6 54 -4 7
59 7 55 -4 8
60 8 56 -4 9
61 9 57 -4 10
62 + 43 -19 11
63 / 47 -16 12

The offset column is the value of the character minus the index, and the reduced nibble is a number computed by an efficient, if inelegant, sequence of vector operations described in the paper. Given that there are only five valid offsets, they could be specified by a three-bit value, and are overspecified by a nibble, but a bit of redundancy allows a faster computation. The mapping is as follows: upper case letters map to 13, lower case letters to zero, each digit i to i + 1, and '+' and '/' to 11 and 12 respectively. These nibbles are then used as the input to a permutation using vpshufb to produce the appropriate offset to add to each index.

             
plus ------------------------------------------------------\         /------------------ forward slash
digits  -----------\***********************************\    \       /   /--------------- upper case
lower case -----\   \***********************************\    \     /   /   /***/-------- undefined 
Reduced Nibble [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10,  11,  12, 13, 14, 15]
Offset         [71, -4, -4, -4, -4, -4, -4, -4, -4, -4, -4, -19, -16, 65,  0,  0]

A call to vpshufb with a vector of reduced nibbles and the permutation above as input produces an offset vector which is added to the indexes to encode a vector of 6-bit values.
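
To make the offset trick concrete, here is a scalar sketch in Java of the same idea. This is not the vectorised nibble computation from the paper, just the lookup-plus-add structure it implements; OFFSETS mirrors the permutation above:

  // offsets indexed by reduced nibble, mirroring the permutation vector above
  private static final int[] OFFSETS = {
      71, -4, -4, -4, -4, -4, -4, -4, -4, -4, -4, -19, -16, 65, 0, 0
  };

  // scalar stand-in for the paper's vectorised reduction from index to nibble
  private static int reducedNibble(int index) {
    if (index < 26) return 13;          // upper case
    if (index < 52) return 0;           // lower case
    if (index < 62) return index - 51;  // digits: 52 -> 1, ..., 61 -> 10
    return index == 62 ? 11 : 12;       // '+' and '/'
  }

  private static char encode(int index) {
    return (char) (index + OFFSETS[reducedNibble(index)]);
  }

In the vectorised code, the reduced nibble is computed for all the indexes at once and the OFFSETS lookup is precisely the vpshufb call.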

Would it be possible to implement a permutation like this in the Vector API? I expect this will be too complex to express, precisely because the algorithm works around, and therefore embraces, the fact that vpshufb performs two independent 128-bit permutations rather than a single 256-bit permutation. It could be achieved with two SSE 128-bit loads and permutes, but loading 256-bit vectors from pairs of 128-bit vectors is convoluted as things stand.

For the extraction step, I doubt the semantics of _mm256_mulhi_epi16 or _mm256_mullo_epi16 will ever be exposed to Java programmers, but it is possible to take the slow path and perform independent shifts. It also happens that the calculation of the offset key relies on unsigned 8-bit arithmetic, which does not exist in Java, though there are simpler but slower ways to calculate it. AVX2 is weird, with an abundance of unexpected capabilities alongside screaming feature gaps, and all the AVX2 code I have read is teeming with ingenious hacks. I can imagine language designers being reluctant to enshrine these peculiarities in a higher level language.

The real question is why bother? The fact that we use text so often moves the goalposts from copying memory to all of the complexity above. The fastest base 64 encoding is the one you don’t do.

I highly recommend reading Wojciech Mula’s blog if you are interested in what’s possible with vectorisation.

John Rose wrote an interesting response to the paragraphs about the Vector API in this post here.

Limiting Factors in a Dot Product Calculation

The dot product is ubiquitous in computing. The simple calculation is used heavily in neural networks for perceptron activation and again in gradient descent algorithms for error backpropagation. It is one of the building blocks of linear regression. Cosine similarity in document search is yet another dot product.

These use cases are more often implemented in C/C++ than in JVM languages, for reasons of efficiency, but what are the constraints on the calculation's performance? The combination of its computational simplicity and its streaming nature means the limiting factor in efficient code should be memory bandwidth. This is a good opportunity to look at the raw performance that will be made available by the vector API when it's released.

The code to calculate a dot product has been easy to write in Java since Java 9, using the Math.fma intrinsic.

  public float vanilla() {
    float sum = 0f;
    for (int i = 0; i < size; ++i) {
      sum = Math.fma(left[i], right[i], sum);
    }
    return sum;
  }

Despite its simplicity, this code is incredibly inefficient precisely because it's written in Java. Java is a language which prizes portability, and this sometimes comes at the cost of performance. The only way to make this routine produce the same number given the same input, no matter what operating system or instruction sets are available, is to do the operations in the same order, which means no unrolling or vectorisation. For a web application, this is a good trade off, but for data analytics it is not.
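
The order matters because floating point addition is not associative, and reassociating a sum is exactly what unrolling and vectorising do. A tiny example, with values chosen by me to make the rounding obvious:

  float a = 1e8f;
  float b = 1f;
  System.out.println((a + b) - a); // 0.0: a + b rounds back to a, because adjacent floats near 1e8 are 8 apart
  System.out.println((a - a) + b); // 1.0: the same values, bracketed differently, give a different answer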

An estimate of intensity, assuming a constant processor frequency and two floating point operations (flops) per FMA, shows that the intensity is constant but very low at 0.67 flops/cycle. Constant intensity as a function of array size is interesting because it indicates that the performance is insensitive to the cache hierarchy: the limit is the CPU. Daniel Lemire recently made the same observation with a benchmark written in C, with fast-math compiler optimisations disabled.

The JLS’s view on floating point arithmetic is the true limiting factor here. Assuming you really care about dot product performance, the best you can do to opt out is to unroll the loop and get slightly higher throughput.

  public float unrolled() {
    float s0 = 0f;
    float s1 = 0f;
    float s2 = 0f;
    float s3 = 0f;
    float s4 = 0f;
    float s5 = 0f;
    float s6 = 0f;
    float s7 = 0f;
    for (int i = 0; i < size; i += 8) {
      s0 = Math.fma(left[i + 0],  right[i + 0], s0);
      s1 = Math.fma(left[i + 1],  right[i + 1], s1);
      s2 = Math.fma(left[i + 2],  right[i + 2], s2);
      s3 = Math.fma(left[i + 3],  right[i + 3], s3);
      s4 = Math.fma(left[i + 4],  right[i + 4], s4);
      s5 = Math.fma(left[i + 5],  right[i + 5], s5);
      s6 = Math.fma(left[i + 6],  right[i + 6], s6);
      s7 = Math.fma(left[i + 7],  right[i + 7], s7);
    }
    return s0 + s1 + s2 + s3 + s4 + s5 + s6 + s7;
  }

The intensity is about 4x better, but still constant. My Intel Skylake processor is capable of 32 flops/cycle, so this code is clearly still not very efficient, but it’s actually the best you can do with any released version of OpenJDK at the time of writing.

The Vector API

I have been keeping an eye on the Vector API incubating in Project Panama for some time, and have only recently got round to kicking the tires. I wrote some benchmarks earlier in the year but ran into, as one should expect of a project in active development, bugs in FMA and vector box elimination. This limited the value I would get from writing about the benchmarks. These bugs have been fixed for a long time now, and you can start to see how good this API is going to be.

Here's a simple implementation which wouldn't be legal for C2 (or Graal, for that matter) to generate from the dot product loop. It relies on an accumulator vector, into which the elementwise product of the next eight elements is accumulated with an FMA at each step of the loop.

  public float vector() {
    var sum = YMM_FLOAT.zero();
    for (int i = 0; i < size; i += YMM_FLOAT.length()) {
      var l = YMM_FLOAT.fromArray(left, i);
      var r = YMM_FLOAT.fromArray(right, i);
      sum = l.fma(r, sum);
    }
    return sum.addAll();
  }
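
YMM_FLOAT isn't defined in the snippets in this post; it can be created from a species in the same way as the YMM_INT example later in this document. A sketch, assuming the FloatVector API mirrors the IntVector one:

  public static final FloatVector.FloatSpecies<Shapes.S256Bit> YMM_FLOAT =
      (FloatVector.FloatSpecies<Shapes.S256Bit>) Vector.species(float.class, Shapes.S_256_BIT);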

This loop can be unrolled, but it seems that this must be done manually for the sake of stability. The unroll below uses four accumulators and results in a huge boost in throughput.

  private float vectorUnrolled() {
    var sum1 = YMM_FLOAT.zero();
    var sum2 = YMM_FLOAT.zero();
    var sum3 = YMM_FLOAT.zero();
    var sum4 = YMM_FLOAT.zero();
    int width = YMM_FLOAT.length();
    for (int i = 0; i < size; i += width * 4) {
      sum1 = YMM_FLOAT.fromArray(left, i).fma(YMM_FLOAT.fromArray(right, i), sum1);
      sum2 = YMM_FLOAT.fromArray(left, i + width).fma(YMM_FLOAT.fromArray(right, i + width), sum2);
      sum3 = YMM_FLOAT.fromArray(left, i + width * 2).fma(YMM_FLOAT.fromArray(right, i + width * 2), sum3);
      sum4 = YMM_FLOAT.fromArray(left, i + width * 3).fma(YMM_FLOAT.fromArray(right, i + width * 3), sum4);
    }
    return sum1.addAll() + sum2.addAll() + sum3.addAll() + sum4.addAll();
  }

This plot doesn’t quite do justice to how large the difference is and you could be forgiven for thinking the performance converges. In fact, presenting the data like this is a great way to mislead people! The absolute difference narrows, but the relative performance is more or less constant. Looking at intensity gives a much better picture and is size invariant (until memory bandwidth is saturated).

The first thing to notice is that the intensity gets nowhere near 32 flops/cycle, and that’s because my chip can’t load data fast enough to keep the two FMA ports busy. Skylake chips can do two loads per cycle, which is enough for one FMA between two vectors and the accumulator. Since the arrays are effectively streamed, there is no chance to reuse any loads, so the absolute maximum intensity is 50% capacity, or just 16 flops/cycle.

In the unrolled vector code, the intensity hits 12 flops/cycle just before 4096 elements. 4096 is a special number because 2 * 4096 * 4 = 32kB is the capacity of L1 cache. The peak followed by a rapid decrease suggests that the code is fast enough to be hitting memory bandwidth: if L1 were larger or L2 were faster, the intensity could be sustained. This is great, and the performance counters available with -prof perfnorm corroborate it.

In the vanilla and unrolled scalar loops, the cycles per instruction (CPI) reaches its maximum long before the arrays breach L1 cache. The latency of an instruction depends on where its operands come from, increasing the further from L1 cache the data has to travel. If CPI is the same for arrays either side of the magical 4096 element threshold, then memory cannot be the limiting factor. The unrolled vector loop, by contrast, shows a very sharp increase, suggesting a strong dependency on load speed. Similarly, L1-dcache-load-misses increase sharply once the arrays are no longer L1 resident (predictably), correlated with a drop in intensity only in the unrolled vector implementation. It's short lived, but the unrolled vector code, albeit with bounds checks disabled, is efficient enough for the CPU not to be the bottleneck.

Take a look at my benchmarks and raw data. The JDK used was built from the vectorIntrinsics branch of the Project Panama OpenJDK fork, run with JMH 1.20 on Ubuntu 16.04 LTS, on a 4 core i7-6700HQ processor.

Vectorised Algorithms in Java

There has been a Cambrian explosion of JVM data technologies in recent years. It’s all very exciting, but is the JVM really competitive with C in this area? I would argue that there is a reason Apache Arrow is polyglot, and it’s not just interoperability with Python. To pick on one project impressive enough to be thriving after seven years, if you’ve actually used Apache Spark you will be aware that it looks fastest next to its predecessor, MapReduce. Big data is a lot like teenage sex: everybody talks about it, nobody really knows how to do it, and everyone keeps their embarrassing stories to themselves. In games of incomplete information, it’s possible to overestimate the competence of others: nobody opens up about how slow their Spark jobs really are because there’s a risk of looking stupid.

If it can be accepted that Spark is inefficient, the question becomes: is Spark fundamentally inefficient? Flare provides a drop-in replacement for Spark's backend, replacing JIT compiled code with highly efficient native code and yielding order of magnitude improvements in job throughput. Some of Flare's gains come from generating specialised code, but the rest comes from just generating better native code than C2 does. If Flare validates Spark's execution model, perhaps it raises questions about the suitability of the JVM for high throughput data processing.

I think this will change radically in the coming years, and the most important reason is the advent of explicit support for SIMD provided by the vector API, which is currently incubating in Project Panama. Once the vector API is complete, I conjecture that projects like Spark will be able to profit enormously from it. This post takes a look at the API in its current state and ignores performance.

Why Vectorisation?

Assuming a flat processor frequency, throughput is improved by a combination of executing more instructions per cycle (pipelining and superscalar execution) and processing multiple data items per instruction (SIMD). SIMD instruction sets are provided by Intel as the various generations of SSE and AVX. If throughput is the only goal, maximising SIMD may even be worth a reduction in frequency, which can happen on Intel chips when using AVX. Vectorisation is the use of SIMD instructions to increase throughput.

Analytical workloads are particularly suitable for vectorisation, especially over columnar data, because they typically involve operations consuming the entire range of a few numerical attributes of a data set. Vectorised analytical processing with filters is explicitly supported by vector masks, and vectorisation is also profitable for operations on indices typically performed for filtering prior to calculations. I don’t actually need to make a strong case for the impact of vectorisation on analytical workloads: just read the work of top researchers like Daniel Abadi and Daniel Lemire.

Vectorisation in the JVM

C2 provides quite a lot of autovectorisation, which works very well sometimes, but the support is limited and brittle. I have written about this several times. Because AVX can reduce the processor frequency, it’s not always profitable to vectorise, so compilers employ cost models to decide when they should do so. Such cost models require platform specific calibration, and sometimes C2 can get it wrong. Sometimes, specifically in the case of floating point operations, using SIMD conflicts with the JLS, and the code C2 generates can be quite inefficient. In general, data parallel code can be better optimised by C compilers, such as GCC, than C2 because there are fewer constraints, and there is a larger budget for analysis at compile time. This all makes having intrinsics very appealing, and as a user I would like to be able to:

  1. Bypass JLS floating point constraints.
  2. Bypass cost model based decisions.
  3. Avoid JNI at all costs.
  4. Use a modern “object-functional” style. SIMD intrinsics in C are painful.

There is another attempt to provide SIMD intrinsics to JVM users via LMS, a framework for writing programs which write programs, designed by Tiark Rompf (who is also behind Flare). This work is very promising (I have written about it before), but it uses JNI. It’s only at the prototype stage, but currently the intrinsics are auto-generated from XML definitions, which leads to a one-to-one mapping to the intrinsics in immintrin.h, yielding a similar programming experience. This could likely be improved a lot, but the reliance on JNI is fundamental, albeit with minimal boundary crossing.

I am quite excited by the vector API in Project Panama because it looks like it will meet all of these requirements, at least to some extent. It remains to be seen quite how far the implementors will go in the direction of associative floating point arithmetic, but it has to opt out of JLS floating point semantics to some extent, which I think is progressive.

The Vector API

Disclaimer: Everything below is based on my experience with a recent build of the experimental code in the Project Panama fork of OpenJDK. I am not affiliated with the design or implementation of this API, may not be using it properly, and it may change according to its designers’ will before it is released!

To understand the vector API you need to know that there are different register widths and different SIMD instruction sets. Because of my area of work, and because 99% of the server market is Intel, I am only interested in AVX, but ARM have their own implementations with different maximum register sizes, which presumably need to be handled by a JVM vector API. On Intel CPUs, SSE instruction sets use up to 128 bit registers (xmm, four ints), AVX and AVX2 use up to 256 bit registers (ymm, eight ints), and AVX-512 uses up to 512 bit registers (zmm, sixteen ints).

The instruction sets are typed, and instructions designed to operate on packed doubles can’t operate on packed ints without explicit casting. This is modeled by the interface Vector<Shape>, parametrised by the Shape interface which models the register width.

The type of the vector elements is modeled by abstract element type specific classes such as IntVector. At the leaves of the hierarchy are the concrete classes specialised to both element type and register width, such as IntVector256, which extends IntVector&lt;Shapes.S256Bit&gt;.

Since EJB, factory has been a dirty word, which might be why the word species is used in this API. To create an IntVector&lt;Shapes.S256Bit&gt;, you can create the factory/species as follows:

public static final IntVector.IntSpecies<Shapes.S256Bit> YMM_INT = 
          (IntVector.IntSpecies<Shapes.S256Bit>) Vector.species(int.class, Shapes.S_256_BIT);

There are now various ways to create a vector from the species, which all have their use cases. First, you can load vectors from arrays: imagine you want to calculate the bitwise intersection of two int[]s. This can be written quite cleanly, without any shape/register information.


public static int[] intersect(int[] left, int[] right) {
    assert left.length == right.length;
    int[] result = new int[left.length];
    for (int i = 0; i < left.length; i += YMM_INT.length()) {
      YMM_INT.fromArray(left, i)
             .and(YMM_INT.fromArray(right, i))
             .intoArray(result, i);
    }
    return result;
}

A common pattern in vectorised code is to broadcast a variable into a vector, for instance, to facilitate the multiplication of a vector by a scalar.

IntVector<Shapes.S256Bit> multiplier = YMM_INT.broadcast(x);

Or to create a vector from some scalars, for instance in a lookup table.

IntVector<Shapes.S256Bit> vector = YMM_INT.scalars(0, 1, 2, 3, 4, 5, 6, 7);

A zero vector can be created from a species:

IntVector<Shapes.S256Bit> zero = YMM_INT.zero();

The big split in the class hierarchy is between integral and floating point types. Integral types have meaningful bitwise operations (I am looking forward to trying to write a vectorised population count algorithm), which are absent from FloatVector and DoubleVector, and there is no concept of fused-multiply-add for integral types, so there is obviously no IntVector.fma. The common subset of operations is arithmetic, casting and loading/storing operations.

I generally like the API a lot: it feels similar to programming with streams, but on the other hand it isn't too far removed from traditional intrinsics. Below is an implementation of a fast matrix multiplication written in C with AVX2 intrinsics, followed by the same code written with the vector API:


#include <immintrin.h>

static void mmul_tiled_avx_unrolled(const int n, const float *left, const float *right, float *result) {
    const int block_width = n >= 256 ? 512 : 256;
    const int block_height = n >= 512 ? 8 : n >= 256 ? 16 : 32;
    for (int column_offset = 0; column_offset < n; column_offset += block_width) {
        for (int row_offset = 0; row_offset < n; row_offset += block_height) {
            for (int i = 0; i < n; ++i) {
                for (int j = column_offset; j < column_offset + block_width && j < n; j += 64) {
                    __m256 sum1 = _mm256_load_ps(result + i * n + j);
                    __m256 sum2 = _mm256_load_ps(result + i * n + j + 8);
                    __m256 sum3 = _mm256_load_ps(result + i * n + j + 16);
                    __m256 sum4 = _mm256_load_ps(result + i * n + j + 24);
                    __m256 sum5 = _mm256_load_ps(result + i * n + j + 32);
                    __m256 sum6 = _mm256_load_ps(result + i * n + j + 40);
                    __m256 sum7 = _mm256_load_ps(result + i * n + j + 48);
                    __m256 sum8 = _mm256_load_ps(result + i * n + j + 56);
                    for (int k = row_offset; k < row_offset + block_height && k < n; ++k) {
                        __m256 multiplier = _mm256_set1_ps(left[i * n + k]);
                        sum1 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j), sum1);
                        sum2 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 8), sum2);
                        sum3 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 16), sum3);
                        sum4 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 24), sum4);
                        sum5 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 32), sum5);
                        sum6 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 40), sum6);
                        sum7 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 48), sum7);
                        sum8 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 56), sum8);
                    }
                    _mm256_store_ps(result + i * n + j, sum1);
                    _mm256_store_ps(result + i * n + j + 8, sum2);
                    _mm256_store_ps(result + i * n + j + 16, sum3);
                    _mm256_store_ps(result + i * n + j + 24, sum4);
                    _mm256_store_ps(result + i * n + j + 32, sum5);
                    _mm256_store_ps(result + i * n + j + 40, sum6);
                    _mm256_store_ps(result + i * n + j + 48, sum7);
                    _mm256_store_ps(result + i * n + j + 56, sum8);
                }
            }
        }
    }
}


  private static void mmul(int n, float[] left, float[] right, float[] result) {
    int blockWidth = n >= 256 ? 512 : 256;
    int blockHeight = n >= 512 ? 8 : n >= 256 ? 16 : 32;
    for (int columnOffset = 0; columnOffset < n; columnOffset += blockWidth) {
      for (int rowOffset = 0; rowOffset < n; rowOffset += blockHeight) {
        for (int i = 0; i < n; ++i) {
          for (int j = columnOffset; j < columnOffset + blockWidth && j < n; j += 64) {
            var sum1 = YMM_FLOAT.fromArray(result, i * n + j);
            var sum2 = YMM_FLOAT.fromArray(result, i * n + j + 8);
            var sum3 = YMM_FLOAT.fromArray(result, i * n + j + 16);
            var sum4 = YMM_FLOAT.fromArray(result, i * n + j + 24);
            var sum5 = YMM_FLOAT.fromArray(result, i * n + j + 32);
            var sum6 = YMM_FLOAT.fromArray(result, i * n + j + 40);
            var sum7 = YMM_FLOAT.fromArray(result, i * n + j + 48);
            var sum8 = YMM_FLOAT.fromArray(result, i * n + j + 56);
            for (int k = rowOffset; k < rowOffset + blockHeight && k < n; ++k) {
              var multiplier = YMM_FLOAT.broadcast(left[i * n + k]);
              sum1 = multiplier.fma(YMM_FLOAT.fromArray(right, k * n + j), sum1);
              sum2 = multiplier.fma(YMM_FLOAT.fromArray(right, k * n + j + 8), sum2);
              sum3 = multiplier.fma(YMM_FLOAT.fromArray(right, k * n + j + 16), sum3);
              sum4 = multiplier.fma(YMM_FLOAT.fromArray(right, k * n + j + 24), sum4);
              sum5 = multiplier.fma(YMM_FLOAT.fromArray(right, k * n + j + 32), sum5);
              sum6 = multiplier.fma(YMM_FLOAT.fromArray(right, k * n + j + 40), sum6);
              sum7 = multiplier.fma(YMM_FLOAT.fromArray(right, k * n + j + 48), sum7);
              sum8 = multiplier.fma(YMM_FLOAT.fromArray(right, k * n + j + 56), sum8);
            }
            sum1.intoArray(result, i * n + j);
            sum2.intoArray(result, i * n + j + 8);
            sum3.intoArray(result, i * n + j + 16);
            sum4.intoArray(result, i * n + j + 24);
            sum5.intoArray(result, i * n + j + 32);
            sum6.intoArray(result, i * n + j + 40);
            sum7.intoArray(result, i * n + j + 48);
            sum8.intoArray(result, i * n + j + 56);
          }
        }
      }
    }
  }

They just aren't that different, and it's easy to translate between the two. I wouldn't expect it to be fast yet, though. I have no idea of the scope of the work involved in implementing all of the C2 intrinsics needed to make this possible, but I assume it's vast. The class jdk.incubator.vector.VectorIntrinsics seems to contain all of the intrinsics implemented so far, and it doesn't contain every operation used in my matrix multiplication code. There is also the question of value types and vector box elimination. I will probably look at this again in the future when more of the JIT compiler work has been done, but I'm starting to get very excited about the possibility of much faster JVM based data processing.

I have written various benchmarks for useful analytical subroutines using the Vector API, available on GitHub.

Garbage Collector Code Artifacts: Card Marking

In the JVM, lots of evidence of garbage collection mechanics can be seen from JIT compiler output. This may be obvious if you think of garbage collection as a task of book-keeping: the various auxiliary data structures used to track inter-region or inter-generational references, relied on for faster marking, need to be kept up to date somehow. These data structures need maintenance, and this isn't something you control in application code: the maintenance aspect must be instrumented somehow. If you profile your application's disassembly, you can find artifacts of the various garbage collectors, and these snippets of code can help you understand the throughput tradeoffs of each collector. You can also just read the documentation.

A simple benchmark to illustrate this would compare the store of a primitive int and a boxed Integer. It may not be surprising that the classes below can be JIT compiled in very different ways, and that the real difference depends on the selected garbage collector.

public class IntAcceptor {
  private int value;

  public void setValue(int value) {
    this.value = value;
  }
}

public class IntegerAcceptor {
  private Integer value;

  public void setValue(Integer value) {
    this.value = value;
  }
}

For instance, the simplest garbage collector, used mostly by specialist applications betting against garbage collection actually happening, is enabled by -XX:+UseSerialGC. If you benchmark throughput for storing these values, you will observe that storing ints is cheaper than storing Integers.

It’s difficult to measure this accurately because there are numerous pitfalls. If you allocate a new Integer for each store, you conflate your measurement with allocation and introduce bias towards the primitive store. If you pre-allocate an Integer[] you can make the measured workload more cache-friendly, from a GC book-keeping point of view, which helps reduce the reference store cost. In a multithreaded context, this same property can create bias in the opposite direction. JMH can’t prevent any of these biases. Be skeptical about the accuracy or generality of the numbers here (because there are a large number of unexplored dimensions to control, as I have alluded to) but you would hardly notice any difference in a single threaded benchmark storing the same boxed integer repeatedly.
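
For reference, a sketch of the shape of the benchmark being described (the class and field names here are mine, chosen to match the results below; the acceptor classes are the ones above):

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread) // each benchmark thread gets its own acceptors and its own Integer instance
public class SerialGCStoreBenchmark {

  private final IntAcceptor intAcceptor = new IntAcceptor();
  private final IntegerAcceptor integerAcceptor = new IntegerAcceptor();
  private final Integer boxed = 42; // the same instance is stored repeatedly, so allocation isn't measured

  @Benchmark
  public void storeInt() {
    intAcceptor.setValue(42);
  }

  @Benchmark
  public void storeInteger() {
    integerAcceptor.setValue(boxed);
  }
}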

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit
SerialGCStoreBenchmark.storeInt thrpt 1 20 395.370723 10.092432 ops/us
SerialGCStoreBenchmark.storeInteger thrpt 1 20 277.329797 18.036629 ops/us

You may see a large difference in a multithreaded benchmark, with an Integer instance per thread.

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit
SerialGCStoreBenchmark.storeInt thrpt 4 20 1467.401084 5.917960 ops/us
SerialGCStoreBenchmark.storeInteger thrpt 4 20 793.880064 459.304449 ops/us

The throughput of storeInteger seems to have a large error term; here are the iteration figures:

Iteration   1: 1176.474 ops/us
Iteration   2: 85.966 ops/us
Iteration   3: 1180.612 ops/us
Iteration   4: 90.930 ops/us
Iteration   5: 1180.955 ops/us
Iteration   6: 1181.966 ops/us
Iteration   7: 88.801 ops/us
Iteration   8: 1180.723 ops/us
Iteration   9: 1177.895 ops/us
Iteration  10: 1138.446 ops/us
Iteration  11: 1177.302 ops/us
Iteration  12: 91.551 ops/us
Iteration  13: 1144.591 ops/us
Iteration  14: 102.143 ops/us
Iteration  15: 1179.683 ops/us
Iteration  16: 1184.222 ops/us
Iteration  17: 85.365 ops/us
Iteration  18: 1183.874 ops/us
Iteration  19: 95.979 ops/us
Iteration  20: 1150.123 ops/us

This is bimodal, varying from iteration to iteration between almost as fast as the primitive store and an order of magnitude slower, with nothing in between. If you compare the disassembly for loops setting distinct values, such as in my simplistic benchmark, you will see the assembly is virtually identical, but you'll notice these instructions for the reference stores:


  0.98%    1.12%  │  0x00007f54a96462ee: shr    $0x9,%r10
  2.22%    2.17%  │  0x00007f54a96462f2: movabs $0x7f54c1bc5000,%r11
  2.30%    2.69%  │  0x00007f54a96462fc: mov    %r12b,(%r11,%r10,1) 

This code does card marking, which tracks references between different sections of the heap at a coarse granularity. The byte array is the card table, and each of its elements corresponds to a 512 byte card of the heap. The right shift divides the address of the object being written into by 512 to get the card it resides in, and the byte at this index, offset from the base address of the card table, is written to. In other words, the card containing the object holding the updated reference is marked as potentially containing references into other regions. This is what you would see if you squinted at the heap: the card table is a coarse approximation of the object graph which allows false positives (marked cards may contain only dead or uninteresting references) but no false negatives.

The writes to the card table are volatile, and the card table is shared between threads, which can induce false sharing when different threads store references into objects whose cards fall within the same cache line of the card table. Conditional marking can be used to avoid this, because the card is often already marked and the write can be skipped. The bimodal behaviour is caused by unlucky combinations of addresses resulting in false sharing of the card table, and it doesn't even happen all the time. With -XX:+UseCondCardMark set, the difference gets much smaller, the noise disappears, and the conditional marking logic can be seen in the disassembly.
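
An illustrative Java sketch of the logic the two barrier flavours implement (this is not JVM source; cardTable, DIRTY and holderAddress are stand-ins for JVM internals):

  static byte[] cardTable;       // one byte per 512 byte card of the heap
  static final byte DIRTY = 0;   // HotSpot uses zero to mean dirty

  static void unconditionalMark(long holderAddress) {
    cardTable[(int) (holderAddress >>> 9)] = DIRTY;  // shr $0x9 then a byte store, as in the disassembly above
  }

  static void conditionalMark(long holderAddress) {
    int index = (int) (holderAddress >>> 9);
    if (cardTable[index] != DIRTY) {                 // the extra load, test and branch seen with -XX:+UseCondCardMark
      cardTable[index] = DIRTY;                      // only write if the card isn't already dirty, avoiding false sharing
    }
  }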

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit
SerialGCStoreBenchmark.storeInt thrpt 4 20 1467.464828 12.866720 ops/us
SerialGCStoreBenchmark.storeInteger thrpt 4 20 1114.612419 6.960193 ops/us


                  ╭││  0x00007f003164b9e4: je     0x00007f003164ba04 
                  │││                                                
                  │││                                               
  0.01%    0.00%  │││  0x00007f003164b9e6: mov    %r10,%r8
  4.92%    3.54%  │││  0x00007f003164b9e9: shr    $0x9,%r8
  0.01%    0.00%  │││  0x00007f003164b9ed: movabs $0x7f0048492000,%r9
  3.48%    2.12%  │││  0x00007f003164b9f7: add    %r8,%r9
  0.02%    0.01%  │││  0x00007f003164b9fa: movsbl (%r9),%ecx
  6.51%    6.53%  │││  0x00007f003164b9fe: test   %ecx,%ecx
  1.71%    1.85%  │╰│  0x00007f003164ba00: jne    0x00007f003164b994
                  │ │                                               
                  │ │                                               
                  │ │                                               
  4.76%    5.29%  │ ╰  0x00007f003164ba02: jmp    0x00007f003164b9a0
                  ↘    0x00007f003164ba04: mov    $0xfffffff6,%esi

I intended to provoke this behaviour, but what if I had been measuring something else and hadn’t ensured conditional marking was enabled?

Card marking is common in older garbage collectors because it has low overhead, particularly with conditional marking, but different collectors intercept stores differently, and you can reverse engineer them all without reading the source code. In fact, Nitsan Wakart has written a great post all about store barriers.

The point of this post is that you can detect garbage collector mechanisms with benchmarks: you just need to write them to provoke the actions you think a garbage collector should take, and look for crop circles in the disassembly. However, this assumes you have some kind of mental model of a garbage collector to start with! The new ones are getting very creative, and you might not be able to guess what they do. In principle, garbage collector implementations could modify any application code, so these artifacts could be anywhere.

Parallel Bitmap Aggregation

A bitmap index represents predicates over records as sets consisting of the integer identities of each record satisfying each predicate. This representation is actually a few decades out of date: systems like Pilosa use much more sophisticated data structures, and Sybase had even more on offer back in the 90s. But the chances are that if you've rolled your own bitmap index, you've used equality encoding and have a bitmap per indexed predicate, as sketched below. RoaringBitmap is a great choice for the bitmaps used in this kind of data structure, because it offers a good tradeoff between bitmap compression and performance. It's also succinct, that is, you don't need to decompress the structure in order to operate on it. With the naive index structure described, it's likely that you have many bitmaps to aggregate (union, intersection, difference, and so on) when you want to query your index.
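
For concreteness, a sketch of the naive equality-encoded index just described, with one RoaringBitmap per distinct value of an attribute (Record, records and getCountry are hypothetical placeholders):

  Map<String, RoaringBitmap> countryIndex = new HashMap<>();
  int recordId = 0;
  for (Record record : records) {
    countryIndex.computeIfAbsent(record.getCountry(), k -> new RoaringBitmap())
                .add(recordId);   // the bitmap holds the ids of the records with this value
    ++recordId;
  }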

RoaringBitmap provides a class FastAggregation for aggregations, and the method FastAggregation.and is incredibly fast, particularly given its apparent simplicity. This reflects a nice property of set intersection, that the size of the intersection cannot increase and tends to get smaller as the number of sets increases. Unions and differences are different: the problem size tends to increase in magnitude as the number of contributing sets increases. While FastAggregation.or and FastAggregation.xor are highly optimised, not a lot can be done about the fact that each additional set makes the problem bigger. So it may be worth throwing some threads at the problem, and this gets more attractive as you add more dimensions to your index. You can, of course, completely bypass this need by reading some database research and sharing bitmaps between overlapping predicates.
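
Usage looks something like this, assuming bitmaps have been pulled out of an index for the predicates in a query (FastAggregation.and and FastAggregation.or are real methods; the parallel entry point is the ParallelAggregation class discussed below):

  RoaringBitmap[] bitmaps = bitmapsForPredicates(query);     // hypothetical lookup against the index
  RoaringBitmap intersection = FastAggregation.and(bitmaps); // the intersection can only shrink as bitmaps are added
  RoaringBitmap union = FastAggregation.or(bitmaps);         // the union tends to grow as bitmaps are added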

I implemented the class ParallelAggregation in RoaringBitmap, but I'm not convinced the technique used performs as well as it could. RoaringBitmap stores the 16 bit prefix of each integer in a sorted array, with the lower 16 bits of each integer stored in a container at the same index in another array. This makes the structure very easy to split. The implementation I worked on seeks to exploit this by grouping all the containers by common key as a SortedMap&lt;Short, List&lt;Container&gt;&gt; before executing each reduction (i.e. a Function&lt;List&lt;Container&gt;, Container&gt;) in parallel in a ForkJoinPool, roughly as sketched below. This results in a reasonable speedup of 2x-6.5x compared to FastAggregation on an 8 core machine, but it uses quite a lot of temporary memory just to set the problem up. I don't think it should be possible to beat this approach without grouping the containers by key somehow, but I suspect there are lighter weight approaches which use less memory and give better throughput. Perhaps this would be an interesting problem to work on?
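
A rough sketch of the shape of that approach; the container grouping and per-key reduction are elided behind hypothetical helpers (groupByKey, union), and the real code lives in ParallelAggregation:

  SortedMap<Short, List<Container>> grouped = groupByKey(bitmaps); // one entry per 16 bit prefix present in any bitmap
  List<Container> reduced = grouped.values()
                                   .parallelStream()               // the real implementation submits to a ForkJoinPool
                                   .map(containers -> union(containers)) // the Function<List<Container>, Container> reduction
                                   .collect(Collectors.toList());
  // the reduced containers are then reassembled, in key order, into the result bitmap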