Base64 Encoding

Base64 encoding is the standard mechanism for converting binary data to text, and is used extensively in web technologies. You will encounter it wherever binary data such as authentication tokens or compressed BLOBs meets JSON.

There are two common Base64 encoding formats: a standard format and a URL-safe format, which replaces ‘/’ with ‘_’ and ‘+’ with ‘-’. Base64 uses ASCII characters, which require just a byte each, but since each output byte encodes only six bits of input, the output is always 33% larger than the input. Encoding a sequence of bytes as text is a simple two-stage process:

  1. For each three-byte sequence, shift each 6-bit subword onto an 8-bit boundary to produce four intermediate bytes
  2. For each of the four intermediate bytes, look up the corresponding character in a 64-element table for the format being used.

It’s easy to implement:


    // assumes in (the input bytes), out (the output) and ENCODING (a 64-element
    // lookup table for the chosen alphabet) are in scope; tail/padding handling is omitted
    int i = 0;
    int j = 0;
    // 3 bytes in, 4 bytes out
    for (; j + 3 < out.length && i + 2 < in.length; i += 3, j += 4) {
      // map 3 bytes into the lower 24 bits of an integer
      int word = (in[i + 0] & 0xFF) << 16 | (in[i + 1] & 0xFF) << 8 | (in[i + 2] & 0xFF);
      // for each 6-bit subword, find the appropriate character in the lookup table
      out[j + 0] = ENCODING[(word >>> 18) & 0x3F];
      out[j + 1] = ENCODING[(word >>> 12) & 0x3F];
      out[j + 2] = ENCODING[(word >>> 6) & 0x3F];
      out[j + 3] = ENCODING[word & 0x3F];
    }
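
Neither of these steps needs to be hand-rolled in practice: both formats described above have shipped with the JDK since JDK 8 in java.util.Base64. A quick illustration of the alphabet substitution, with a two-byte input chosen so that indices 62 and 63 appear (class name arbitrary, expected output in the comments):

    import java.util.Base64;

    public class Alphabets {
      public static void main(String[] args) {
        byte[] data = {(byte) 0xfb, (byte) 0xff};                        // 6-bit indices 62, 63 and (padded) 60
        System.out.println(Base64.getEncoder().encodeToString(data));    // +/8=
        System.out.println(Base64.getUrlEncoder().encodeToString(data)); // -_8=
      }
    }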

The JDK encoder, java.util.Base64.Encoder, benchmarks favourably against older library implementations. How does it work? The best way to find out is JMH perfasm. Here’s the hottest part of the encoding loop for a byte[] (JDK 10 on Ubuntu), annotated to explain what is going on.


                  ╭        0x00007f02993ad1e9: jge    0x00007f02993ad3ae  
                  │        0x00007f02993ad1ef: mov    %r13d,%edx
                  │        0x00007f02993ad1f2: xor    %r10d,%r10d
  0.01%    0.00%  │╭       0x00007f02993ad1f5: jmpq   0x00007f02993ad305
                  ││       0x00007f02993ad1fa: nopw   0x0(%rax,%rax,1)
  2.36%    2.08%  ││ ↗     0x00007f02993ad200: add    $0x4,%r10d         
  2.33%    2.11%  ││ │     0x00007f02993ad204: mov    %r8d,%esi          
  2.38%    2.17%  ││ │  ↗  0x00007f02993ad207: vmovq  %xmm0,%r11              ; overflow into FPU has occurred, move byte from SSE register just in time
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  2.34%    2.21%  ││ │  │  0x00007f02993ad20c: movzbl 0x10(%r11,%rsi,1),%r11d ; load the first input byte
  2.34%    2.21%  ││ │  │  0x00007f02993ad212: shl    $0x10,%r11d             ; shift the first input byte into the third byte of the intermediate dword (r11d)  
  2.43%    2.17%  ││ │  │  0x00007f02993ad216: mov    %esi,%r8d
  2.35%    2.34%  ││ │  │  0x00007f02993ad219: add    $0x3,%r8d          
  2.37%    2.20%  ││ │  │  0x00007f02993ad21d: movslq %esi,%rax
  2.36%    2.18%  ││ │  │  0x00007f02993ad220: mov    %r10d,%edi
  2.38%    2.69%  ││ │  │  0x00007f02993ad223: inc    %edi               
  2.38%    2.77%  ││ │  │  0x00007f02993ad225: vmovq  %xmm0,%rsi              ; floating point spill, move to rsi just in time to load the byte
  2.31%    2.71%  ││ │  │  0x00007f02993ad22a: movzbl 0x12(%rsi,%rax,1),%esi  ; load the third input byte 
  2.41%    2.77%  ││ │  │  0x00007f02993ad22f: vmovq  %xmm0,%rbp
  2.37%    2.26%  ││ │  │  0x00007f02993ad234: movzbl 0x11(%rbp,%rax,1),%eax  ; load the second input byte
  2.34%    2.19%  ││ │  │  0x00007f02993ad239: shl    $0x8,%eax               ; shift the second input byte into the second byte of r11d
  2.28%    2.17%  ││ │  │  0x00007f02993ad23c: or     %eax,%r11d              ; combine the second byte with the intermediate dword (disjoint union, byte 2 now in second byte of dword)
  2.39%    2.14%  ││ │  │  0x00007f02993ad23f: or     %esi,%r11d              ; combine the third byte with the intermediate word (disjoint union, byte 3 now in lower order bits of dword)
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  2.28%    2.72%  ││ │  │  0x00007f02993ad242: mov    %r11d,%esi              ; encoding starts here, copies the intermediate word in esi to break dependency on r11d
  2.29%    2.67%  ││ │  │  0x00007f02993ad245: shr    $0x12,%esi              ; shift the copy of the dword right by $0x12 = 18 bits (careful: hex!)
  2.27%    2.78%  ││ │  │  0x00007f02993ad248: and    $0x3f,%esi              ; mask to get 6 bits (one position in lookup table computed and stored in esi now)   
  2.37%    2.79%  ││ │  │  0x00007f02993ad24b: movzwl 0x10(%r14,%rsi,2),%eax  ; load the character encoding the first byte
  2.54%    2.29%  ││ │  │  0x00007f02993ad251: cmp    %edx,%r10d
                  ││╭│  │  0x00007f02993ad254: jae    0x00007f02993ad469
  2.29%    2.22%  ││││  │  0x00007f02993ad25a: mov    %al,0x10(%r9,%r10,1)    ; store the first output character in the output array
  2.43%    2.29%  ││││  │  0x00007f02993ad25f: mov    %r10d,%eax
  2.33%    2.16%  ││││  │  0x00007f02993ad262: add    $0x3,%eax
  2.34%    2.14%  ││││  │  0x00007f02993ad265: mov    %r11d,%esi
  2.37%    2.24%  ││││  │  0x00007f02993ad268: shr    $0xc,%esi               ; shift a copy of the dword right by 12 bits  
  2.36%    2.26%  ││││  │  0x00007f02993ad26b: and    $0x3f,%esi              ; mask it to get 6 bits
  2.26%    2.83%  ││││  │  0x00007f02993ad26e: movzwl 0x10(%r14,%rsi,2),%esi  ; load the character encoding for the second output byte
  2.37%    2.72%  ││││  │  0x00007f02993ad274: cmp    %edx,%eax
                  ││││  │  0x00007f02993ad276: jae    0x00007f02993ad4bd
  2.27%    2.80%  ││││  │  0x00007f02993ad27c: mov    %r11d,%edi              ; copy the intermediate word
  2.43%    2.81%  ││││  │  0x00007f02993ad27f: and    $0x3f,%edi              ; mask the lower 6 bits
  2.29%    2.12%  ││││  │  0x00007f02993ad282: movzwl 0x10(%r14,%rdi,2),%eax  ; load the character encoding for the fourth output byte
  2.36%    2.22%  ││││  │  0x00007f02993ad288: shr    $0x6,%r11d              ; shift the word 6 bits right
  2.31%    2.15%  ││││  │  0x00007f02993ad28c: and    $0x3f,%r11d             ; mask the lower 6 bits
  2.40%    2.26%  ││││  │  0x00007f02993ad290: movzwl 0x10(%r14,%r11,2),%r11d ; load the character encoding for third output byte 

The first thing to notice is that there are several XMM registers used, but no vectorisation: these are floating point spills, and their presence indicates register pressure. Hotspot stores variables in XMM registers to avoid spilling them somewhere costlier to fetch from, but instructions for manipulating bytes and integers can’t take an XMM register as an operand, so each variable is moved to an appropriate general purpose register just in time. Note that in the perfasm output, the target of the move from the XMM register is always used as an operand immediately after the move.

The middle section above is spent collecting the three input bytes into an integer; this accounts for roughly one third of the total runtime of the method. Some of it executes in parallel: the loads of the input bytes are independent, but combining them is sequential. The intermediate integer is then copied several times before the encoding lookup, so the four lookups can happen independently. Roughly 45% of the time is spent on the lookups.

Can this code be vectorised? Yes: Wojciech Mula has a blog post on this topic, and wrote a paper with Daniel Lemire. I read both of these references recently. Their approach is roughly ten times faster than their scalar baseline, which is virtually identical to the JDK implementation.

For AVX2, their algorithm starts by loading 32 bytes into a YMM register, making sure that the 24 bytes of interest land in the centre of the register; that is, the 4 bytes at either end are rubbish and will be ignored. This is achieved by loading at an offset of -4, which is quite problematic in a safe language like Java. The 24 bytes in the middle of the register are then permuted so that a permutation (with duplication) of each 3-byte sequence is contained within a 4-byte lane.

load 24 bytes centred in a 256-bit register
|**-**-**-**|A1-A2-A3-B1|B2-B3-C1-C2|C3-D1-D2-D3|E1-E2-E3-F1|F2-F3-G1-G2|G3-H1-H2-H3|**-**-**-**|
permute the bytes of the LHS
|10-11- 9-10| 7- 8- 6- 7| 4- 5- 3- 4| 1- 2- 0- 1|
|**-**-**-**|A1-A2-A3-B1|B2-B3-C1-C2|C3-D1-D2-D3|E1-E2-E3-F1|F2-F3-G1-G2|G3-H1-H2-H3|**-**-**-**|
|A2-A3-A1-A2|B2-B3-B1-B2|C2-C3-C1-C2|D2-D3-D1-D2|E1-E2-E3-F1|F2-F3-G1-G2|G3-H1-H2-H3|**-**-**-**|
permute the bytes of the RHS 
                                                |14-15-13-14|11-12-10-11| 8- 9- 7- 8| 5- 6- 4- 5|
|A2-A3-A1-A2|B2-B3-B1-B2|C2-C3-C1-C2|D2-D3-D1-D2|E1-E2-E3-F1|F2-F3-G1-G2|G3-H1-H2-H3|**-**-**-**|
|A2-A3-A1-A2|B2-B3-B1-B2|C2-C3-C1-C2|D2-D3-D1-D2|E2-E3-E1-E2|F2-F3-F1-F2|G2-G3-G1-G2|H2-H3-H1-H2|

It looks like the ordering is wrong and the second byte is always duplicated, but this is intentional. Now that the bytes are in the correct integer lanes, in order to use them as indices into the encoding lookup table, the four 6-bit sequences need to be extracted by masking and shifting into the right positions.

Each 24-bit sequence has undergone the following transformation, where the letter indicates the 6-bit sequence and the number indicates the bit position, shown in 8-bit lanes:

|         PREVIOUS      |           A1          |           A2          |          A3           |
|**-**-**-**-**-**-**-**|a8-a7-a6-a5-a4-a3-b2-b1|b8-b7-b6-b5-c4-c3-c2-c1|c8-c7-d6-d5-d4-d3-d2-d1|
|           A2          |           A3          |           A1          |          A2           |
|b8-b7-b6-b5-c4-c3-c2-c1|c8-c7-d6-d5-d4-d3-d2-d1|a8-a7-a6-a5-a4-a3-b2-b1|b8-b7-b6-b5-c4-c3-c2-c1|

Based on the input integer and its shuffled form above, the target structure of each integer prior to encoding is:

|**-**-**-**-**-**-**-**|a8-a7-a6-a5-a4-a3-b2-b1|b8-b7-b6-b5-c4-c3-c2-c1|c8-c7-d6-d5-d4-d3-d2-d1|
|b8-b7-b6-b5-c4-c3-c2-c1|c8-c7-d6-d5-d4-d3-d2-d1|a8-a7-a6-a5-a4-a3-b2-b1|b8-b7-b6-b5-c4-c3-c2-c1|
|00-00-d6-d5-d4-d3-d2-d1|00-00-c4-c3-c2-c1-c8-c7|00-00-b2-b1-b8-b7-b6-b5|00-00-a8-a7-a6-a5-a4-a3|

So, sequence c needs to move 6 bits right and sequence a must move right by 10 bits. AVX2 doesn’t make it possible to do both of these shifts at once. So the following operations could be performed:

mask a
| 0- 0- 0- 0- 0- 0- 0- 0| 0- 0- 0- 0- 0- 0- 0- 0| 1- 1- 1- 1- 1- 1- 0- 0| 0- 0- 0- 0- 0- 0- 0- 0|
|b8-b7-b6-b5-c4-c3-c2-c1|c8-c7-d6-d5-d4-d3-d2-d1|a8-a7-a6-a5-a4-a3-b2-b1|b8-b7-b6-b5-c4-c3-c2-c1|
|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|a8-a7-a6-a5-a4-a3-00-00|00-00-00-00-00-00-00-00|
shift right 10 bits
|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|00-00-a8-a7-a6-a5-a4-a3|
mask c
| 0- 0- 0- 0- 1- 1- 1- 1| 1- 1- 0- 0- 0- 0- 0- 0| 0- 0- 0- 0- 0- 0- 0- 0| 0- 0- 0- 0- 0- 0- 0- 0|
|b8-b7-b6-b5-c4-c3-c2-c1|c8-c7-d6-d5-d4-d3-d2-d1|a8-a7-a6-a5-a4-a3-b2-b1|b8-b7-b6-b5-c4-c3-c2-c1|
|00-00-00-00-c4-c3-c2-c1|c8-c7-00-00-00-00-00-00|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|
shift right 6 bits
|00-00-00-00-00-00-00-00|00-00-c4-c3-c2-c1-c8-c7|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|
union
|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|00-00-a8-a7-a6-a5-a4-a3|
|00-00-00-00-00-00-00-00|00-00-c4-c3-c2-c1-c8-c7|00-00-00-00-00-00-00-00|00-00-00-00-00-00-00-00|
|00-00-00-00-00-00-00-00|00-00-c4-c3-c2-c1-c8-c7|00-00-00-00-00-00-00-00|00-00-a8-a7-a6-a5-a4-a3|

However, that’s two masks, two shifts and a union, and it needs several registers for temporary results. A single mask can be created by broadcasting 0x0fc0fc00, and the two independent 16-bit shifts can be emulated in a single instruction with a special multiplication: the semantic snowflake vpmulhuw, which performs an unsigned 16-bit multiplication and stores the upper 16 bits of each product.

Sequence b needs to shift left by 4 bits, and sequence d by 8 bits. Rather than use two more masks and shifts, a single vpmullw, a 16-bit multiplication which stores the lower 16 bits of each product, achieves both shifts after masking with 0x003f03f0. This result is united with the result of the first multiplication to get the correct output.

|00-00-d6-d5-d4-d3-d2-d1|00-00-c4-c3-c2-c1-c8-c7|00-00-b2-b1-b8-b7-b6-b5|00-00-a8-a7-a6-a5-a4-a3|
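
To convince yourself that the multiplications really do emulate the four shifts, here is a scalar sketch of the trick on a single 32-bit word (the helper names are mine; the vectorised code applies the same arithmetic to all sixteen 16-bit lanes of a YMM register at once). The multiplier constants follow from the shift amounts above: a right shift by s is a mulhi by 2^(16-s), and a left shift by s is a mullo by 2^s.

  // emulates vpmulhuw on one 32-bit word: two independent 16-bit lanes,
  // each multiplied separately, keeping the upper 16 bits of each product
  static int mulhiU16(int word, int multiplier) {
    int hi = ((word >>> 16) * (multiplier >>> 16)) >>> 16;
    int lo = ((word & 0xFFFF) * (multiplier & 0xFFFF)) >>> 16;
    return hi << 16 | lo;
  }

  // emulates vpmullw: as above, but keeping the lower 16 bits of each product
  static int mulloU16(int word, int multiplier) {
    int hi = ((word >>> 16) * (multiplier >>> 16)) & 0xFFFF;
    int lo = ((word & 0xFFFF) * (multiplier & 0xFFFF)) & 0xFFFF;
    return hi << 16 | lo;
  }

  // word holds the shuffled bytes described above; the result has one 6-bit index per byte
  static int extractIndices(int word) {
    // a sits 10 bits too high and c sits 6 bits too high: mulhi by 2^6 and 2^10 respectively
    int ac = mulhiU16(word & 0x0fc0fc00, 0x04000040);
    // b sits 4 bits too low and d sits 8 bits too low: mullo by 2^4 and 2^8 respectively
    int bd = mulloU16(word & 0x003f03f0, 0x01000010);
    return ac | bd;
  }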

Now for the encoding itself! One approach would be to dump the content of the vector into a byte array and use scalar code to do the lookups just like the JDK implementation does, but this stage accounted for 45% of the execution time in the scalar implementation, so the encoding needs to use vector instructions too.

If base 64 used only characters in the range [0, 64), the lookup would be expressible as a closed permutation of the indices, albeit a permutation too large for AVX2. Performing the lookup as a permutation isn’t possible anyway, because several base 64 characters lie outside that range. However, the encoding character can be computed by adding an offset to the index, and there are only five distinct contiguous ranges. Within each range the offset is the same, so there are only five possible offsets; if a small value identifying the offset can be computed quickly from each 6-bit index, an offset vector can be looked up and simply added to the index vector. Then it’s just a question of finding and tuning a reduction from index to offset key.

This should be obvious from this table.

Position Character Decimal Offset Reduced Nibble
0 A 65 65 13
1 B 66 65 13
2 C 67 65 13
3 D 68 65 13
4 E 69 65 13
5 F 70 65 13
6 G 71 65 13
7 H 72 65 13
8 I 73 65 13
9 J 74 65 13
10 K 75 65 13
11 L 76 65 13
12 M 77 65 13
13 N 78 65 13
14 O 79 65 13
15 P 80 65 13
16 Q 81 65 13
17 R 82 65 13
18 S 83 65 13
19 T 84 65 13
20 U 85 65 13
21 V 86 65 13
22 W 87 65 13
23 X 88 65 13
24 Y 89 65 13
25 Z 90 65 13
26 a 97 71 0
27 b 98 71 0
28 c 99 71 0
29 d 100 71 0
30 e 101 71 0
31 f 102 71 0
32 g 103 71 0
33 h 104 71 0
34 i 105 71 0
35 j 106 71 0
36 k 107 71 0
37 l 108 71 0
38 m 109 71 0
39 n 110 71 0
40 o 111 71 0
41 p 112 71 0
42 q 113 71 0
43 r 114 71 0
44 s 115 71 0
45 t 116 71 0
46 u 117 71 0
47 v 118 71 0
48 w 119 71 0
49 x 120 71 0
50 y 121 71 0
51 z 122 71 0
52 0 48 -4 1
53 1 49 -4 2
54 2 50 -4 3
55 3 51 -4 4
56 4 52 -4 5
57 5 53 -4 6
58 6 54 -4 7
59 7 55 -4 8
60 8 56 -4 9
61 9 57 -4 10
62 + 43 -19 11
63 / 47 -16 12

The offset column is the value of the character minus the index, and the reduced nibble is a number computed by an efficient, if inelegant, sequence of vector operations that can be read about in the paper. Given that there are only five valid offsets, they could be specified by a three-bit value, and overspecified by a nibble, but a bit of redundancy allows a faster computation. The mapping is specified as follows: upper case letters to the number 13, lower case letters to zero, each number i to i + 1, and ‘+’ and ‘/’ to 11 and 12 respectively. These nibbles are then used as the input to a permutation using vpshufb to produce the appropriate offset to add to the index.

             
plus ------------------------------------------------------\         /------------------ forward slash
digits  -----------\***********************************\    \       /   /--------------- upper case
lower case -----\   \***********************************\    \     /   /   /***/-------- undefined 
Reduced Nibble [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10,  11,  12, 13, 14, 15]
Offset         [71, -4, -4, -4, -4, -4, -4, -4, -4, -4, -4, -19, -16, 65,  0,  0]

A call to vpshufb with a vector of reduced nibbles and the permutation above as input produces an offset vector which is added to the indexes to encode a vector of 6-bit values.
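
A scalar rendering of the lookup might make the mechanics clearer (names are mine). The reduced nibble can be computed as a saturating subtraction of 51 plus a correction of 13 for the uppercase range, which reproduces the mapping described above and is, as far as I can tell, what the vector code does lane-wise before the vpshufb against the offset permutation:

  // offsets indexed by reduced nibble, matching the permutation above
  private static final int[] OFFSETS = {
      71, -4, -4, -4, -4, -4, -4, -4, -4, -4, -4, -19, -16, 65, 0, 0
  };

  static char encodeIndex(int index) {
    // saturating (index - 51), plus 13 if the index is in the uppercase range [0, 26)
    int reducedNibble = Math.max(index - 51, 0) + (index < 26 ? 13 : 0);
    return (char) (index + OFFSETS[reducedNibble]);
  }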

Would it be possible to implement a vpshufb permutation like this in the Vector API? I expect this will be too complex to express, precisely because the algorithm works around, and therefore embraces, the fact that vpshufb performs two independent 128-bit permutations rather than a single 256-bit permutation. The same effect could be achieved with two SSE 128-bit loads and permutes, but loading 256-bit vectors from pairs of 128-bit vectors is convoluted as things stand.

For the extraction step, I doubt the semantics of _mm256_mulhi_epi16 or _mm256_mullo_epi16 will ever be exposed to Java programmers, but it is possible to take the slow path and perform independent shifts. It just so happens that the calculation of the offset key relies on unsigned 8-bit arithmetic, which does not exist in Java, but there are simpler, albeit slower, techniques to calculate it. AVX2 is weird, with an abundance of unexpected capabilities alongside screaming feature gaps, and all the AVX2 code I have read is teeming with ingenious hacks. I can imagine language designers being reluctant to enshrine these peculiarities in a higher level language.

The real question is why bother? The fact that we use text so often moves the goalposts from copying memory to all of the complexity above. The fastest base 64 encoding is the one you don’t do.

I highly recommend reading Wojciech Mula’s blog if you are interested in what’s possible with vectorisation.

John Rose wrote an interesting response to the paragraphs about the Vector API in this post here.

Collaborative Compression

I have recently become interested in the way the effects of compression algorithms and text encoding compose. I started looking at this in my last post. Base 64 encoding extracts and maps each 6-bit subword of a byte stream to one of 64 possible bytes, which is guaranteed to waste 2 bits per byte, but can encode any binary data as UTF-8 text. On a block of incompressible binary data encoded as base 64, neither LZ4 nor Snappy can compress the text back down to the size of the original binary data, whereas GZIP can undo the 33% inflation. With monotonically increasing integers, LZ4 and Snappy achieve size parity with the uncompressed binary data, whereas GZIP can be less effective on base 64 encoded text than on the equivalent binary data.
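
For concreteness, the "monotonically increasing integers" are the kind of input you get by serialising a counter into a byte[]; something like the sketch below, which is a guess at the shape of the benchmark data rather than the exact code from the repository:

  import java.nio.ByteBuffer;

  // write 'count' 32-bit integers, increasing by one, into a byte[] (big-endian by default)
  static byte[] monotonic(int count) {
    ByteBuffer buffer = ByteBuffer.allocate(count * Integer.BYTES);
    for (int i = 0; i < count; ++i) {
      buffer.putInt(i);
    }
    return buffer.array();
  }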

I was interested to see if using LZ4 or Snappy as an intermediate step between base 64 and GZIP would make a difference. Compressing monotonic integers again, my expectation was that LZ4/Snappy would "undo" the base 64 bloat, so that the composition with GZIP would reach parity with GZIP applied to the raw binary data, but that's not what happened:

Compression Transformation Count Compressed Size (MB)
Uncompressed binary 1000000 976.56
Uncompressed base64 1000000 1304.63
Uncompressed base64/snappy 1000000 969.20
Uncompressed base64/lz4 1000000 993.98
GZIP binary 1000000 337.67
GZIP base64 1000000 523.80
GZIP base64/snappy 1000000 61.60
GZIP base64/lz4 1000000 31.53

I had already noted that all three algorithms had poor compression ratios for this case, which is better suited to delta encoding. Applying base 64 encoding prior to a fast compression algorithm seems to prime the input for GZIP. I’m so confused by this result that I spent some time on sanity checks: verifying that the results are truly decompressible.

This isn’t very exciting: the result is not competitive with delta encoding, and I ran the same test with a sinusoidal byte stream and saw the opposite effect.

Compression Transformation Count Compressed Size (MB)
Uncompressed binary 1000000 976.56
Uncompressed base64 1000000 1304.63
Uncompressed base64/snappy 1000000 1087.73
Uncompressed base64/lz4 1000000 1089.67
GZIP binary 1000000 27.74
GZIP base64 1000000 69.12
GZIP base64/snappy 1000000 282.52
GZIP base64/lz4 1000000 283.81

Still, it’s hard to predict how multiple layers of data representation will interact.

Obfuscated Compressibility

In any real world system there are often multiple layers of encoding and compression applied to data; a base 64 encoded image in an HTML file may be gzipped for transport; a snappy compressed byte array in a datastore might be base 64 encoded in a JSON message. Encoding and lossless compression are invertible transformations, and an invertible transformation must neither create nor destroy information. Yet, as can be seen in the adversarial case of UUIDs, various textual encodings prevent compression algorithms from reaching their potential (that is, the information content of the data).

Base 64 encoding translates arbitrary binary data to valid UTF-8 text, and is used for representing binary data in JSON messages, images embedded in HTML, and various tokens used for authentication and authorisation. Base 64 maps each 6-bit subword of a byte stream to one of 64 valid bytes, so requires four bytes to encode three input bytes: a 33% overhead.

What would happen if we base 64 encode incompressible binary data (i.e. already compressed, or an encryption key or similar) in JSON, and then apply transport level compression to the JSON? Can any compression algorithm undo the 33% inflation? Of GZIP, LZ4 and Snappy, only GZIP is capable of doing so (albeit extremely slowly). The transformation is opaque to the much faster LZ4 and Snappy algorithms.

Compression Encoding Count Compressed Size (MB)
Uncompressed binary 1000000 976.56
Uncompressed base64 1000000 1304.63
GZIP binary 1000000 976.86
GZIP base64 1000000 988.74
LZ4 binary 1000000 980.39
LZ4 base64 1000000 1309.74
Snappy binary 1000000 976.61
Snappy base64 1000000 1304.69
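
The shape of this experiment is easy to reproduce with nothing but the JDK; here is a minimal sketch of the GZIP rows (class name arbitrary; the real benchmarks use JMH, and the LZ4 and Snappy rows slot their respective library calls into the same pipeline):

  import java.io.ByteArrayOutputStream;
  import java.io.IOException;
  import java.util.Base64;
  import java.util.concurrent.ThreadLocalRandom;
  import java.util.zip.GZIPOutputStream;

  public class Inflation {
    public static void main(String[] args) throws IOException {
      byte[] binary = new byte[1 << 20];
      ThreadLocalRandom.current().nextBytes(binary);          // incompressible input
      byte[] base64 = Base64.getEncoder().encode(binary);     // ~33% larger
      System.out.println("binary:       " + binary.length);
      System.out.println("base64:       " + base64.length);
      System.out.println("gzip(binary): " + gzip(binary).length);
      System.out.println("gzip(base64): " + gzip(base64).length);
    }

    static byte[] gzip(byte[] input) throws IOException {
      ByteArrayOutputStream bos = new ByteArrayOutputStream();
      try (GZIPOutputStream stream = new GZIPOutputStream(bos)) {
        stream.write(input);
      }
      return bos.toByteArray();
    }
  }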

Writing monotonically increasing numbers into a byte[] prior to base 64 encoding tells another story. GZIP, as usual, takes a very long time to achieve a reasonable compression ratio, but the ratio depends on the encoding. LZ4 and Snappy are insensitive to base 64 encoding but they can’t compress beyond the size of the original data.

Compression Encoding Count Compressed Size (MB)
Uncompressed binary 1000000 976.56
Uncompressed base64 1000000 1304.63
GZIP binary 1000000 337.67
GZIP base64 1000000 523.80
LZ4 binary 1000000 980.39
LZ4 base64 1000000 987.99
Snappy binary 1000000 976.61
Snappy base64 1000000 948.17

The difference between each subsequent four-byte value in this data is a single bit: it could be abbreviated by storing the first value, the difference of the differences between values, and the position of each inflection point. This particular data could take up less than 100 bits, but in order to exploit that we would have to be expecting this pattern, and hope it hasn’t been scrambled by base 64 encoding.
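
To make the abbreviation concrete, here is a sketch of the idea (names are mine, and this is only an illustration, not a storage format): keep the first value and the current step, and record a new step only at each inflection point.

  import java.util.ArrayList;
  import java.util.List;

  // returns {position, value-or-step} pairs: the first value, the initial step,
  // and a new step at each inflection point; a simple counter collapses to two entries
  static List<int[]> deltaOfDelta(int[] values) {
    List<int[]> abbreviated = new ArrayList<>();
    abbreviated.add(new int[] {0, values[0]});
    int step = values[1] - values[0];
    abbreviated.add(new int[] {1, step});
    for (int i = 2; i < values.length; ++i) {
      int delta = values[i] - values[i - 1];
      if (delta != step) {                      // an inflection point
        abbreviated.add(new int[] {i, delta});
        step = delta;
      }
    }
    return abbreviated;
  }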

Text encoding is a good way to confound a compression algorithm, but how do compression algorithms compose? This isn’t that odd a question: imagine you store a compressed BLOB in some kind of datastore, and provide an HTTP interface to that datastore. Maybe even without realising it, it’s likely that the compression algorithm used for the BLOBs will be composed with GZIP for the transport, and if the interchange format is JSON, there will be a layer of base 64 encoding involved too.

For monotonic integers, neither LZ4 nor Snappy compresses the data, but at least they don’t get in GZIP’s way.

Compression Pre-Compression Count Compressed Size (MB)
Uncompressed Uncompressed 1000000 976.56
Uncompressed Snappy 1000000 981.33
Uncompressed LZ4 1000000 981.33
LZ4 Uncompressed 1000000 980.39
LZ4 Snappy 1000000 981.32
LZ4 LZ4 1000000 981.32
Snappy Uncompressed 1000000 976.61
Snappy Snappy 1000000 981.37
Snappy LZ4 1000000 981.37
GZIP Uncompressed 1000000 337.67
GZIP Snappy 1000000 339.95
GZIP LZ4 1000000 339.95

It’s worth stressing that both LZ4 and Snappy can compress some data very well; they just aren’t well suited to monotonic integers. Both algorithms are much faster than GZIP, prioritising speed over compression ratio. I haven’t measured it here, but the GZIP code used in these benchmarks is too slow to be practical, and its compression ratio is still a long way from optimal.

I often see decisions to use a compression algorithm taken rather arbitrarily, without due consideration of the data being compressed. This is often driven by the quest for modularity. Modularity entails composition, which can occasionally be toxic, and often requires that we forget aspects of domain knowledge that could be exploited to great effect.

My code for this blog post is at github.

UUIDs and Compressibility

Universally unique identifiers, or UUIDs, are often used for database primary keys in scenarios where coordination of persistence is either impossible or impractical. UUIDs offer very good probabilistic guarantees of collision avoidance, at the cost of 128 bits per key. 128 bits per key is quite problematic in key scans and joins: with appropriately structured data, these algorithms can benefit significantly from vector processing, but at 128 bits per element, vectorisation is probably a lost cause. The 128 bit cost is also a fixed cost, even if your database has fewer than one quintilliard rows. Because UUIDs must be random enough to avoid collisions when generated in a distributed system, there is no way to compress UUID keys down to the number of bits actually required to identify each record in the dataset. Data engineering is about tradeoffs, and none of this is to say UUIDs should never be used, but it is prudent to be aware of the costs. All of this applies in the best case: assuming the keys are stored in their binary format! How bad can it get if UUIDs are stored as text? Can compression undo the damage?

If you work with a relational database like Postgres, you can use an implementation-specific uuid type to ensure UUIDs take up as little space as possible. However, having worked on several projects using NoSQL databases, I have seen people store UUIDs as text on at least two occasions (though this is not the fault of the databases!). How harmful this is depends on the character encoding used, but UTF-8 is quite common (for the characters found in a UUID, this is equivalent to ISO-8859-1). A UUID represented by a string like “9289f568-c33f-4667-8a3d-092aa4e21262” can take up the following sizes, depending on the encoding used.

Format Size (bytes) Ratio
binary 16 1
ISO-8859-1 36 2.25
UTF-8 36 2.25
UTF-16 74 4.625
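
These figures are easy to reproduce with the JDK; the UTF-16 number lines up with the two-byte byte order mark that String.getBytes(UTF_16) emits (class name arbitrary):

  import java.nio.ByteBuffer;
  import java.nio.charset.StandardCharsets;
  import java.util.UUID;

  public class UuidSizes {
    public static void main(String[] args) {
      UUID uuid = UUID.randomUUID();
      byte[] binary = ByteBuffer.allocate(16)
                                .putLong(uuid.getMostSignificantBits())
                                .putLong(uuid.getLeastSignificantBits())
                                .array();
      String text = uuid.toString();                                         // 36 characters
      System.out.println(binary.length);                                     // 16
      System.out.println(text.getBytes(StandardCharsets.ISO_8859_1).length); // 36
      System.out.println(text.getBytes(StandardCharsets.UTF_8).length);      // 36
      System.out.println(text.getBytes(StandardCharsets.UTF_16).length);     // 74
    }
  }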

The real issue here is not so much the extra storage burden, because keys are usually much smaller than the values, but that the keys are used to process queries. A representation requiring 2-5x more space requires 2-5x more data to pass through the processor when evaluating queries. Many NoSQL databases offer succinct compression options for keys, which allow the keys to be processed without decompression at a small computational cost, such as the prefix and delta encoding available in HBase. This approach can work wonders with well structured keys, but this will do absolutely nothing if your keys are random.

Even heavyweight compression techniques requiring global or block level decompression before evaluation can’t recover the bloat in a textual representation of a UUID because the text is almost as random as the bytes. I compressed collections of 1M UUIDs using compression algorithms typically reserved for “cold data”: gzip, snappy and LZ4, using the code on github.

Compression Encoding Count Size (MB)
Uncompressed binary 1000000 15.26
Uncompressed ISO-8859-1 1000000 34.33
Uncompressed UTF-16 1000000 70.57
GZIP binary 1000000 15.26
GZIP ISO-8859-1 1000000 19.50
GZIP UTF-16 1000000 23.73
LZ4 binary 1000000 15.32
LZ4 ISO-8859-1 1000000 32.56
LZ4 UTF-16 1000000 50.16
Snappy binary 1000000 15.26
Snappy ISO-8859-1 1000000 33.99
Snappy UTF-16 1000000 37.97

Assuming you are OK with treating your keys as cold data, none of these algorithms will undo the inflation. What’s interesting, assuming you’ve never thought about it before, is that none of these algorithms can compress the binary representation of the UUIDs. This is because the UUIDs are random, and random enough to be taken as unique in any given trillion year epoch. Even though there are only one million values, which could be represented by 20 bits per value, none of the compression algorithms improves on 128 bits per value. This reminds me of a passage from Theories of Everything by John D. Barrow:

The goal of science is to make sense of the diversity of nature. [Science] employs observation to gather information about the world and to test predictions about how the world will react to new circumstances, but in between these two procedures lies the heart of the scientific process. This is nothing more than the transformation of lists of observational data into abbreviated form by the recognition of patterns. The recognition of such a pattern allows the information content of the observed sequence of events to be replaced by a shorthand formula which possesses the same, or almost the same, information content…

We can extend this image of science in a manner that sharpens its focus. Suppose we are presented with any string of symbols. They do not have to be numbers but let us assume for the sake of illustration that they are. We say that the string is ‘random’ if there is no other representation of the string which is shorter than itself. But we say it is ‘non-random’ if there does exist an abbreviated representation.

Random data can’t be compressed, and inflated textual representations of random bits are almost as random as the bits themselves, as far as any existing compression algorithm is concerned. Efficient representation of data requires the identification and exploitation of patterns, and using UUIDs instead of naturally occurring composite keys introduces randomness where there could often be order.

Limiting Factors in a Dot Product Calculation

The dot product is ubiquitous in computing. The simple calculation is used heavily in neural networks for perceptron activation and again in gradient descent algorithms for error backpropagation. It is one of the building blocks of linear regression. Cosine similarity in document search is yet another dot product.

These use cases are more often implemented in C/C++ than in JVM languages, for reasons of efficiency, but what are the constraints on its computational performance? The combination of the computational simplicity and its streaming nature means the limiting factor in efficient code should be memory bandwidth. This is a good opportunity to look at the raw performance that will be made available with the vector API when it’s released.

Since Java 9, the code to calculate a dot product is easy to write using the Math.fma intrinsic.

  public float vanilla() {
    float sum = 0f;
    for (int i = 0; i < size; ++i) {
      sum = Math.fma(left[i], right[i], sum);
    }
    return sum;
  }

Despite its simplicity, this code is incredibly inefficient precisely because it’s written in Java. Java is a language which prizes portability, and this sometimes comes at the cost of performance. The only way to make this routine produce the same number given the same input, no matter what operating system or instruction sets are available, is to do the operations in the same order, which means no unrolling or vectorisation. For a web application this is a good trade-off, but for data analytics it is not.
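
The reason the order matters is that floating point addition is not associative, so any reordering is observable in the result; a tiny demonstration:

  float big = 1e8f;
  float tiny = 1e-8f;
  // the same three values summed in two different orders
  System.out.println((big + tiny) - big); // 0.0: tiny is absorbed by big
  System.out.println(tiny + (big - big)); // 1.0E-8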

An estimate of intensity, assuming a constant processor frequency and two floating point operations (flops) per FMA, shows that the intensity is constant but very low at 0.67 flops/cycle. Constant intensity as a function of array size is interesting because it indicates that the performance is insensitive to the cache hierarchy: the limit is the CPU. Daniel Lemire recently made the same observation with a benchmark written in C, with fastmath compiler optimisations disabled.

The JLS’s view on floating point arithmetic is the true limiting factor here. Assuming you really care about dot product performance, the best you can do to opt out is to unroll the loop and get slightly higher throughput.

  public float unrolled() {
    float s0 = 0f;
    float s1 = 0f;
    float s2 = 0f;
    float s3 = 0f;
    float s4 = 0f;
    float s5 = 0f;
    float s6 = 0f;
    float s7 = 0f;
    for (int i = 0; i < size; i += 8) {
      s0 = Math.fma(left[i + 0],  right[i + 0], s0);
      s1 = Math.fma(left[i + 1],  right[i + 1], s1);
      s2 = Math.fma(left[i + 2],  right[i + 2], s2);
      s3 = Math.fma(left[i + 3],  right[i + 3], s3);
      s4 = Math.fma(left[i + 4],  right[i + 4], s4);
      s5 = Math.fma(left[i + 5],  right[i + 5], s5);
      s6 = Math.fma(left[i + 6],  right[i + 6], s6);
      s7 = Math.fma(left[i + 7],  right[i + 7], s7);
    }
    return s0 + s1 + s2 + s3 + s4 + s5 + s6 + s7;
  }

The intensity is about 4x better, but still constant. My Intel Skylake processor is capable of 32 flops/cycle (two 256-bit FMA ports, each retiring one eight-lane FMA, or 16 flops, per cycle), so this code is clearly still not very efficient, but it’s actually the best you can do with any released version of OpenJDK at the time of writing.

The Vector API

I have been keeping an eye on the Vector API incubating in Project Panama for some time, and have only recently got round to kicking the tires. I wrote some benchmarks earlier in the year but ran into, as one should expect of a project in active development, bugs in FMA and vector box elimination, which limited the value I would get from writing about them. These bugs have been fixed for a long time now, and you can start to see how good this API is going to be.

Here’s a simple implementation which wouldn’t be legal for C2 (or Graal, for that matter) to generate from the dot product loop. It relies on an accumulator vector, into which the element-wise products of the next eight pairs of elements are FMA’d at each step of the loop.

  public float vector() {
    var sum = YMM_FLOAT.zero();                        // YMM_FLOAT: a species of 256-bit (eight-lane) float vectors
    for (int i = 0; i < size; i += YMM_FLOAT.length()) {
      var l = YMM_FLOAT.fromArray(left, i);
      var r = YMM_FLOAT.fromArray(right, i);
      sum = l.fma(r, sum);                             // lane-wise sum += l * r
    }
    return sum.addAll();                               // horizontal sum of the lanes
  }

This loop can be unrolled, but it seems that this must be done manually for the sake of stability. The unroll below uses four accumulators and results in a huge boost in throughput.

  private float vectorUnrolled() {
    var sum1 = YMM_FLOAT.zero();
    var sum2 = YMM_FLOAT.zero();
    var sum3 = YMM_FLOAT.zero();
    var sum4 = YMM_FLOAT.zero();
    int width = YMM_FLOAT.length();
    for (int i = 0; i < size; i += width * 4) {
      sum1 = YMM_FLOAT.fromArray(left, i).fma(YMM_FLOAT.fromArray(right, i), sum1);
      sum2 = YMM_FLOAT.fromArray(left, i + width).fma(YMM_FLOAT.fromArray(right, i + width), sum2);
      sum3 = YMM_FLOAT.fromArray(left, i + width * 2).fma(YMM_FLOAT.fromArray(right, i + width * 2), sum3);
      sum4 = YMM_FLOAT.fromArray(left, i + width * 3).fma(YMM_FLOAT.fromArray(right, i + width * 3), sum4);
    }
    return sum1.addAll() + sum2.addAll() + sum3.addAll() + sum4.addAll();
  }

This plot doesn’t quite do justice to how large the difference is and you could be forgiven for thinking the performance converges. In fact, presenting the data like this is a great way to mislead people! The absolute difference narrows, but the relative performance is more or less constant. Looking at intensity gives a much better picture and is size invariant (until memory bandwidth is saturated).

The first thing to notice is that the intensity gets nowhere near 32 flops/cycle, and that’s because my chip can’t load data fast enough to keep the two FMA ports busy. Skylake chips can do two loads per cycle, which is enough for one FMA between two vectors and the accumulator. Since the arrays are effectively streamed, there is no chance to reuse any loads, so the absolute maximum intensity is 50% capacity, or just 16 flops/cycle.

In the unrolled vector code, the intensity hits 12 flops/cycle just before 4096 elements. 4096 is a special number because 2 * 4096 * 4 = 32kB is the capacity of L1 cache. This peak and rapid decrease suggest that the code is fast enough to be hitting memory bandwidth: if L1 were larger or L2 were faster, the intensity could be sustained. This is great, and the performance counters available with -prof perfnorm corroborate it.

In the vanilla loop and the unrolled loop, the cycles per instruction (CPI) reaches a maximum long before the arrays breach L1 cache. The latency of an instruction depends on where its operands come from, increasing the further from L1 cache the data has to travel. If the CPI for arrays either side of the magical 4096 element threshold is the same, then memory cannot be the limiting factor. The unrolled vector loop shows a very sharp increase, suggesting a strong dependency on load speed. Similarly, L1-dcache-load-misses predictably increase sharply once the arrays are no longer L1 resident, correlated with a drop in intensity only in the unrolled vector implementation. It’s short-lived, but the unrolled vector code, albeit with bounds checks disabled, is efficient enough that the CPU is not the bottleneck.

Take a look at my benchmarks and raw data. The JDK used was built from the vectorIntrinsics branch of the Project Panama OpenJDK fork, run with JMH 1.20 on Ubuntu 16.04 LTS, on a 4-core i7-6700HQ processor.