Collaborative Compression

I have recently become interested in how the effects of compression algorithms and text encoding compose, which I started looking at in my last post. Base 64 encoding maps each 6-bit subword of a byte stream to one of 64 possible bytes, which is guaranteed to waste 2 bits per byte, but can encode any binary data as UTF-8 text. On a block of incompressible binary data encoded as base 64, neither LZ4 nor Snappy can compress the text down to the size of the original binary data, whereas GZIP can undo the 33% inflation. With monotonically increasing integers, LZ4 and Snappy achieve size parity with the uncompressed binary data, whereas GZIP can be less effective on base 64 encoded text than on the equivalent binary data.

I was interested to see if using LZ4 or Snappy as an intermediate step between base 64 and GZIP would make a difference. Compressing monotonic integers again, my expectation was that LZ4/Snappy would simply “undo” the base 64 bloat, so that the composition would reach parity with GZIP applied to the raw binary data, but that’s not what happened:

Compression Transformation Count Size (MB)
Uncompressed binary 1000000 976.56
Uncompressed base64 1000000 1304.63
Uncompressed base64/snappy 1000000 969.20
Uncompressed base64/lz4 1000000 993.98
GZIP binary 1000000 337.67
GZIP base64 1000000 523.80
GZIP base64/snappy 1000000 61.60
GZIP base64/lz4 1000000 31.53

I had already noted that all three algorithms have poor compression ratios for this case, which is much better suited to delta encoding. Applying base 64 encoding prior to a fast compression algorithm seems to prime the input for GZIP. I was confused enough by this result that I spent some time on sanity checks, verifying that the compressed output really does decompress back to the original data.
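
For the record, the sanity check amounts to running the pipeline backwards and comparing against the original input. A minimal sketch of that check, assuming the snappy-java library (org.xerial.snappy) alongside the JDK’s Base64 and GZIP streams; this is illustrative, not the benchmark code itself:

import org.xerial.snappy.Snappy;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.Base64;
import java.util.Random;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class RoundTripCheck {

    public static void main(String[] args) throws IOException {
        byte[] original = new byte[1 << 20]; // any input will do
        new Random(0).nextBytes(original);

        // forward: base 64, then snappy, then gzip
        byte[] base64 = Base64.getEncoder().encode(original);
        byte[] snappied = Snappy.compress(base64);
        byte[] gzipped = gzip(snappied);

        // backward: gunzip, then snappy uncompress, then base 64 decode
        byte[] roundTripped = Base64.getDecoder().decode(Snappy.uncompress(gunzip(gzipped)));
        System.out.println(Arrays.equals(original, roundTripped)); // must print true
    }

    static byte[] gzip(byte[] input) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gzos = new GZIPOutputStream(bos)) {
            gzos.write(input);
        }
        return bos.toByteArray();
    }

    static byte[] gunzip(byte[] input) throws IOException {
        try (GZIPInputStream gzis = new GZIPInputStream(new ByteArrayInputStream(input))) {
            return gzis.readAllBytes();
        }
    }
}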

This isn’t very exciting in itself: the result is still not competitive with delta encoding. I ran the same test with a sinusoidal byte stream and saw the opposite effect:

Compression Transformation Count Size (MB)
Uncompressed binary 1000000 976.56
Uncompressed base64 1000000 1304.63
Uncompressed base64/snappy 1000000 1087.73
Uncompressed base64/lz4 1000000 1089.67
GZIP binary 1000000 27.74
GZIP base64 1000000 69.12
GZIP base64/snappy 1000000 282.52
GZIP base64/lz4 1000000 283.81
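
For reference, the sinusoidal input was a byte stream shaped along these lines; the amplitude and frequency here are illustrative rather than the exact values used in the benchmark:

public class SinusoidalData {

    // each byte follows a sine wave: highly repetitive structure, but not monotonic
    static byte[] sinusoidal(int length) {
        byte[] data = new byte[length];
        for (int i = 0; i < length; i++) {
            // scale sin from [-1, 1] into the byte range; constants are illustrative
            data[i] = (byte) (127 * Math.sin(i * 0.01));
        }
        return data;
    }

    public static void main(String[] args) {
        for (byte b : sinusoidal(32)) {
            System.out.print(b + " ");
        }
    }
}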

Still, it’s hard to predict how multiple layers of data representation will interact.

Obfuscated Compressibility

In any real-world system there are often multiple layers of encoding and compression applied to data: a base 64 encoded image in an HTML file may be gzipped for transport; a snappy compressed byte array in a datastore might be base 64 encoded in a JSON message. Encoding and lossless compression are invertible transformations, and an invertible transformation must neither create nor destroy information. Yet, as can be seen in the adversarial case of UUIDs, various textual encodings prevent compression algorithms from reaching their potential (that is, the information content of the data).

Base 64 encoding translates arbitrary binary data to valid UTF-8 text, and is used for representing binary data in JSON messages, images embedded in HTML, and various tokens used for authentication and authorisation. Base 64 maps each 6-bit subword of a byte stream to one of 64 valid bytes, so requires four bytes to encode three input bytes: a 33% overhead.
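
The 4:3 expansion is easy to see with the JDK’s own encoder; a trivial sketch:

import java.util.Base64;

public class Base64Overhead {

    public static void main(String[] args) {
        byte[] three = new byte[] {(byte) 0xDE, (byte) 0xAD, (byte) 0xBE};
        byte[] encoded = Base64.getEncoder().encode(three);
        // 3 input bytes become 4 output bytes: a 33% overhead
        System.out.println(three.length + " -> " + encoded.length);
    }
}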

What would happen if we base 64 encode incompressible binary data (i.e. already compressed, or an encryption key or similar) in JSON, and then apply transport level compression to the JSON? Can any compression algorithm undo the 33% inflation? Of GZIP, LZ4 and Snappy, only GZIP is capable of doing so (albeit extremely slowly). The transformation is opaque to the much faster LZ4 and Snappy algorithms.

Compression Encoding Count Size (MB)
Uncompressed binary 1000000 976.56
Uncompressed base64 1000000 1304.63
GZIP binary 1000000 976.86
GZIP base64 1000000 988.74
LZ4 binary 1000000 980.39
LZ4 base64 1000000 1309.74
Snappy binary 1000000 976.61
Snappy base64 1000000 1304.69
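
A sketch of the kind of measurement behind numbers like these, using the JDK’s Base64 and GZIPOutputStream on a buffer of random (and therefore effectively incompressible) bytes; this is not the benchmark code, and exact sizes will differ:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Base64;
import java.util.Random;
import java.util.zip.GZIPOutputStream;

public class IncompressibleBase64 {

    public static void main(String[] args) throws IOException {
        byte[] binary = new byte[10 << 20]; // 10MB of effectively incompressible bytes
        new Random(0).nextBytes(binary);
        byte[] base64 = Base64.getEncoder().encode(binary);

        System.out.println("binary:       " + binary.length);
        System.out.println("base64:       " + base64.length);       // roughly 33% larger
        System.out.println("gzip(binary): " + gzip(binary).length); // no smaller than the input
        System.out.println("gzip(base64): " + gzip(base64).length); // roughly back to the binary size
    }

    static byte[] gzip(byte[] input) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gzos = new GZIPOutputStream(bos)) {
            gzos.write(input);
        }
        return bos.toByteArray();
    }
}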

Writing monotonically increasing numbers into a byte[] prior to base 64 encoding tells another story. GZIP, as usual, takes a very long time to achieve a reasonable compression ratio, and the ratio depends on the encoding. LZ4 and Snappy are largely insensitive to the base 64 encoding, but neither gets meaningfully below the size of the original binary data.

Compression Encoding Count Size (MB)
Uncompressed binary 1000000 976.56
Uncompressed base64 1000000 1304.63
GZIP binary 1000000 337.67
GZIP base64 1000000 523.80
LZ4 binary 1000000 980.39
LZ4 base64 1000000 987.99
Snappy binary 1000000 976.61
Snappy base64 1000000 948.17

Each four byte sequence in this data is just one greater than the last: the stream could be abbreviated by storing the first value, the difference of the differences between consecutive four byte sequences, and the position of each inflection point. This particular data could take up less than 100 bits, but in order to exploit that, we would have to be expecting this pattern and hope it hasn’t been scrambled by base 64 encoding.
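
To make that concrete, here is a sketch (not the benchmark code) of what a delta-of-delta view of this data looks like: after taking differences of differences, every value is zero, so a scheme that expects the pattern needs almost no space at all.

import java.nio.ByteBuffer;

public class DeltaOfDelta {

    public static void main(String[] args) {
        int count = 1_000_000;
        // monotonically increasing ints packed four bytes at a time, as described above
        ByteBuffer buffer = ByteBuffer.allocate(count * Integer.BYTES);
        for (int i = 0; i < count; i++) {
            buffer.putInt(i);
        }
        buffer.flip();
        int[] values = new int[count];
        buffer.asIntBuffer().get(values);

        // the stream is fully described by its first value and a constant delta
        int nonZeroDeltaOfDeltas = 0;
        for (int i = 2; i < count; i++) {
            int delta = values[i] - values[i - 1];
            int previousDelta = values[i - 1] - values[i - 2];
            if (delta - previousDelta != 0) {
                nonZeroDeltaOfDeltas++;
            }
        }
        System.out.println("first value: " + values[0]);
        System.out.println("non-zero deltas of deltas: " + nonZeroDeltaOfDeltas); // 0
    }
}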

Text encoding is a good way to confound a compression algorithm, but how do compression algorithms compose? This isn’t that odd a question: imagine you store a compressed BLOB in some kind of datastore, and provide an HTTP interface to that datastore. Maybe even without realising it, it’s likely that the compression algorithm used for the BLOBs will be composed with GZIP for the transport, and if the interchange format is JSON, there will be a layer of base 64 encoding involved too.
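
A caricature of that stack, assuming snappy-java for the BLOB compression; the JSON is assembled by hand purely to keep the sketch self-contained:

import org.xerial.snappy.Snappy;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Random;
import java.util.zip.GZIPOutputStream;

public class LayeredRepresentation {

    public static void main(String[] args) throws IOException {
        byte[] blob = new byte[1 << 20]; // pretend this is an incompressible payload
        new Random(0).nextBytes(blob);

        // layer 1: the datastore compresses the BLOB with a fast algorithm
        byte[] stored = Snappy.compress(blob);

        // layer 2: the HTTP interface embeds it in JSON, so it gets base 64 encoded
        String json = "{\"blob\":\"" + Base64.getEncoder().encodeToString(stored) + "\"}";

        // layer 3: the transport gzips the JSON
        byte[] onTheWire = gzip(json.getBytes(StandardCharsets.UTF_8));

        System.out.println("blob:        " + blob.length);
        System.out.println("stored:      " + stored.length);
        System.out.println("json:        " + json.length());
        System.out.println("on the wire: " + onTheWire.length);
    }

    static byte[] gzip(byte[] input) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gzos = new GZIPOutputStream(bos)) {
            gzos.write(input);
        }
        return bos.toByteArray();
    }
}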

For monotonic integers, neither LZ4 nor Snappy compresses the data, but at least they don’t get in GZIP’s way.

Compression Pre-Compression Count Size (MB)
Uncompressed Uncompressed 1000000 976.56
Uncompressed Snappy 1000000 981.33
Uncompressed LZ4 1000000 981.33
LZ4 Uncompressed 1000000 980.39
LZ4 Snappy 1000000 981.32
LZ4 LZ4 1000000 981.32
Snappy Uncompressed 1000000 976.61
Snappy Snappy 1000000 981.37
Snappy LZ4 1000000 981.37
GZIP Uncompressed 1000000 337.67
GZIP Snappy 1000000 339.95
GZIP LZ4 1000000 339.95

It’s worth stressing that both LZ4 and Snappy can compress some data very well; they just aren’t well suited to monotonic integers. Both algorithms are much faster than GZIP, prioritising speed over compression ratio. I haven’t presented timings, but the GZIP code used in these benchmarks is far too slow to be practical in many settings, and its compression ratio is still a long way from optimal.
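
For what it’s worth, the speed/ratio dial is exposed in the JDK through Deflater’s compression level (GZIPOutputStream always uses the default level unless subclassed). A rough sketch, not a benchmark; the nanoTime measurements are only indicative:

import java.nio.ByteBuffer;
import java.util.zip.Deflater;

public class DeflateLevels {

    public static void main(String[] args) {
        // something compressible: monotonically increasing ints, as in the benchmarks above
        ByteBuffer buffer = ByteBuffer.allocate(4 << 20);
        for (int i = 0; buffer.hasRemaining(); i++) {
            buffer.putInt(i);
        }
        byte[] input = buffer.array();

        for (int level : new int[] {Deflater.BEST_SPEED, Deflater.DEFAULT_COMPRESSION, Deflater.BEST_COMPRESSION}) {
            long start = System.nanoTime();
            int compressedSize = deflate(input, level);
            long millis = (System.nanoTime() - start) / 1_000_000;
            System.out.println("level " + level + ": " + compressedSize + " bytes in " + millis + " ms");
        }
    }

    // raw DEFLATE, the algorithm underneath GZIP, at an explicit compression level
    static int deflate(byte[] input, int level) {
        Deflater deflater = new Deflater(level);
        deflater.setInput(input);
        deflater.finish();
        byte[] buffer = new byte[1 << 16];
        int total = 0;
        while (!deflater.finished()) {
            total += deflater.deflate(buffer);
        }
        deflater.end();
        return total;
    }
}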

I often see decisions to use a compression algorithm taken rather arbitrarily, without due consideration of the data being compressed. This is often driven by the quest for modularity. Modularity entails composition, which can occasionally be toxic, and often requires that we forget aspects of domain knowledge that could be exploited to great effect.

My code for this blog post is on GitHub.

UUIDs and Compressibility

Universally unique identifiers, or UUIDs, are often used for database primary keys in scenarios where coordination of persistence is either impossible or impractical. UUIDs offer very good probabilistic guarantees of collision avoidance, at the cost of 128 bits per key. 128 bits for a key is quite problematic in key scans and joins: with appropriately structured data, these algorithms can benefit significantly from vector processing, but at 128 bits per element, vectorisation is probably a lost cause. The 128 bit cost is also a fixed cost, even if your database has fewer than one quintilliard rows. By virtue of being random enough to avoid collisions when generated in a distributed system, there is no way to compress UUID keys to the number of bits required to identify each record in the dataset. Data engineering is about tradeoffs, and none of this is to say UUIDs should never be used, but it is prudent to be aware of the costs. All of this applies in the best case: assuming the keys are stored in their binary format! How bad can it get if UUIDs are stored as text? Can compression undo the damage?

If you work with a relational database like Postgres, you can use an implementation-specific uuid type to ensure UUIDs take up as little space as possible. However, having worked on several projects using NoSQL databases, I have seen people store UUIDs as text on at least two occasions (though this is not the fault of the databases!). How harmful this is depends on the character encoding used, but UTF-8 is quite common (for the characters found in a UUID, this is equivalent to ISO-8859-1). A UUID represented by a string like “9289f568-c33f-4667-8a3d-092aa4e21262” can take up the following sizes, depending on the encoding used.

Format Size (bytes) Ratio
binary 16 1
ISO-8859-1 36 2.25
UTF-8 36 2.25
UTF-16 74 4.625
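
These sizes fall straight out of the JDK; a quick sketch (the 74 bytes for UTF-16 include the 2-byte byte order mark the JDK writes by default):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class UUIDSizes {

    public static void main(String[] args) {
        UUID uuid = UUID.randomUUID();

        // binary: two longs, 16 bytes
        byte[] binary = ByteBuffer.allocate(16)
                .putLong(uuid.getMostSignificantBits())
                .putLong(uuid.getLeastSignificantBits())
                .array();

        String text = uuid.toString(); // 36 characters
        System.out.println("binary:     " + binary.length);                                     // 16
        System.out.println("ISO-8859-1: " + text.getBytes(StandardCharsets.ISO_8859_1).length); // 36
        System.out.println("UTF-8:      " + text.getBytes(StandardCharsets.UTF_8).length);      // 36
        System.out.println("UTF-16:     " + text.getBytes(StandardCharsets.UTF_16).length);     // 74
    }
}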

The real issue here is not so much the extra storage burden, because keys are usually much smaller than the values, but that the keys are used to process queries. A representation requiring 2-5x more space means 2-5x more data passing through the processor when evaluating queries. Many NoSQL databases offer succinct compression options for keys, which allow the keys to be processed without decompression at a small computational cost, such as the prefix and delta encoding available in HBase. This approach can work wonders with well structured keys, but it will do absolutely nothing if your keys are random.
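
To see why structure matters, here is a toy prefix encoder (nothing like HBase’s actual implementation): each key is stored as the length of the prefix it shares with the previous key plus the remaining suffix. Sorted, structured keys share long prefixes; random UUIDs share almost nothing, so the encoding saves almost nothing.

import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.UUID;

public class ToyPrefixEncoding {

    public static void main(String[] args) {
        // structured keys: a common prefix plus a counter (names are made up)
        String[] structured = new String[1000];
        for (int i = 0; i < structured.length; i++) {
            structured[i] = String.format("customer/2024/%06d", i);
        }
        // random keys: sorted UUIDs still share next to nothing
        String[] random = new String[1000];
        for (int i = 0; i < random.length; i++) {
            random[i] = UUID.randomUUID().toString();
        }
        Arrays.sort(random);

        System.out.println("structured: " + encodedSize(structured) + " bytes");
        System.out.println("random:     " + encodedSize(random) + " bytes");
    }

    // encoded size: 1 byte for the shared prefix length, plus the unshared suffix
    static int encodedSize(String[] sortedKeys) {
        int total = 0;
        String previous = "";
        for (String key : sortedKeys) {
            int shared = sharedPrefixLength(previous, key);
            total += 1 + key.getBytes(StandardCharsets.ISO_8859_1).length - shared;
            previous = key;
        }
        return total;
    }

    static int sharedPrefixLength(String left, String right) {
        int limit = Math.min(left.length(), right.length());
        int i = 0;
        while (i < limit && left.charAt(i) == right.charAt(i)) {
            i++;
        }
        return i;
    }
}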

Even heavyweight compression techniques requiring global or block level decompression before evaluation can’t recover the bloat in a textual representation of a UUID, because the text is almost as random as the bytes. I compressed collections of 1M UUIDs using compression algorithms typically reserved for “cold data”: GZIP, Snappy and LZ4, using the code on GitHub.

Compression Encoding Count Size (MB)
Uncompressed binary 1000000 15.26
Uncompressed ISO-8859-1 1000000 34.33
Uncompressed UTF-16 1000000 70.57
GZIP binary 1000000 15.26
GZIP ISO-8859-1 1000000 19.50
GZIP UTF-16 1000000 23.73
LZ4 binary 1000000 15.32
LZ4 ISO-8859-1 1000000 32.56
LZ4 UTF-16 1000000 50.16
Snappy binary 1000000 15.26
Snappy ISO-8859-1 1000000 33.99
Snappy UTF-16 1000000 37.97

Assuming you are OK with treating your keys as cold data, none of these algorithms will undo the inflation. What’s interesting, assuming you’ve never thought about it before, is that none of these algorithms can compress the binary representation of the UUIDs. This is because the UUIDs are random, and random enough to be taken as unique in any given trillion year epoch. Even though there are only one million values, which could be represented by 20 bits per value, none of the compression algorithms improves on 128 bits per value. This reminds me of a passage from Theories of Everything by John D. Barrow:

The goal of science is to make sense of the diversity of nature. [Science] employs observation to gather information about the world and to test predictions about how the world will react to new circumstances, but in between these two procedures lies the heart of the scientific process. This is nothing more than the transformation of lists of observational data into abbreviated form by the recognition of patterns. The recognition of such a pattern allows the information content of the observed sequence of events to be replaced by a shorthand formula which possesses the same, or almost the same, information content…

We can extend this image of science in a manner that sharpens its focus. Suppose we are presented with any string of symbols. They do not have to be numbers but let us assume for the sake of illustration that they are. We say that the string is ‘random’ if there is no other representation of the string which is shorter than itself. But we say it is ‘non-random’ if there does exist an abbreviated representation.

Random data can’t be compressed, and inflated textual representations of random bits are almost as random as the bits themselves, as far as any existing compression algorithm is concerned. Efficient representation of data requires the identification and exploitation of patterns, and using UUIDs instead of naturally occurring composite keys introduces randomness where there could often be order.