# Population Count in Java

How do you count the set bits in a 32-bit integer? This can be done in a single instruction, popcntd, which is exposed by an intrinsic method in Java and several other languages, so the question is completely academic. Nevertheless, however futile, deriving an efficient expression is instructive.
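The intrinsic in question is Integer.bitCount, for example:

```java
public class PopCountExample {
    public static void main(String[] args) {
        // Integer.bitCount is intrinsified to a hardware population count
        System.out.println(Integer.bitCount(0b10110010)); // prints 4
        System.out.println(Integer.bitCount(-1));         // prints 32: all bits set
    }
}
```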

A naive approach would be to check each of the 32 bits in sequence. This can be written in Java as follows:


public static int populationCountCheckEachBit(int value) {
    int count = 0;
    for (int i = 0; i < Integer.SIZE; ++i) {
        if ((value & (1 << i)) != 0) {
            ++count;
        }
    }
    return count;
}


This has constant and high execution time, even when most of the bits are unset: there will always be 32 left shifts and 32 intersections. There is no inherent data dependency in the loop above, so it can probably be unrolled and pipelined; even so, it’s just too long to be practically useful. A less naive approach is to skip over the unset bits, which will actually be quite fast when the data is sparse.


public static int populationCountSkipUnsetBits(int value) {
    int count = 0;
    while (value != 0) {
        value ^= value & -value;
        ++count;
    }
    return count;
}


The code above calculates the lowest set bit and unsets it until no bits remain. In other languages, resetting the bit could use the blsr instruction, but C2 emits code using the blsi instruction and an xor here. This code will do well for sparse data, but it has a data dependency and performance will be absolutely terrible for dense data (such as small negative numbers).
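To see why value & -value isolates the lowest set bit, here is a small trace (the values are illustrative):

```java
public class LowestBitExample {
    public static void main(String[] args) {
        int value = 0b1011000;
        // two's complement negation flips every bit above the lowest set bit,
        // so the intersection leaves only that bit
        int lowest = value & -value;
        value ^= lowest; // clears the lowest set bit
        System.out.println(Integer.toBinaryString(lowest)); // prints 1000
        System.out.println(Integer.toBinaryString(value));  // prints 1010000
    }
}
```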

Since an integer’s population count is the sum of the population counts of its constituent bytes, and the population count of a byte can only take 256 values, why not precompute a small lookup table containing the population counts for each possible byte? Then, with four masks, three right shifts, four moves and three additions, the population count can be calculated.


private static int[] LOOKUP = {
    0, 1, 1, 2, 1, 2, 2, 3,
    1, 2, 2, 3, 2, 3, 3, 4,
    1, 2, 2, 3, 2, 3, 3, 4,
    2, 3, 3, 4, 3, 4, 4, 5,
    1, 2, 2, 3, 2, 3, 3, 4,
    2, 3, 3, 4, 3, 4, 4, 5,
    2, 3, 3, 4, 3, 4, 4, 5,
    3, 4, 4, 5, 4, 5, 5, 6,
    1, 2, 2, 3, 2, 3, 3, 4,
    2, 3, 3, 4, 3, 4, 4, 5,
    2, 3, 3, 4, 3, 4, 4, 5,
    3, 4, 4, 5, 4, 5, 5, 6,
    2, 3, 3, 4, 3, 4, 4, 5,
    3, 4, 4, 5, 4, 5, 5, 6,
    3, 4, 4, 5, 4, 5, 5, 6,
    4, 5, 5, 6, 5, 6, 6, 7,
    1, 2, 2, 3, 2, 3, 3, 4,
    2, 3, 3, 4, 3, 4, 4, 5,
    2, 3, 3, 4, 3, 4, 4, 5,
    3, 4, 4, 5, 4, 5, 5, 6,
    2, 3, 3, 4, 3, 4, 4, 5,
    3, 4, 4, 5, 4, 5, 5, 6,
    3, 4, 4, 5, 4, 5, 5, 6,
    4, 5, 5, 6, 5, 6, 6, 7,
    2, 3, 3, 4, 3, 4, 4, 5,
    3, 4, 4, 5, 4, 5, 5, 6,
    3, 4, 4, 5, 4, 5, 5, 6,
    4, 5, 5, 6, 5, 6, 6, 7,
    3, 4, 4, 5, 4, 5, 5, 6,
    4, 5, 5, 6, 5, 6, 6, 7,
    4, 5, 5, 6, 5, 6, 6, 7,
    5, 6, 6, 7, 6, 7, 7, 8
};

public static int populationCountWithLookupTable(int value) {
    return LOOKUP[value & 0xFF]
         + LOOKUP[(value & 0xFF00) >>> 8]
         + LOOKUP[(value & 0xFF0000) >>> 16]
         + LOOKUP[(value & 0xFF000000) >>> 24];
}


This isn’t as stupid as it looks. The number of instructions is low and they can be pipelined easily. C2 obviously can’t autovectorise this, but I imagine this could possibly end up being quite fast (if used in a loop) once the Vector API becomes a reality. Lemire and Muła devised a fast vectorised population count algorithm based on a lookup table of precalculated population counts for each nibble. Their algorithm is used by clang to calculate the population count of an array, but is far beyond both the scope of this post and the capabilities of Java.
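The core of their idea can be sketched in scalar Java: one lookup per 4-bit nibble instead of per byte. This is only an illustrative sketch — the real algorithm performs 32 such lookups at once with a vpshufb shuffle.

```java
public class NibblePopCount {
    // population counts of the 16 possible nibble values
    private static final int[] NIBBLE_LOOKUP = {
        0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4
    };

    public static int populationCount(int value) {
        int count = 0;
        for (int i = 0; i < 8; ++i) {
            // extract each 4-bit nibble and look up its population count
            count += NIBBLE_LOOKUP[(value >>> (4 * i)) & 0xF];
        }
        return count;
    }
}
```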

We can avoid storing the table, while using very few instructions, with a divide and conquer approach, writing the result in place. The first thing to notice is that the population count of N bits can be expressed in at most N bits. So, interpreting the integer as a 16-element string of 2-bit nibbles, we can calculate each 2-bit population count and store it in the same 2-bit nibble.

The masks 0x55555555 and 0xAAAAAAAA each have alternating bits and are logical complements. Remember that the population count is the sum of the population counts of the even bits and the odd bits. The code below calculates the number of bits in each 2-bit nibble and stores the result into the same 2-bit nibble. It works because the addition can only carry left into a zero bit (the odd bits have all been shifted right).


int output = (value & 0x55555555) // mask the even bits
+ ((value & 0xAAAAAAAA) >>> 1); // mask the odd bits and shift right so they line up with the even bits


By way of example, consider the input value 0b11001010101101010101010101010011. The population count is 17, and the output takes the value 0b10000101011001010101010101010010. Notice that no 2-bit nibble takes the value 0b11 – we have 16 values of either zero, one or two: 2 + 0 + 1 + 1 + 1 + 2 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 0 + 2 = 17. It’s not necessary to have two separate constants: (value & 0xAAAAAAAA) >>> 1 is equivalent to (value >>> 1) & 0x55555555. This saves a register.
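The identity is easy to spot-check (this snippet is purely for verification):

```java
public class MaskEquivalence {
    public static void main(String[] args) {
        int[] samples = {0b11001010101101010101010101010011, -1, 0, 42, Integer.MIN_VALUE};
        for (int value : samples) {
            // the odd bits shifted into even positions, computed both ways
            boolean same = ((value & 0xAAAAAAAA) >>> 1) == ((value >>> 1) & 0x55555555);
            System.out.println(same); // prints true for every sample
        }
    }
}
```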

We now have a smaller problem: we need to add up the 16 2-bit nibbles. The mask 0x33333333 covers all the even 2-bit nibbles, and the mask 0xCCCCCCCC covers all the odd 2-bit nibbles. Shifting the odd values right and adding them to the even ones gives eight nibbles consisting of the 4-bit population counts:


value = (value & 0x55555555) + ((value >>> 1) & 0x55555555);
value = (value & 0x33333333) + ((value >>> 2) & 0x33333333);


Like before, the expression (value & 0xCCCCCCCC) >>> 2 has been replaced by (value >>> 2) & 0x33333333 to save a constant. Now we have eight nibbles to add up into four bytes, after that we have two shorts, and finally a single integer. The complete method ends up as follows:


public static int populationCountWithMasks(int value) {
    value = (value & 0x55555555) + ((value >>> 1) & 0x55555555);
    value = (value & 0x33333333) + ((value >>> 2) & 0x33333333);
    value = (value & 0x0F0F0F0F) + ((value >>> 4) & 0x0F0F0F0F);
    value = (value & 0x00FF00FF) + ((value >>> 8) & 0x00FF00FF);
    value = (value & 0x0000FFFF) + ((value >>> 16) & 0x0000FFFF);
    return value;
}


You can almost see it already, but if you write the hexadecimal constants above in binary you will realise that this is quite an elegant solution: the masks look like a tree:

01010101010101010101010101010101
00110011001100110011001100110011
00001111000011110000111100001111
00000000111111110000000011111111
00000000000000001111111111111111


This elegance comes at a small cost. There are various profitable transformations, the simplest of which is the elision of the redundant final mask. The others are more involved and are covered in depth in chapter 5 of Hacker’s Delight. The end result can be seen in the Integer class.


@HotSpotIntrinsicCandidate
public static int bitCount(int i) {
    // HD, Figure 5-2
    i = i - ((i >>> 1) & 0x55555555);
    i = (i & 0x33333333) + ((i >>> 2) & 0x33333333);
    i = (i + (i >>> 4)) & 0x0f0f0f0f;
    i = i + (i >>> 8);
    i = i + (i >>> 16);
    return i & 0x3f;
}


The method above is intrinsified by C2 to the popcntd instruction, and this method is the only way to access the instruction from Java. If it’s not already obvious, the power of having this access can be shown with a comparative benchmark.

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit
intrinsic thrpt 1 10 341.572057 1.983535 ops/us
lookupTable thrpt 1 10 205.373131 0.557472 ops/us
masks thrpt 1 10 191.744272 1.942700 ops/us
naive thrpt 1 10 26.651332 0.101285 ops/us
skipUnsetBits thrpt 1 10 94.125249 0.559893 ops/us

Despite its power, loops containing popcnt can quickly become bottlenecks, since no vectorisation of this operation is possible prior to the AVX-512 VPOPCNTD/VPOPCNTQ extension (available virtually nowhere). Looking beneath the surface is intriguing: I’m sure that with explicit vectorisation the lookup approach could be powerful.

# Building RoaringBitmaps from Streams

RoaringBitmap is a fast compressed bitset format. In the Java implementation of Roaring, it was until recently preferable to build a bitset in one go from sorted data; there were performance penalties of varying magnitude for incremental or unordered insertions. In a recent pull request, I wanted to improve incremental monotonic insertion so I could build bitmaps from streams, but sped up unsorted batch creation significantly by accident.

### Incremental Ordered Insertion

If you want to build a bitmap, you can do so efficiently with the RoaringBitmap.bitmapOf factory method.


int[] data = ...
RoaringBitmap bitmap = RoaringBitmap.bitmapOf(data);


However, I often find that I want to stream integers into a bitmap. Given that the integers being inserted into a bitmap often represent indices into an array, such a stream is likely to be monotonic. You might implement this like so:


IntStream stream = ...
RoaringBitmap bitmap = new RoaringBitmap();
stream.forEach(bitmap::add);


While this is OK, it has a few inefficiencies compared to the batch creation method.

• Indirection: the container being written to must be located on each insertion
• Eagerness: the cardinality must be kept up to date on each insertion
• Allocation pressure: the best container type can’t be known in advance. Choice of container may change as data is inserted, this requires allocations of new instances.

You could also collect the stream into an int[] and use the batch method, but that creates a potentially large temporary array, with obvious drawbacks.
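The collect-then-batch alternative can be sketched like this (the wrapper method is hypothetical; bitmapOf is the real factory method):

```java
import java.util.stream.IntStream;
import org.roaringbitmap.RoaringBitmap;

public class CollectThenBuild {
    public static RoaringBitmap build(IntStream stream) {
        // materialises the whole stream up front: simple, but a potentially
        // large temporary array for a long stream
        int[] data = stream.toArray();
        return RoaringBitmap.bitmapOf(data);
    }
}
```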

### OrderedWriter

The solution I proposed is to create a writer object (OrderedWriter) which allocates a small 8KB buffer, used as a bitmap large enough to cover the low 16 bits of any value. The stream-to-bitmap code becomes:


IntStream stream = ...
RoaringBitmap bitmap = new RoaringBitmap();
OrderedWriter writer = new OrderedWriter(bitmap);
stream.forEach(writer::add);
writer.flush(); // clear the buffer out


This is implemented so that changes in key (where the most significant 16 bits of each integer is stored) trigger a flush of the buffer.


public void add(int value) {
    short key = Util.highbits(value);
    short low = Util.lowbits(value);
    if (key != currentKey) {
        if (Util.compareUnsigned(key, currentKey) < 0) {
            throw new IllegalStateException("Must write in ascending key order");
        }
        flush();
    }
    int ulow = low & 0xFFFF;
    bitmap[(ulow >>> 6)] |= (1L << ulow);
    currentKey = key;
    dirty = true;
}


When a flush occurs, a container type is chosen and appended to the bitmap’s prefix index.


public void flush() {
    if (dirty) {
        RoaringArray highLowContainer = underlying.highLowContainer;
        // we check that it's safe to append since RoaringArray.append does no validation
        if (highLowContainer.size > 0) {
            short key = highLowContainer.getKeyAtIndex(highLowContainer.size - 1);
            if (Util.compareUnsigned(currentKey, key) <= 0) {
                throw new IllegalStateException("Cannot write " + currentKey + " after " + key);
            }
        }
        highLowContainer.append(currentKey, chooseBestContainer());
        clearBitmap();
        dirty = false;
    }
}


There are significant performance advantages to this approach. There is no indirection cost, and no searching the prefix index for containers: the writes are just buffered. The buffer is small enough to fit in cache, and containers only need to be created when the writer is flushed, which happens whenever a new key is seen, or when flush is called manually. During a flush, the cardinality can be computed in one go, the best container can be chosen, and run optimisation only has to happen once. Computing the cardinality is the only bottleneck – it requires 1024 calls to Long.bitCount, which can’t be vectorised in a language like Java. The cardinality can’t be maintained incrementally on insertion without either sacrificing idempotence or incurring the cost of a membership check. After the flush, the buffer needs to be cleared, using a call to Arrays.fill, which is vectorised. So, despite the cost of the buffer, this can be quite efficient.
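That flush-time cardinality computation over the 1024-word buffer is just (the method name is illustrative):

```java
public class BufferCardinality {
    // 8KB buffer = 1024 longs = 2^16 bits, one per possible low 16-bit value
    public static int cardinality(long[] bitmap) {
        int count = 0;
        for (long word : bitmap) {
            count += Long.bitCount(word); // intrinsified to popcnt
        }
        return count;
    }
}
```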

This approach isn’t universally applicable. For instance, you must write data in ascending order of the most significant 16 bits. You must also remember to flush the writer when you’re finished: until you’ve called flush, the data in the last container may not be in the bitmap. For my particular use case, this is reasonable. However, there are times when this is not fit for purpose, such as if you are occasionally inserting values and expect them to be available to queries immediately. In general, if you don’t know when you’ll stop adding data to the bitmap, this isn’t a good fit because you won’t know when to call flush.

### Benchmark

The RoaringBitmap project is a big user of JMH, and most pull requests require benchmarks as evidence of performance improvement. I benchmarked the two approaches, varying bitmap sizes and randomness (likelihood of there not being a compressible run), and was amazed to find that this approach actually beats having a sorted array and using RoaringBitmap.bitmapOf. Less surprising was beating the existing API for incremental adds (this was the goal in the first place). Lower is better:

Benchmark (randomness) (size) Mode Cnt Score Error Units
buildRoaringBitmap 0.1 10000 avgt 5 54.263 3.393 us/op
buildRoaringBitmap 0.1 100000 avgt 5 355.188 15.234 us/op
buildRoaringBitmap 0.1 1000000 avgt 5 3567.839 135.149 us/op
buildRoaringBitmap 0.1 10000000 avgt 5 31982.046 1227.325 us/op
buildRoaringBitmap 0.5 10000 avgt 5 53.855 0.887 us/op
buildRoaringBitmap 0.5 100000 avgt 5 357.671 14.111 us/op
buildRoaringBitmap 0.5 1000000 avgt 5 3556.152 243.671 us/op
buildRoaringBitmap 0.5 10000000 avgt 5 34385.971 3864.143 us/op
buildRoaringBitmap 0.9 10000 avgt 5 59.354 10.385 us/op
buildRoaringBitmap 0.9 100000 avgt 5 374.245 54.485 us/op
buildRoaringBitmap 0.9 1000000 avgt 5 3712.684 657.964 us/op
buildRoaringBitmap 0.9 10000000 avgt 5 37223.976 4691.297 us/op
incrementalNativeAdd 0.1 10000 avgt 5 115.213 31.909 us/op
incrementalNativeAdd 0.1 100000 avgt 5 911.925 127.922 us/op
incrementalNativeAdd 0.1 1000000 avgt 5 8889.49 320.821 us/op
incrementalNativeAdd 0.1 10000000 avgt 5 102819.877 14247.868 us/op
incrementalNativeAdd 0.5 10000 avgt 5 116.878 28.232 us/op
incrementalNativeAdd 0.5 100000 avgt 5 947.076 128.255 us/op
incrementalNativeAdd 0.5 1000000 avgt 5 7190.443 202.012 us/op
incrementalNativeAdd 0.5 10000000 avgt 5 98843.303 4325.924 us/op
incrementalNativeAdd 0.9 10000 avgt 5 101.694 6.579 us/op
incrementalNativeAdd 0.9 100000 avgt 5 816.411 65.678 us/op
incrementalNativeAdd 0.9 1000000 avgt 5 9114.624 412.152 us/op
incrementalNativeAdd 0.9 10000000 avgt 5 108793.694 22562.527 us/op
incrementalUseOrderedWriter 0.1 10000 avgt 5 23.573 5.962 us/op
incrementalUseOrderedWriter 0.1 100000 avgt 5 289.588 36.814 us/op
incrementalUseOrderedWriter 0.1 1000000 avgt 5 2785.659 49.385 us/op
incrementalUseOrderedWriter 0.1 10000000 avgt 5 29489.758 2601.39 us/op
incrementalUseOrderedWriter 0.5 10000 avgt 5 23.57 1.536 us/op
incrementalUseOrderedWriter 0.5 100000 avgt 5 276.488 9.662 us/op
incrementalUseOrderedWriter 0.5 1000000 avgt 5 2799.408 198.77 us/op
incrementalUseOrderedWriter 0.5 10000000 avgt 5 28313.626 1976.042 us/op
incrementalUseOrderedWriter 0.9 10000 avgt 5 22.345 1.574 us/op
incrementalUseOrderedWriter 0.9 100000 avgt 5 280.205 36.987 us/op
incrementalUseOrderedWriter 0.9 1000000 avgt 5 2779.732 93.456 us/op
incrementalUseOrderedWriter 0.9 10000000 avgt 5 30568.591 2140.826 us/op

These benchmarks don’t go far enough to support replacing RoaringBitmap.bitmapOf.

### Unsorted Input Data

In the cases benchmarked, this approach seems to be worthwhile. I can’t actually think of a case where someone would want to build a bitmap from unsorted data, but it occurred to me that this approach might be fast enough to cover the cost of a sort. OrderedWriter is also relaxed enough that it only needs the most significant 16 bits to be monotonic, so a full sort isn’t necessary. Implementing a radix sort on the most significant 16 bits (stable in the least significant 16 bits), prior to incremental insertion via an OrderedWriter, leads to huge increases in performance over RoaringBitmap.bitmapOf. The implementation is as follows:


public static RoaringBitmap bitmapOfUnordered(final int... data) {
    partialRadixSort(data); // stable sort on the most significant 16 bits only
    RoaringBitmap bitmap = new RoaringBitmap();
    OrderedWriter writer = new OrderedWriter(bitmap);
    for (int i : data) {
        writer.add(i);
    }
    writer.flush();
    return bitmap;
}
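The sort itself can be sketched as a single stable counting-sort pass on the high 16 bits; the helper name and this implementation are illustrative, not the library’s exact code.

```java
public final class PartialRadixSort {
    // sorts by the unsigned high 16 bits only, stable in the low 16 bits
    public static void partialRadixSort(int[] data) {
        int[] offsets = new int[(1 << 16) + 1];
        for (int value : data) {
            ++offsets[(value >>> 16) + 1]; // histogram of high-16-bit keys
        }
        for (int i = 1; i < offsets.length; ++i) {
            offsets[i] += offsets[i - 1]; // prefix sum: start offset per key
        }
        int[] sorted = new int[data.length];
        for (int value : data) {
            sorted[offsets[value >>> 16]++] = value; // stable scatter
        }
        System.arraycopy(sorted, 0, data, 0, data.length);
    }
}
```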


It did very well, according to benchmarks, even against various implementations of sort prior to RoaringBitmap.bitmapOf. Lower is better:

Benchmark (randomness) (size) Mode Cnt Score Error Units
bitmapOf 0.1 10000 avgt 5 1058.106 76.013 us/op
bitmapOf 0.1 100000 avgt 5 12323.905 976.68 us/op
bitmapOf 0.1 1000000 avgt 5 171812.526 9593.879 us/op
bitmapOf 0.1 10000000 avgt 5 3376296.157 170362.195 us/op
bitmapOf 0.5 10000 avgt 5 1096.663 477.795 us/op
bitmapOf 0.5 100000 avgt 5 12836.177 1674.54 us/op
bitmapOf 0.5 1000000 avgt 5 171998.126 4176 us/op
bitmapOf 0.5 10000000 avgt 5 3707804.439 974532.361 us/op
bitmapOf 0.9 10000 avgt 5 1124.881 65.673 us/op
bitmapOf 0.9 100000 avgt 5 14585.589 1894.788 us/op
bitmapOf 0.9 1000000 avgt 5 198506.813 8552.218 us/op
bitmapOf 0.9 10000000 avgt 5 3723942.934 423704.363 us/op
bitmapOfUnordered 0.1 10000 avgt 5 174.583 17.475 us/op
bitmapOfUnordered 0.1 100000 avgt 5 1768.613 86.543 us/op
bitmapOfUnordered 0.1 1000000 avgt 5 17889.705 135.714 us/op
bitmapOfUnordered 0.1 10000000 avgt 5 192645.352 6482.726 us/op
bitmapOfUnordered 0.5 10000 avgt 5 157.351 3.254 us/op
bitmapOfUnordered 0.5 100000 avgt 5 1674.919 90.138 us/op
bitmapOfUnordered 0.5 1000000 avgt 5 16900.458 778.999 us/op
bitmapOfUnordered 0.5 10000000 avgt 5 185399.32 4383.485 us/op
bitmapOfUnordered 0.9 10000 avgt 5 145.642 1.257 us/op
bitmapOfUnordered 0.9 100000 avgt 5 1515.845 82.914 us/op
bitmapOfUnordered 0.9 1000000 avgt 5 15807.597 811.048 us/op
bitmapOfUnordered 0.9 10000000 avgt 5 167863.49 3501.132 us/op
partialSortThenBitmapOf 0.1 10000 avgt 5 1060.152 168.802 us/op
partialSortThenBitmapOf 0.1 100000 avgt 5 10942.731 347.583 us/op
partialSortThenBitmapOf 0.1 1000000 avgt 5 100606.506 24705.341 us/op
partialSortThenBitmapOf 0.1 10000000 avgt 5 1035448.545 157383.713 us/op
partialSortThenBitmapOf 0.5 10000 avgt 5 1029.883 100.291 us/op
partialSortThenBitmapOf 0.5 100000 avgt 5 10472.509 832.719 us/op
partialSortThenBitmapOf 0.5 1000000 avgt 5 101144.032 16908.087 us/op
partialSortThenBitmapOf 0.5 10000000 avgt 5 958242.087 39650.946 us/op
partialSortThenBitmapOf 0.9 10000 avgt 5 1008.413 70.999 us/op
partialSortThenBitmapOf 0.9 100000 avgt 5 10458.34 600.416 us/op
partialSortThenBitmapOf 0.9 1000000 avgt 5 103945.644 2026.26 us/op
partialSortThenBitmapOf 0.9 10000000 avgt 5 1065638.269 102257.059 us/op
setupCost 0.1 10000 avgt 5 6.577 0.121 us/op
setupCost 0.1 100000 avgt 5 61.378 24.113 us/op
setupCost 0.1 1000000 avgt 5 1021.588 536.68 us/op
setupCost 0.1 10000000 avgt 5 13182.341 196.773 us/op
setupCost 0.5 10000 avgt 5 7.139 2.216 us/op
setupCost 0.5 100000 avgt 5 60.847 23.395 us/op
setupCost 0.5 1000000 avgt 5 800.888 14.711 us/op
setupCost 0.5 10000000 avgt 5 13431.625 553.44 us/op
setupCost 0.9 10000 avgt 5 6.599 0.09 us/op
setupCost 0.9 100000 avgt 5 60.946 22.511 us/op
setupCost 0.9 1000000 avgt 5 813.445 4.896 us/op
setupCost 0.9 10000000 avgt 5 13374.943 349.314 us/op
sortThenBitmapOf 0.1 10000 avgt 5 636.23 13.423 us/op
sortThenBitmapOf 0.1 100000 avgt 5 7411.756 174.264 us/op
sortThenBitmapOf 0.1 1000000 avgt 5 92299.305 3651.161 us/op
sortThenBitmapOf 0.1 10000000 avgt 5 1096374.443 162575.234 us/op
sortThenBitmapOf 0.5 10000 avgt 5 634.957 47.447 us/op
sortThenBitmapOf 0.5 100000 avgt 5 7939.074 409.328 us/op
sortThenBitmapOf 0.5 1000000 avgt 5 93505.427 5409.749 us/op
sortThenBitmapOf 0.5 10000000 avgt 5 1147933.592 57485.51 us/op
sortThenBitmapOf 0.9 10000 avgt 5 661.072 6.717 us/op
sortThenBitmapOf 0.9 100000 avgt 5 7915.506 356.148 us/op
sortThenBitmapOf 0.9 1000000 avgt 5 93403.343 5454.583 us/op
sortThenBitmapOf 0.9 10000000 avgt 5 1095960.734 85753.917 us/op

It looks like there are good performance gains available here, but these things tend to depend on particular data sets. I would be interested in hearing from anyone who has tried to use this class in a real application.

# Matrix Multiplication Revisited

In a recent post, I took a look at matrix multiplication in pure Java, to see if it could go faster than reported in SIMD Intrinsics on Managed Language Runtimes. I found faster implementations than the paper’s benchmarks implied were possible. Nevertheless, there were some limitations in Hotspot’s autovectoriser that I didn’t expect to see, even in JDK10. Were these limitations somehow fundamental, or can other compilers do better with essentially the same input?

I took a look at the code generated by GCC’s autovectoriser to see what’s possible in C/C++ without resorting to complicated intrinsics. For a bit of fun, I went over to the dark side to squeeze out a lot of extra performance, which inspired a simple vectorised Java implementation that can maintain intensity as matrix size increases.

### Background

The paper reports a 5x improvement in matrix multiplication throughput as a result of using LMS generated intrinsics. Using GCC as LMS’s backend, I easily reproduced very good throughput, but I found two Java implementations better than the paper’s baseline. The best performing Java implementation proposed in the paper was the blocked version below. This post is not about the LMS benchmarks, but this code is this post’s inspiration.


public void blocked(float[] a, float[] b, float[] c, int n) {
    int BLOCK_SIZE = 8;
    // GOOD: attempts to do as much work as possible in submatrices
    // GOOD: tries to avoid bringing data through cache multiple times
    for (int kk = 0; kk < n; kk += BLOCK_SIZE) {
        for (int jj = 0; jj < n; jj += BLOCK_SIZE) {
            for (int i = 0; i < n; i++) {
                for (int j = jj; j < jj + BLOCK_SIZE; ++j) {
                    // BAD: manual unrolling, bypasses optimisations
                    float sum = c[i * n + j];
                    for (int k = kk; k < kk + BLOCK_SIZE; ++k) {
                        // BAD: horizontal sums are inefficient
                        sum += a[i * n + k] * b[k * n + j];
                    }
                    c[i * n + j] = sum;
                }
            }
        }
    }
}


I proposed the following implementation for improved cache efficiency and expected it to vectorise automatically.

public void fast(float[] a, float[] b, float[] c, int n) {
    // GOOD: 2x faster than "blocked" - why?
    int in = 0;
    for (int i = 0; i < n; ++i) {
        int kn = 0;
        for (int k = 0; k < n; ++k) {
            float aik = a[in + k];
            // MIXED: passes over c[in:in+n] multiple times per k-value, "free" if n is small
            // MIXED: reloads b[kn:kn+n] repeatedly for each i, bad if n is large, "free" if n is small
            // BAD: doesn't vectorise but should
            for (int j = 0; j < n; ++j) {
                c[in + j] += aik * b[kn + j]; // sequential writes and reads, cache and vectoriser friendly
            }
            kn += n;
        }
        in += n;
    }
}


My code actually doesn’t vectorise, even in JDK10, which really surprised me because the inner loop vectorises if the offsets are always zero. In any case, there is a simple hack involving the use of buffers, which unfortunately thrashes the cache, but narrows the field significantly.


public void fastBuffered(float[] a, float[] b, float[] c, int n) {
    float[] bBuffer = new float[n];
    float[] cBuffer = new float[n];
    int in = 0;
    for (int i = 0; i < n; ++i) {
        int kn = 0;
        for (int k = 0; k < n; ++k) {
            float aik = a[in + k];
            System.arraycopy(b, kn, bBuffer, 0, n);
            saxpy(n, aik, bBuffer, cBuffer);
            kn += n;
        }
        System.arraycopy(cBuffer, 0, c, in, n);
        Arrays.fill(cBuffer, 0f);
        in += n;
    }
}
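The saxpy kernel called above isn’t shown in the snippet; a minimal version consistent with its usage would be:

```java
public class Saxpy {
    // y[i] += alpha * x[i] for the first n elements:
    // a sequential, cache-friendly kernel that C2 can vectorise
    public static void saxpy(int n, float alpha, float[] x, float[] y) {
        for (int i = 0; i < n; ++i) {
            y[i] += alpha * x[i];
        }
    }
}
```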


I left the problem there, with the “JDKX vectorised” results using the algorithm above with a buffer hack.

### GCC Autovectorisation

The Java code is very easy to translate into C/C++. Before looking at performance, I want to get an idea of what GCC’s autovectoriser does. The code generated at optimisation level 3, with unrolled loops, FMA, and AVX2 enabled, can be produced as follows:

g++ -mavx2 -mfma -march=native -funroll-loops -O3 -S mmul.cpp

The generated assembly code can be seen in full context here. Let’s look at the mmul_saxpy routine first:

static void mmul_saxpy(const int n, const float* left, const float* right, float* result) {
    int in = 0;
    for (int i = 0; i < n; ++i) {
        int kn = 0;
        for (int k = 0; k < n; ++k) {
            float aik = left[in + k];
            for (int j = 0; j < n; ++j) {
                result[in + j] += aik * right[kn + j];
            }
            kn += n;
        }
        in += n;
    }
}


This routine uses SIMD instructions, which means in principle any other compiler could do this too. The inner loop has been unrolled, but this is only by virtue of the -funroll-loops flag. C2 does this sort of thing as standard, but only for hot loops. In general you might not want to unroll loops because of the impact on code size, and it’s great that a JIT compiler can decide only to do this when it’s profitable.

.L9:
    vmovups     (%rdx,%rax), %ymm4
    addl        $8, %r10d
    vfmadd213ps (%rbx,%rax), %ymm3, %ymm4
    vmovaps     %ymm4, (%r11,%rax)
    vmovups     32(%rdx,%rax), %ymm5
    vfmadd213ps 32(%rbx,%rax), %ymm3, %ymm5
    vmovaps     %ymm5, 32(%r11,%rax)
    vmovups     64(%rdx,%rax), %ymm1
    vfmadd213ps 64(%rbx,%rax), %ymm3, %ymm1
    vmovaps     %ymm1, 64(%r11,%rax)
    vmovups     96(%rdx,%rax), %ymm2
    vfmadd213ps 96(%rbx,%rax), %ymm3, %ymm2
    vmovaps     %ymm2, 96(%r11,%rax)
    vmovups     128(%rdx,%rax), %ymm4
    vfmadd213ps 128(%rbx,%rax), %ymm3, %ymm4
    vmovaps     %ymm4, 128(%r11,%rax)
    vmovups     160(%rdx,%rax), %ymm5
    vfmadd213ps 160(%rbx,%rax), %ymm3, %ymm5
    vmovaps     %ymm5, 160(%r11,%rax)
    vmovups     192(%rdx,%rax), %ymm1
    vfmadd213ps 192(%rbx,%rax), %ymm3, %ymm1
    vmovaps     %ymm1, 192(%r11,%rax)
    vmovups     224(%rdx,%rax), %ymm2
    vfmadd213ps 224(%rbx,%rax), %ymm3, %ymm2
    vmovaps     %ymm2, 224(%r11,%rax)
    addq        $256, %rax
    cmpl        %r10d, 24(%rsp)
    ja          .L9


The mmul_blocked routine is compiled to quite convoluted assembly. It has a huge problem with the expression right[k * n + j], which requires a gather and is almost guaranteed to create 8 cache misses per block for large matrices. Moreover, this inefficiency gets much worse with problem size.

static void mmul_blocked(const int n, const float* left, const float* right, float* result) {
    int BLOCK_SIZE = 8;
    for (int kk = 0; kk < n; kk += BLOCK_SIZE) {
        for (int jj = 0; jj < n; jj += BLOCK_SIZE) {
            for (int i = 0; i < n; i++) {
                for (int j = jj; j < jj + BLOCK_SIZE; ++j) {
                    float sum = result[i * n + j];
                    for (int k = kk; k < kk + BLOCK_SIZE; ++k) {
                        sum += left[i * n + k] * right[k * n + j]; // second read here requires a gather
                    }
                    result[i * n + j] = sum;
                }
            }
        }
    }
}


This compiles to assembly with the unrolled vectorised loop below:

.L114:
cmpq  %r10, %r9
setbe  %cl
cmpq  56(%rsp), %r8
setnb  %dl
orl  %ecx, %edx
cmpq  %r14, %r9
setbe  %cl
cmpq  64(%rsp), %r8
setnb  %r15b
orl  %ecx, %r15d
andl  %edx, %r15d
cmpq  %r11, %r9
setbe  %cl
cmpq  48(%rsp), %r8
setnb  %dl
orl  %ecx, %edx
andl  %r15d, %edx
cmpq  %rbx, %r9
setbe  %cl
cmpq  40(%rsp), %r8
setnb  %r15b
orl  %ecx, %r15d
andl  %edx, %r15d
cmpq  %rsi, %r9
setbe  %cl
cmpq  32(%rsp), %r8
setnb  %dl
orl  %ecx, %edx
andl  %r15d, %edx
cmpq  %rdi, %r9
setbe  %cl
cmpq  24(%rsp), %r8
setnb  %r15b
orl  %ecx, %r15d
andl  %edx, %r15d
cmpq  %rbp, %r9
setbe  %cl
cmpq  16(%rsp), %r8
setnb  %dl
orl  %ecx, %edx
andl  %r15d, %edx
cmpq  %r12, %r9
setbe  %cl
cmpq  8(%rsp), %r8
setnb  %r15b
orl  %r15d, %ecx
testb  %cl, %dl
je  .L111
leaq  32(%rax), %rdx
cmpq  %rdx, %r8
setnb  %cl
cmpq  %rax, %r9
setbe  %r15b
orb  %r15b, %cl
je  .L111
vmovups  (%r8), %ymm2
vmovups  %ymm0, (%r8)


### Benchmarks

I implemented a suite of benchmarks to compare the implementations. You can run them, but since they measure throughput and intensity averaged over hundreds of iterations per matrix size, the full run will take several hours.

g++ -mavx2 -mfma -march=native -funroll-loops -O3 mmul.cpp -o mmul.exe && ./mmul.exe > results.csv

The saxpy routine wins, with blocked fading fast after a middling start.

name size throughput (ops/s) flops/cycle
blocked 64 22770.2 4.5916
saxpy 64 25638.4 5.16997
blocked 128 2736.9 4.41515
saxpy 128 4108.52 6.62783
blocked 192 788.132 4.29101
saxpy 192 1262.45 6.87346
blocked 256 291.728 3.76492
saxpy 256 521.515 6.73044
blocked 320 147.979 3.72997
saxpy 320 244.528 6.16362
blocked 384 76.986 3.35322
saxpy 384 150.441 6.55264
blocked 448 50.4686 3.4907
saxpy 448 95.0752 6.57594
blocked 512 30.0085 3.09821
saxpy 512 65.1842 6.72991
blocked 576 22.8301 3.35608
saxpy 576 44.871 6.59614
blocked 640 15.5007 3.12571
saxpy 640 32.3709 6.52757
blocked 704 12.2478 3.28726
saxpy 704 25.3047 6.79166
blocked 768 8.69277 3.02899
saxpy 768 19.8011 6.8997
blocked 832 7.29356 3.23122
saxpy 832 15.3437 6.7976
blocked 896 4.95207 2.74011
saxpy 896 11.9611 6.61836
blocked 960 3.4467 2.34571
saxpy 960 9.25535 6.29888
blocked 1024 2.02289 1.67082
saxpy 1024 6.87039 5.67463

With GCC autovectorisation, saxpy performs well, maintaining intensity as size increases, albeit well below the theoretical capacity. It would be nice if similar code could be JIT compiled in Java.

### Intel Intrinsics

To understand the problem space a bit better, I found out how fast matrix multiplication can get without domain expertise by handcrafting an algorithm with intrinsics. My laptop’s Skylake chip (turbo boost and hyperthreading disabled) is capable of 32 SP flops per cycle per core – Java and the LMS implementation previously fell a long way short of that. It was difficult getting beyond 4f/c with Java, and LMS peaked at almost 6f/c before quickly tailing off. GCC autovectorisation achieved and maintained 7f/c.
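For context, the 32 flops per cycle figure corresponds to two 256-bit FMA ports, assuming a Skylake client core:

```latex
\underbrace{2}_{\text{FMA ports}} \times \underbrace{\tfrac{256}{32}}_{\text{SP lanes per register}} \times \underbrace{2}_{\text{flops per FMA}} = 32 \text{ flops/cycle}
```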

To start, I’ll take full advantage of the facility to align the matrices on 64-byte boundaries, since I have 64B cache lines, though this might just be voodoo. I take the saxpy routine and replace its kernel with intrinsics. Because of the -funroll-loops option, this will get unrolled without effort.

static void mmul_saxpy_avx(const int n, const float* left, const float* right, float* result) {
    int in = 0;
    for (int i = 0; i < n; ++i) {
        int kn = 0;
        for (int k = 0; k < n; ++k) {
            __m256 aik = _mm256_set1_ps(left[in + k]);
            int j = 0;
            for (; j + 7 < n; j += 8) {
                // vertical FMA: result[in + j .. j + 7] += aik * right[kn + j .. j + 7]
                _mm256_store_ps(result + in + j,
                    _mm256_fmadd_ps(aik, _mm256_load_ps(right + kn + j),
                                    _mm256_load_ps(result + in + j)));
            }
            for (; j < n; ++j) { // scalar tail when n is not a multiple of 8
                result[in + j] += left[in + k] * right[kn + j];
            }
            kn += n;
        }
        in += n;
    }
}


This code is actually not a lot faster, if at all, than the basic saxpy above: a lot of aggressive optimisations have already been applied.

### Combining Blocked and SAXPY

What makes blocked so poor is the gather and the cache miss, not the concept of blocking itself. A limiting factor for saxpy performance is that the ratio of loads to floating point operations is too high. With this in mind, I tried combining the blocking idea with saxpy, by implementing saxpy multiplications for smaller sub-matrices. This results in a different algorithm with fewer loads per floating point operation, and the inner two loops are swapped. It avoids the gather and the cache miss in blocked. Because the matrices are in row major format, I make the width of the blocks much larger than the height. Also, different heights and widths make sense depending on the size of the matrix, so I choose them dynamically. The design constraints are to avoid gathers and horizontal reduction.

static void mmul_tiled_avx(const int n, const float *left, const float *right, float *result) {
    const int block_width = n >= 256 ? 512 : 256;
    const int block_height = n >= 512 ? 8 : n >= 256 ? 16 : 32;
    for (int row_offset = 0; row_offset < n; row_offset += block_height) {
        for (int column_offset = 0; column_offset < n; column_offset += block_width) {
            for (int i = 0; i < n; ++i) {
                for (int j = column_offset; j < column_offset + block_width && j < n; j += 8) {
                    __m256 sum = _mm256_load_ps(result + i * n + j);
                    for (int k = row_offset; k < row_offset + block_height && k < n; ++k) {
                        sum = _mm256_fmadd_ps(_mm256_set1_ps(left[i * n + k]),
                                              _mm256_load_ps(right + k * n + j), sum);
                    }
                    _mm256_store_ps(result + i * n + j, sum);
                }
            }
        }
    }
}


You will see in the benchmark results that this routine really doesn’t do very well compared to saxpy. Finally, I unroll it manually, which is profitable despite -funroll-loops because there is slightly more to this than an unroll: the result is a sequence of vertical reductions with no data dependencies between them.

static void mmul_tiled_avx_unrolled(const int n, const float *left, const float *right, float *result) {
    const int block_width = n >= 256 ? 512 : 256;
    const int block_height = n >= 512 ? 8 : n >= 256 ? 16 : 32;
    for (int column_offset = 0; column_offset < n; column_offset += block_width) {
        for (int row_offset = 0; row_offset < n; row_offset += block_height) {
            for (int i = 0; i < n; ++i) {
                for (int j = column_offset; j < column_offset + block_width && j < n; j += 64) {
                    __m256 sum1 = _mm256_load_ps(result + i * n + j);
                    __m256 sum2 = _mm256_load_ps(result + i * n + j + 8);
                    __m256 sum3 = _mm256_load_ps(result + i * n + j + 16);
                    __m256 sum4 = _mm256_load_ps(result + i * n + j + 24);
                    __m256 sum5 = _mm256_load_ps(result + i * n + j + 32);
                    __m256 sum6 = _mm256_load_ps(result + i * n + j + 40);
                    __m256 sum7 = _mm256_load_ps(result + i * n + j + 48);
                    __m256 sum8 = _mm256_load_ps(result + i * n + j + 56);
                    for (int k = row_offset; k < row_offset + block_height && k < n; ++k) {
                        __m256 multiplier = _mm256_set1_ps(left[i * n + k]);
                        sum1 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j), sum1);
                        sum2 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 8), sum2);
                        sum3 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 16), sum3);
                        sum4 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 24), sum4);
                        sum5 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 32), sum5);
                        sum6 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 40), sum6);
                        sum7 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 48), sum7);
                        sum8 = _mm256_fmadd_ps(multiplier, _mm256_load_ps(right + k * n + j + 56), sum8);
                    }
                    _mm256_store_ps(result + i * n + j, sum1);
                    _mm256_store_ps(result + i * n + j + 8, sum2);
                    _mm256_store_ps(result + i * n + j + 16, sum3);
                    _mm256_store_ps(result + i * n + j + 24, sum4);
                    _mm256_store_ps(result + i * n + j + 32, sum5);
                    _mm256_store_ps(result + i * n + j + 40, sum6);
                    _mm256_store_ps(result + i * n + j + 48, sum7);
                    _mm256_store_ps(result + i * n + j + 56, sum8);
                }
            }
        }
    }
}


This final implementation is fast, and is probably as good as I am going to manage without reading papers. This should be a CPU bound problem because the algorithm is O(n^3) whereas the problem size is O(n^2), yet flops/cycle decreases with problem size in all of these implementations. It’s possible that this could be ameliorated by a better dynamic tiling policy, but I’m unlikely to be able to fix that.

It does make a huge difference being able to go very low level – handwritten intrinsics with GCC unlock awesome throughput – but it’s quite hard to actually get to the point where you can beat a good optimising compiler. Mind you, there are harder problems to solve than this, and you may well be a domain expert.

The benchmark results summarise this best:

| name | size | throughput (ops/s) | flops/cycle |
|--------------------|------|--------------------|-------------|
| saxpy_avx | 64 | 49225.7 | 9.92632 |
| tiled_avx | 64 | 33680.5 | 6.79165 |
| tiled_avx_unrolled | 64 | 127936 | 25.7981 |
| saxpy_avx | 128 | 5871.02 | 9.47109 |
| tiled_avx | 128 | 4210.07 | 6.79166 |
| tiled_avx_unrolled | 128 | 15997.6 | 25.8072 |
| saxpy_avx | 192 | 1603.84 | 8.73214 |
| tiled_avx | 192 | 1203.33 | 6.55159 |
| tiled_avx_unrolled | 192 | 4383.09 | 23.8638 |
| saxpy_avx | 256 | 633.595 | 8.17689 |
| tiled_avx | 256 | 626.157 | 8.0809 |
| tiled_avx_unrolled | 256 | 1792.52 | 23.1335 |
| saxpy_avx | 320 | 284.161 | 7.1626 |
| tiled_avx | 320 | 323.197 | 8.14656 |
| tiled_avx_unrolled | 320 | 935.571 | 23.5822 |
| saxpy_avx | 384 | 161.517 | 7.03508 |
| tiled_avx | 384 | 188.215 | 8.19794 |
| tiled_avx_unrolled | 384 | 543.235 | 23.6613 |
| saxpy_avx | 448 | 99.1987 | 6.86115 |
| tiled_avx | 448 | 118.588 | 8.2022 |
| tiled_avx_unrolled | 448 | 314 | 21.718 |
| saxpy_avx | 512 | 70.0296 | 7.23017 |
| tiled_avx | 512 | 73.2019 | 7.55769 |
| tiled_avx_unrolled | 512 | 197.815 | 20.4233 |
| saxpy_avx | 576 | 46.1944 | 6.79068 |
| tiled_avx | 576 | 50.6315 | 7.44294 |
| tiled_avx_unrolled | 576 | 126.045 | 18.5289 |
| saxpy_avx | 640 | 33.8209 | 6.81996 |
| tiled_avx | 640 | 37.0288 | 7.46682 |
| tiled_avx_unrolled | 640 | 92.784 | 18.7098 |
| saxpy_avx | 704 | 24.9096 | 6.68561 |
| tiled_avx | 704 | 27.7543 | 7.44912 |
| tiled_avx_unrolled | 704 | 69.0399 | 18.53 |
| saxpy_avx | 768 | 19.5158 | 6.80027 |
| tiled_avx | 768 | 21.532 | 7.50282 |
| tiled_avx_unrolled | 768 | 54.1763 | 18.8777 |
| saxpy_avx | 832 | 12.8635 | 5.69882 |
| tiled_avx | 832 | 14.6666 | 6.49766 |
| tiled_avx_unrolled | 832 | 37.9592 | 16.8168 |
| saxpy_avx | 896 | 12.0526 | 6.66899 |
| tiled_avx | 896 | 13.3799 | 7.40346 |
| tiled_avx_unrolled | 896 | 34.0838 | 18.8595 |
| saxpy_avx | 960 | 8.97193 | 6.10599 |
| tiled_avx | 960 | 10.1052 | 6.87725 |
| tiled_avx_unrolled | 960 | 21.0263 | 14.3098 |
| saxpy_avx | 1024 | 6.73081 | 5.55935 |
| tiled_avx | 1024 | 7.21214 | 5.9569 |
| tiled_avx_unrolled | 1024 | 12.7768 | 10.5531 |

### Can we do better in Java?

Writing genuinely fast code gives an indication of how little of the processor Java actually utilises, but is it possible to bring this knowledge over to Java? The saxpy based implementations in my previous post performed well for small to medium sized matrices. Once the matrices grow, however, they become too big to be allowed to pass through cache multiple times: we need hot, small cached data to be replenished from the larger matrix. Ideally we wouldn’t need to make any copies, but it seems that the autovectoriser doesn’t like offsets: System.arraycopy is a reasonably fast compromise. The basic sequential read pattern is validated: even native code requiring a gather does not perform well for this problem. The best effort C++ code translates almost verbatim into this Java code, which is quite fast for large matrices.


public void tiled(float[] a, float[] b, float[] c, int n) {
    final int bufferSize = 512;
    final int width = Math.min(n, bufferSize);
    final int height = Math.min(n, n >= 512 ? 8 : n >= 256 ? 16 : 32);
    float[] sum = new float[bufferSize];
    float[] vector = new float[bufferSize];
    for (int rowOffset = 0; rowOffset < n; rowOffset += height) {
        for (int columnOffset = 0; columnOffset < n; columnOffset += width) {
            for (int i = 0; i < n; ++i) {
                for (int j = columnOffset; j < columnOffset + width && j < n; j += width) {
                    int stride = Math.min(n - columnOffset, bufferSize);
                    // copy to give autovectorisation a hint
                    System.arraycopy(c, i * n + j, sum, 0, stride);
                    for (int k = rowOffset; k < rowOffset + height && k < n; ++k) {
                        float multiplier = a[i * n + k];
                        System.arraycopy(b, k * n + j, vector, 0, stride);
                        for (int l = 0; l < stride; ++l) {
                            sum[l] = Math.fma(multiplier, vector[l], sum[l]);
                        }
                    }
                    System.arraycopy(sum, 0, c, i * n + j, stride);
                }
            }
        }
    }
}


Benchmarking it using the same harness used in the previous post, the performance is ~10% higher for large arrays than my previous best effort. Still, the reality is that this is too slow to be useful. If you need to do linear algebra, use C/C++ for the time being!

| Benchmark | Mode | Threads | Samples | Score | Score Error (99.9%) | Unit | Param: size | flops/cycle |
|---|---|---|---|---|---|---|---|---|
| fastBuffered | thrpt | 1 | 10 | 53.331195 | 0.270526 | ops/s | 448 | 3.688688696 |
| fastBuffered | thrpt | 1 | 10 | 34.365765 | 0.16641 | ops/s | 512 | 3.548072999 |
| fastBuffered | thrpt | 1 | 10 | 26.128264 | 0.239719 | ops/s | 576 | 3.840914622 |
| fastBuffered | thrpt | 1 | 10 | 19.044509 | 0.139197 | ops/s | 640 | 3.84031059 |
| fastBuffered | thrpt | 1 | 10 | 14.312154 | 1.045093 | ops/s | 704 | 3.841312378 |
| fastBuffered | thrpt | 1 | 10 | 7.772745 | 0.074598 | ops/s | 768 | 2.708411991 |
| fastBuffered | thrpt | 1 | 10 | 6.276182 | 0.067338 | ops/s | 832 | 2.780495238 |
| fastBuffered | thrpt | 1 | 10 | 4.8784 | 0.067368 | ops/s | 896 | 2.699343067 |
| fastBuffered | thrpt | 1 | 10 | 4.068907 | 0.038677 | ops/s | 960 | 2.769160387 |
| fastBuffered | thrpt | 1 | 10 | 2.568101 | 0.108612 | ops/s | 1024 | 2.121136502 |
| tiled | thrpt | 1 | 10 | 56.495366 | 0.584872 | ops/s | 448 | 3.907540754 |
| tiled | thrpt | 1 | 10 | 30.884954 | 3.221017 | ops/s | 512 | 3.188698735 |
| tiled | thrpt | 1 | 10 | 15.580581 | 0.412654 | ops/s | 576 | 2.290381075 |
| tiled | thrpt | 1 | 10 | 9.178969 | 0.841178 | ops/s | 640 | 1.850932038 |
| tiled | thrpt | 1 | 10 | 12.229763 | 0.350233 | ops/s | 704 | 3.282408783 |
| tiled | thrpt | 1 | 10 | 9.371032 | 0.330742 | ops/s | 768 | 3.265334889 |
| tiled | thrpt | 1 | 10 | 7.727068 | 0.277969 | ops/s | 832 | 3.423271628 |
| tiled | thrpt | 1 | 10 | 6.076451 | 0.30305 | ops/s | 896 | 3.362255222 |
| tiled | thrpt | 1 | 10 | 4.916811 | 0.2823 | ops/s | 960 | 3.346215151 |
| tiled | thrpt | 1 | 10 | 3.722623 | 0.26486 | ops/s | 1024 | 3.074720008 |

# Autovectorised FMA in JDK10

Fused-multiply-add (FMA) allows floating point expressions of the form a * x + b to be evaluated in a single instruction, which is useful for numerical linear algebra. Despite the obvious appeal of FMA, JVM implementors are rather constrained when it comes to floating point arithmetic because Java programs are expected to be reproducible across versions and target architectures. FMA does not produce precisely the same result as the equivalent multiplication and addition instructions (this is caused by the compounding effect of rounding) so its use is a change in semantics rather than an optimisation; the user must opt in. To the best of my knowledge, support for FMA was first proposed in 2000, along with reorderable floating point operations, which would have been activated by a fastfp keyword, but this proposal was withdrawn. In Java 9, the intrinsic Math.fma was introduced to provide access to FMA for the first time.
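The semantic difference is easy to demonstrate with a small example of my own (not from the original proposal): the double nearest to 0.1 is slightly larger than 0.1, and the fused operation preserves that representation error where the separate multiply and add round it away.

```java
public class FmaDifference {
    public static void main(String[] args) {
        // The double nearest to 0.1 overshoots, so the exact product 0.1 * 10.0
        // is 1 + 2^-54; the standalone multiply rounds that back to 1.0.
        double separate = 0.1 * 10.0 - 1.0;       // the error is rounded away: 0.0
        // The fused operation rounds only once, after the addition, so the
        // representation error of 0.1 survives: exactly 2^-54.
        double fused = Math.fma(0.1, 10.0, -1.0);
        System.out.println(separate + " vs " + fused);
    }
}
```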

### DAXPY Benchmark

A good use case to evaluate Math.fma is DAXPY from the Basic Linear Algebra Subprograms (BLAS) library. The code below compiles with JDK 9+.

@State(Scope.Thread)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class DAXPY {

    double s;

    @Setup(Level.Invocation)
    public void init() {
        // randomise the multiplier each invocation so it cannot be constant folded
        s = ThreadLocalRandom.current().nextDouble();
    }

    @Benchmark
    public void daxpyFMA(DoubleData state, Blackhole bh) {
        double[] a = state.data1;
        double[] b = state.data2;
        for (int i = 0; i < a.length; ++i) {
            a[i] = Math.fma(s, b[i], a[i]);
        }
        bh.consume(a);
    }

    @Benchmark
    public void daxpy(DoubleData state, Blackhole bh) {
        double[] a = state.data1;
        double[] b = state.data2;
        for (int i = 0; i < a.length; ++i) {
            a[i] += s * b[i];
        }
        bh.consume(a);
    }
}


Running this benchmark with Java 9, you may wonder why you bothered because the code is actually slower.

| Benchmark | Mode | Threads | Samples | Score | Score Error (99.9%) | Unit | Param: size |
|---|---|---|---|---|---|---|---|
| daxpy | thrpt | 1 | 10 | 25.011242 | 2.259007 | ops/ms | 100000 |
| daxpy | thrpt | 1 | 10 | 0.706180 | 0.046146 | ops/ms | 1000000 |
| daxpyFMA | thrpt | 1 | 10 | 15.334652 | 0.271946 | ops/ms | 100000 |
| daxpyFMA | thrpt | 1 | 10 | 0.623838 | 0.018041 | ops/ms | 1000000 |

This is because using Math.fma disables autovectorisation. Taking a look at PrintAssembly you can see that the naive daxpy routine exploits AVX2, whereas daxpyFMA reverts to scalar usage of SSE2.

// daxpy routine, code taken from main vectorised loop
vmovdqu ymm1,ymmword ptr [r10+rdx*8+10h]
vmulpd  ymm1,ymm1,ymm2
vmovdqu ymmword ptr [r8+rdx*8+10h],ymm1

// daxpyFMA routine
vmovsd  xmm2,qword ptr [rcx+r13*8+10h]
vmovsd  qword ptr [rcx+r13*8+10h],xmm2


Not to worry: this seems to have been fixed in JDK 10, and since Java 10’s release is around the corner, there are early access builds available for all platforms. Rerunning the benchmark, FMA no longer incurs a cost, though it doesn’t bring the performance boost some people might expect. The benefit is reduced floating point error, because the total number of roundings is halved.

| Benchmark | Mode | Threads | Samples | Score | Score Error (99.9%) | Unit | Param: size |
|---|---|---|---|---|---|---|---|
| daxpy | thrpt | 1 | 10 | 2582.363228 | 116.637400 | ops/ms | 1000 |
| daxpy | thrpt | 1 | 10 | 405.904377 | 32.364782 | ops/ms | 10000 |
| daxpy | thrpt | 1 | 10 | 25.210111 | 1.671794 | ops/ms | 100000 |
| daxpy | thrpt | 1 | 10 | 0.608660 | 0.112512 | ops/ms | 1000000 |
| daxpyFMA | thrpt | 1 | 10 | 2650.264580 | 211.342407 | ops/ms | 1000 |
| daxpyFMA | thrpt | 1 | 10 | 389.274693 | 43.567450 | ops/ms | 10000 |
| daxpyFMA | thrpt | 1 | 10 | 24.941172 | 2.393358 | ops/ms | 100000 |
| daxpyFMA | thrpt | 1 | 10 | 0.671310 | 0.158470 | ops/ms | 1000000 |

// vectorised daxpyFMA routine, code taken from main loop (you can still see the old code in pre/post loops)
vmovdqu ymm0,ymmword ptr [r9+r13*8+10h]
vmovdqu ymmword ptr [r9+r13*8+10h],ymm0


# How a Bitmap Index Works

Bitmap indices are used in various data technologies for efficient query processing. At a high level, a bitmap index can be thought of as a physical materialisation of a set of predicates over a data set, is naturally columnar and particularly good for multidimensional boolean query processing. PostgreSQL materialises a bitmap index on the fly from query predicates when there are multiple attributes constrained by a query (for instance in a compound where clause). The filter caches in ElasticSearch and Solr are implemented as bitmap indices on filter predicates over document IDs. Pilosa is a distributed bitmap index query engine built in Go, with a Java client library.

Bitmap indices are not a one-size-fits-all data structure, and in degenerate cases can take up more space than the data itself; using a bitmap index in favour of a B-tree variant on a primary key should be considered an abuse. Various flavours of bitmap implementation exist, but the emerging de facto standard is RoaringBitmap led by Daniel Lemire. RoaringBitmap is so ubiquitous that it is handled as a special case by KryoSerializer – no registration required if you want to use Spark to build indices.

#### Naive Bitmap Index

To introduce the concept, let’s build a naive uncompressed bitmap index. Let’s say you have a data set and some way of assigning an integer index, or record index from now on, to each record (the simplest example would be the line number in a CSV file), and have chosen some attributes to be indexed. For each distinct value of each indexed attribute of your data, compute the set of indices of records matching the predicate. For each attribute, create a map from the attribute values to the sets of corresponding record indices. The format used for the set doesn’t matter yet, but either an int[] or BitSet would be suitable depending on properties of the data and the predicate (cardinality of the data set, sort order of the data set, cardinality of the records matching the predicate, for instance). Using a BitSet to encode the nth record index as the nth bit of the BitSet can be 32x smaller than an int[] in some cases, and can be much larger when there are many distinct values of an attribute, which results in sparse bitmaps.

The tables below demonstrate the data structure. The first table represents a simple data set. The second and third tables represent bitmap indices on the data set, indexing the Country and Sector attributes.

| Record Index | Country | Sector |
|---|---|---|
| 0 | GB | Financials |
| 1 | DE | Manufacturing |
| 2 | FR | Agriculturals |
| 3 | FR | Financials |
| 4 | GB | Energies |

The bitmap index consists of the two tables below:

Country

| Value | Record Indices | Bitmap |
|---|---|---|
| GB | 0,4 | 0b10001 |
| DE | 1 | 0b10 |
| FR | 2,3 | 0b1100 |

Sector

| Value | Record Indices | Bitmap |
|---|---|---|
| Financials | 0,3 | 0b1001 |
| Manufacturing | 1 | 0b10 |
| Agriculturals | 2 | 0b100 |
| Energies | 4 | 0b10000 |

It’s worth noting three patterns in the tables above.

1. The number of bitmaps for an attribute is the attribute’s distinct count.
2. There are typically runs of zeroes or ones, and the lengths of these runs depend on the sort order of the record index attribute.
3. A bitmap index on the record index attribute itself would be as large as the data itself, and a much less concise representation. Bitmap indices do not compete with B-tree indices for primary key attributes.
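The construction described above can be sketched with java.util.BitSet; the method and variable names here are illustrative rather than taken from any library.

```java
import java.util.BitSet;
import java.util.LinkedHashMap;
import java.util.Map;

public class NaiveBitmapIndex {

    // maps each distinct attribute value to the bitmap of record indices having it
    public static Map<String, BitSet> index(String[] column) {
        Map<String, BitSet> index = new LinkedHashMap<>();
        for (int recordIndex = 0; recordIndex < column.length; ++recordIndex) {
            index.computeIfAbsent(column[recordIndex], value -> new BitSet())
                 .set(recordIndex);
        }
        return index;
    }

    public static void main(String[] args) {
        String[] country = {"GB", "DE", "FR", "FR", "GB"};
        // {GB={0, 4}, DE={1}, FR={2, 3}}, matching the Country table above
        System.out.println(index(country));
    }
}
```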

#### Query Evaluation

This simple scheme effectively materialises the result of predicates on the data and is particularly appealing because these predicates can be composed by performing efficient logical operations on the bitmaps. Query evaluation is most efficient when both the number of bitmaps and size of each bitmap are as small as possible. An efficient query plan will touch as few bitmaps as possible, regardless of bitmap size. Here are some examples of queries and their evaluation plans.

##### Single Attribute Union


select *
from records
where country = "GB" or country = "FR"


1. Access the country index and read the bitmaps for values “FR” and “GB”
2. Apply a bitwise logical OR to get a new bitmap
3. Access the data stored by record id with the retrieved indices

##### Multi Attribute Intersection


select *
from records
where country = "GB" and sector = "Energies"


1. Access the country index, and read the bitmap for value “GB”
2. Access the sector index, and read the bitmap for value “Energies”.
3. Apply a bitwise logical AND to get a new bitmap
4. Access the data stored by record id with the retrieved indices

##### Single Attribute Except Clause


select *
from records
where country <> "GB"


1. Access the country index, and read the bitmap for value “GB”
2. Negate the bitmap
3. Access the data stored by record id with the retrieved indices
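Each of the three plans above boils down to one or two BitSet operations. A sketch, assuming the naive index described earlier (one BitSet per attribute value; the method names are mine):

```java
import java.util.BitSet;

public class QueryPlans {

    // country = "GB" or country = "FR": bitwise OR of the two bitmaps
    public static BitSet union(BitSet gb, BitSet fr) {
        BitSet result = (BitSet) gb.clone();
        result.or(fr);
        return result;
    }

    // country = "GB" and sector = "Energies": bitwise AND across attributes
    public static BitSet intersection(BitSet gb, BitSet energies) {
        BitSet result = (BitSet) gb.clone();
        result.and(energies);
        return result;
    }

    // country <> "GB": negation, bounded by the record count
    public static BitSet except(BitSet gb, int recordCount) {
        BitSet result = (BitSet) gb.clone();
        result.flip(0, recordCount);
        return result;
    }
}
```

Note that the negation is bounded by the record count: a BitSet is conceptually infinite, so an unbounded flip has no meaning.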

The index lends itself to aggregate queries (and aggregates on predicates):

##### Count


select country, count(*)
from records
group by country


1. Access the country index
2. Iterate over the keys:
   • Count the bits in the bitmap, storing the count against the key in a map

##### Count with Filter


select country, count(*)
from records
where sector <> "Financials"
group by country


1. Access the sector index and read the bitmap for “Financials”
2. Negate the bitmap, call the negated bitmap without_financials
3. Access the country index
4. Iterate over the keys:
   • Intersect each bitmap with without_financials
   • Count the bits in the resultant bitmap, storing the count against the key in a map
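The plan above translates directly into BitSet operations. A sketch over the example index (method names mine):

```java
import java.util.BitSet;
import java.util.LinkedHashMap;
import java.util.Map;

public class CountWithFilter {

    // select country, count(*) from records where sector <> "Financials" group by country
    public static Map<String, Integer> evaluate(Map<String, BitSet> countryIndex,
                                                BitSet financials, int recordCount) {
        BitSet withoutFinancials = (BitSet) financials.clone();
        withoutFinancials.flip(0, recordCount);          // negate the "Financials" bitmap
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (Map.Entry<String, BitSet> entry : countryIndex.entrySet()) {
            BitSet matches = (BitSet) entry.getValue().clone();
            matches.and(withoutFinancials);              // intersect with without_financials
            counts.put(entry.getKey(), matches.cardinality());
        }
        return counts;
    }
}
```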

The two main factors affecting the performance of query processing are the number of bitmaps that need to be accessed, and the size of each bitmap (which concerns both memory/disk usage and CPU utilisation) – both should be minimised. Choosing the correct encoding for expected queries (one should expect range queries for dates, but equality and set membership queries for enumerations) can reduce the number of bitmap accesses required to evaluate a query; whereas compression can reduce the bitmap sizes.

#### Encoding

Only predicates for equality are efficient with the scheme so far. Suppose there is a well defined sort order for an attribute $a$. In order to evaluate


select *
from records
where a > x and a < y


every bitmap in the range $(x, y)$ would have to be accessed and united. This could easily become a performance bottleneck. The encoding could be adapted for evaluating range predicates. Instead of setting the nth bit if the nth record has $a = y$ (equality encoding), it could be set if the nth record has $a \le y$ (range encoding). In this encoding only one bitmap would need to be accessed to evaluate a predicate like $a \le y$, rather than the $|[a_{min}, y]|$ bitmaps required using the equality encoding. In order to evaluate $a \in [x, y]$, only the bitmaps for $x$ and $y$ are needed. Not much is lost in order to support equality predicates in a range encoding; only the bitmap for the value and its predecessor are required.
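A sketch of range encoding over small integer values (the representation and method names are illustrative): bitmap $v$ has a bit set for every record with value at most $v$, so $a \le y$ is a single lookup, and $a \in [x, y]$ needs only the bitmaps for $y$ and the predecessor of $x$.

```java
import java.util.BitSet;

public class RangeEncodedIndex {

    // bitmaps[v] has bit i set iff record i has value <= v
    public static BitSet[] rangeEncode(int[] column, int maxValue) {
        BitSet[] bitmaps = new BitSet[maxValue + 1];
        for (int v = 0; v <= maxValue; ++v) {
            bitmaps[v] = new BitSet();
        }
        for (int i = 0; i < column.length; ++i) {
            // a record with value x satisfies a <= v for every v >= x
            for (int v = column[i]; v <= maxValue; ++v) {
                bitmaps[v].set(i);
            }
        }
        return bitmaps;
    }

    // a in [x, y]: subtract the bitmap of x's predecessor from the bitmap of y
    public static BitSet between(BitSet[] bitmaps, int x, int y) {
        BitSet result = (BitSet) bitmaps[y].clone();
        if (x > 0) {
            result.andNot(bitmaps[x - 1]);
        }
        return result;
    }
}
```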

#### Compression

The scheme presented so far works as a toy model but the bitmaps are just too large to be practical. For a bitmap index on a single attribute with $m$ distinct values over a data set consisting of $n$ records, the BitSet encoding would consume $mn$ bits ($m$ bitmaps of $n$ bits each), and the int[] encoding would consume $32n$ bits (each record index appears in exactly one of the attribute’s arrays, at 32 bits per index). Therefore, some kind of compression is required to make the approach feasible.

Often in real world data sets, there are attributes with skewed histograms, a phenomenon known as Zipf’s Law. In a typical financial data set exhibiting this property, most trades will be in very few instruments (EURGBP for instance), and there will be very few trades in the rest of the traded instruments. The bitmaps for the values at both the head and tail of these histograms become less random and therefore compressible. At the head, the bits are mostly set; at the tail mostly unset. Compression seeks to exploit this.

One well understood compression strategy is run length encoding. If there are $m$ bits set in a row, followed by $n$ unset bits, followed by another $p$ set bits, the pattern 1…10…01…1 of size $m + n + p$ bits could be replaced by the run lengths $m, n, p$, which is typically a lot smaller (though worse if the runs are very short). Since there are only two possible values, only the runs of set bits need to be represented – it is only necessary to store the start index and length of each run, so the bitmap becomes the set of tuples $\{(0,m), (m+n, p)\}$. Notably, the sort order of the record index with respect to the attribute affects the compression ratio for run length encoding because it can make runs longer or shorter.
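The tuple representation described above is easy to derive from an uncompressed bitmap; a minimal sketch:

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

public class RunLengthEncoding {

    // encodes the set bits of a bitmap as (start index, run length) tuples
    public static List<int[]> runs(BitSet bitmap) {
        List<int[]> runs = new ArrayList<>();
        int start = bitmap.nextSetBit(0);
        while (start != -1) {
            int end = bitmap.nextClearBit(start); // first unset bit after the run
            runs.add(new int[] {start, end - start});
            start = bitmap.nextSetBit(end);
        }
        return runs;
    }
}
```

For the pattern in the text ($m$ set bits, then $n$ unset, then $p$ set) this yields exactly the tuples $\{(0, m), (m+n, p)\}$.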

In reality, run length encoding on bits is not practical since modern processors operate on words not bits. Instead of counting runs of bits, several algorithms count runs of bytes (BBC – Byte-aligned Bitmap Compression) or words (WAH – Word Aligned Hybrid, EWAH – Enhanced Word Aligned Hybrid). These algorithms are faster at the expense of reduced compression. In these schemes compression is improved when there are long runs of clean words (where all the bits in the word are the same), and the compression ratio is degraded when there are many dirty words, which cannot be compressed at all. The number of clean words and hence the compression ratio for a bitmap index depends on the order of the record index attribute. However, an optimal sort order with respect to an index on one attribute will generally be sub-optimal for an index on another.

In order to maintain the attractive boolean query processing capabilities, the OR, AND, XOR, NAND, NOR and NOT operations each need to be redefined to operate on the compressed form of the bitmap, and in the case of algorithms like EWAH these adapted operations are more efficient – more CPU and cache friendly – than the same operations on naive uncompressed bitmaps.

Previously I was ambivalent between the use of BitSet and int[] to encode the sets of record indices (Set&lt;Integer&gt; was not proposed because of the inherent cost of wrapped integers). This is because neither type is appropriate in all cases. An uncompressed BitSet incurs high memory usage for a large data set, even when most of the bits are unset, although this is often compressible at the word level. With very sparse data, where most of the bits are zero, it takes less space to store the record indices in an int[] instead. Choosing dynamically between integers, uncompressed bits, and compressed words is roughly how the RoaringBitmap library optimises performance.

#### Reducing the Number of Bitmaps

Query evaluation performance degrades with the number of bitmaps that need to be accessed. Choosing the right encoding for query patterns and reducing the size of each bitmap are both key for performance and practicality, but it can help save storage space to have fewer bitmaps per attribute. So far each value has been encoded as a single bitmap, and each bitmap has been associated with only one value. The total number of bitmaps can be reduced by applying a factorisation on values with a bitmap per factor, so each bitmap will be associated with several values and each value will be associated with several bitmaps. To this end, mapping values into integers by some means would allow integer arithmetic to be used for index construction. A simple mechanism to map a set of objects to integers would be a dictionary, a more complicated but better mechanism might be a perfect hash function like CHM (an order preserving transformation!) or BZW (which trades order preservation for better compression).

##### Bit Slicing

Supposing a mapping between a set of values and the natural numbers has been chosen and implemented, we can define a basis to factorise each value. The number 571, for example, can be written as either $5*10^2 + 7*10^1 + 1*10^0$ in base-10 or $1*8^3 + 0*8^2 + 7*8^1 + 3*8^0$ in base-8. Base-10 admits more possible coefficient values, whereas base-8 requires more digits. Bit slice indexing is analogous to this arithmetic expansion of integers: each digit position maps to a set, or slice, of bitmaps, and each coefficient value within a position maps to a bitmap.

Mapping a set of objects $S$ into base $n$, where the number of digits is $m \approx \log_n(|S|)$, we can use $mn$ bitmaps to construct the index. The bases do not need to be identical (to work with date buckets we could choose to have four quarters, three months, and 31 days, for example) but if they are, the index is said to be uniform. An example of a uniform base-3 index on currencies is below:

Records

| Record Index | Currency |
|---|---|
| 0 | USD |
| 1 | GBP |
| 2 | GBP |
| 3 | EUR |
| 4 | CHF |

Currency Encoding

| Currency | Code | Base 3 Expansion |
|---|---|---|
| USD | 0 | 0*3 + 0 |
| GBP | 1 | 0*3 + 1 |
| EUR | 2 | 0*3 + 2 |
| CHF | 3 | 1*3 + 0 |

Single Component Currency Index

| Currency | Bitmap |
|---|---|
| USD | 0b1 |
| GBP | 0b110 |
| EUR | 0b1000 |
| CHF | 0b10000 |

Bit Sliced Currency Index

| Power of 3 | Remainder | Bitmap |
|---|---|---|
| 1 | 0 | 0b1111 |
| 1 | 1 | 0b10000 |
| 1 | 2 | 0b0 |
| 0 | 0 | 0b10001 |
| 0 | 1 | 0b110 |
| 0 | 2 | 0b1000 |

Here we have actually used six bitmaps instead of four, but the factorisation comes into its own when more currencies are added. With a 2-component base-3 index, we can use six bitmaps to encode up to nine values.

To evaluate a query, map the value into its integer representation, factorise the number with respect to the bases of the index, and then intersect at most $m$ bitmaps. This is slower than a single bitmap access, but has some useful properties if data is hierarchically bucketed, as well as trading query performance for storage space. To evaluate queries at the top level of bucketing, only one bitmap access is required; at the second level, two bitmap accesses are required, and so on: there is a trade-off between degraded performance for granular values and increased performance for roll-ups.
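The evaluation procedure can be sketched against a slice layout like the currency example above; the two-dimensional array indexed by power then remainder, and the method name, are assumptions for illustration.

```java
import java.util.BitSet;

public class BitSlicedIndex {

    // slices[power][digit]: bit i set iff the encoded value of record i has
    // that digit as its coefficient for that power of the base
    public static BitSet equalTo(BitSet[][] slices, int base, int encodedValue) {
        BitSet result = null;
        for (int power = 0; power < slices.length; ++power) {
            BitSet slice = slices[power][encodedValue % base]; // digit at this power
            if (result == null) {
                result = (BitSet) slice.clone();
            } else {
                result.and(slice); // intersect one bitmap per component
            }
            encodedValue /= base;
        }
        return result;
    }
}
```

Evaluating currency = “GBP” (code 1) on the base-3 example intersects the bitmap for remainder 1 at power 0 with the bitmap for remainder 0 at power 1, recovering records 1 and 2.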