Blocked Signatures

The interesting thing about BitFunnel, the search architecture used by Bing, is its unorthodoxy; it revisits many ideas ignored by contemporary search technologies. The paper’s references go back to the 80s, when one million records was considered an enormous database. Something about this appeals to me, that there might be long forgotten ideas waiting to be reinterpreted and recombined in the context of modern micro-architectures.

A bit-sliced signature arrangement reduces the number of words which must be processed to evaluate a query, but it’s not enough for BitFunnel’s purposes. The last piece of background in the BitFunnel paper is blocked signatures, which are discussed in the 1990 paper A signature file scheme based on multiple organizations for indexing very large text databases by Kent, Sacks-Davis, Ramamohanarao (KS-DR). Blocked signatures further reduce the amount of processing per query, at what can be an acceptable false positive rate. In this post I aim to piece their data structure together in modern Java.

Formulation

The goal is to map documents into blocks consisting of a fixed number of documents (referred to as the blocking factor in the BitFunnel paper) so only bit sliced block signatures need be stored, where a block signature is a bloom filter of the terms in a block of documents. There are a variety of ways of doing this but they all start with assigning an integer label to each document prior to block assignment. This topic is covered at length in KS-DR.

The most obvious technique is to assign contiguous ranges of document IDs to blocks of size N, that is the function i -> Math.floorDiv(i, N). This is only useful if blocks of document signatures are also stored, acting as a top level index into those document signatures. Physically, there is a block index, which is the bit sliced signature of the terms in each block, and separate blocks of document signatures, again bit sliced by term. Queries can be evaluated by constructing a bloom filter from the query’s terms, specifying a set of bit slices in the block index to inspect and intersect. The result of the intersection gives the IDs of the document signature blocks to query. This is like a two level tree, and is better than a signature scan over all the document signatures, but why not just bit slice the document signatures? For rare terms, once a block is located, it does cut down the number of words in each slice to be intersected. However, the real problem with this technique, besides storage, is the cost of a false match at the block level: it results in a document level query, touching N bits, but yields nothing. The BitFunnel blocked signatures generalise this two level hierarchical arrangement for multiple levels.

This post goes on a tangent from BitFunnel here, focusing on the ideas put forward in KS-DR. An alternative is to choose a number M > N coprime to C, an estimate of the capacity of the index, and use the function i -> Math.floorDiv(M * i % C, N) to permute records prior to blocking, then make a copy of the block index for each of several values of M. If you choose, say, two values of M, when evaluating queries, you can map the query terms and get the matching blocks from each representation as before. There is no need for a document level query or storage though. If you have a bitmap of the document IDs (not the signatures) for each block, you can intersect the document bitmaps to get the document IDs matching the query (with false positives, the number of which reduces with the number of copies). In the KS-DR paper, the bitmap whose existence I assume here is actually computed on the fly, via an expensive reverse mapping with the help of a lookup table.
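
To make the permutation concrete, here is a sketch of the block assignment function with illustrative values, assuming the estimated capacity is a power of two so the modulus reduces to a mask:

// Illustrative only: two odd multipliers, both coprime to the capacity, give two
// independent block assignments for the same document ID.
static int blockIndex(int docId, int multiplier, int estimatedCapacity, int blockingFactor) {
    // estimatedCapacity is assumed to be a power of two, so % becomes a bitwise AND
    return ((docId * multiplier) & (estimatedCapacity - 1)) / blockingFactor;
}

// e.g. blockIndex(42, 7919, 1 << 20, 64) == 5196 whereas blockIndex(42, 104729, 1 << 20, 64) == 3192,
// so document 42 lands in different blocks in each copy of the block index.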

Java Implementation

The code is very similar to the bit sliced signature code, because a significant part of querying is a bit sliced lookup of block IDs, which requires storage of a bit matrix. The major difference is the requirement for block assignment and ultimately block intersection. I encapsulate this in a BlockSet which contains Blocks and is responsible for block assignment and intersection.

Details of block creation (blocking factor, bit matrix dimensions, hashing policy) can be hidden behind a supplier interface.


public class BlockFactory<D extends Supplier<Set<T>> & IntSupplier, T, Q extends Set<T>> implements Supplier<Block<D, T, Q>> {

    private final List<ToIntFunction<T>> hashes;
    private final int blockingFactor;
    private final int blockCapacity;

    public BlockFactory(List<ToIntFunction<T>> hashes, int blockingFactor, int blockCapacity) {
        this.hashes = hashes;
        this.blockingFactor = blockingFactor;
        this.blockCapacity = blockCapacity;
    }

    @Override
    public Block<D, T, Q> get() {
        return new Block<>(blockingFactor, blockCapacity, hashes);
    }
}

This gives us a Block, which is really just a wrapper around a bit matrix of terms and a bit set of document IDs. It can do three things:

  1. Index a document; this requires that it knows the blocking factor (the number of documents it can index), the hash functions and the bloom filter size.
  2. Check if the block might contain at least one document matching all the terms
  3. Share its document IDs

The code looks quite similar to my previous bit sliced signature implementation.


public class Block<D extends Supplier<Set<T>> & IntSupplier, T, Q extends Set<T>> {

    private final BitSet documentIds;
    private final long[][] bitMatrix;
    private final int capacity;
    private final List<ToIntFunction<T>> hashFunctions;
    private int docIndex = 0;

    public Block(int blockingFactor, int capacity, List<ToIntFunction<T>> hashFunctions) {
        assert Integer.bitCount(capacity) == 1 && Integer.bitCount(blockingFactor) == 1;
        this.documentIds = new BitSet();
        // one row per bloom filter bit, one long word per 64 documents in the block
        this.bitMatrix = new long[capacity][blockingFactor >> 6];
        this.capacity = capacity;
        this.hashFunctions = hashFunctions;
    }

    public void add(D doc) {
        int docIndex = this.docIndex++;
        int docWordIndex = docIndex >>> 6;
        long docWord = 1L << docIndex;
        mapTerms(doc.get()).forEach(r -> bitMatrix[r][docWordIndex] |= docWord);
        documentIds.set(doc.getAsInt());
    }

    public void contribute(BitSet result) {
        result.or(documentIds);
    }

    public boolean matches(Q query) {
        int[] rows = mapTerms(query).distinct().toArray();
        // check each word of 64 documents for one containing all of the query's rows
        return IntStream.range(0, bitMatrix[0].length)
                        .anyMatch(i -> hasMatch(rows, i));
    }

    private boolean hasMatch(int[] rows, int offset) {
        long word = -1L;
        for (int i = 0; i < rows.length && word != 0; ++i) {
            word &= bitMatrix[rows[i]][offset];
        }
        return word != 0;
    }

    private IntStream mapTerms(Set<T> terms) {
        return terms.stream().flatMapToInt(t -> hashFunctions.stream().mapToInt(f -> mapHash(f.applyAsInt(t))));
    }

    private int mapHash(int hash) {
        return hash & (capacity - 1); // capacity is a power of two, so this is hash mod capacity
    }
}

Now a level up. A BlockIndex has a BlockSet for each relatively prime factor. When evaluating a query, it passes the query to each of its BlockSets, retrieving all blocks which probably match the query. First, the BlockSet, which owns the blocks for a single multiplier:


public class BlockSet<D extends Supplier<Set<T>> & IntSupplier, T, Q extends Set<T>> {

    private final Block[] blocks;
    private final Supplier<Block<D, T, Q>> newBlock;
    private final int blockingFactor;
    private final int estimatedCapacity;
    private final int prime;

    public BlockSet(Supplier<Block<D, T, Q>> newBlock, int blockingFactor, int estimatedCapacity, int prime) {
        assert Integer.bitCount(blockingFactor) == 1 && Integer.bitCount(estimatedCapacity) == 1;
        this.newBlock = newBlock;
        this.blocks = new Block[estimatedCapacity/blockingFactor];
        for (int i = 0; i < blocks.length; ++i) {
            blocks[i] = newBlock.get();
        }
        this.blockingFactor = blockingFactor;
        this.estimatedCapacity = estimatedCapacity;
        this.prime = prime;
    }

    public void add(D doc) {
        int blockIndex = blockIndex(doc.getAsInt());
        Block<D, T, Q> block = (Block<D, T, Q>)blocks[blockIndex];
        block.add(doc);
    }

    public BitSet query(Q query) {
        BitSet result = new BitSet();
        Arrays.stream(blocks)
              .filter(b -> b.matches(query))
              .forEach(b -> b.contribute(result));
        return result;
    }

    private int blockIndex(int value) {
        return ((value * prime) & (estimatedCapacity - 1)) / blockingFactor;
    }
}

The BlockIndex is the tip of the iceberg: it just needs to intersect the bit sets of document IDs produced by its BlockSets.


public class BlockIndex<D extends Supplier<Set<T>> & IntSupplier, T, Q extends Set<T>> {

    private final List<BlockSet<D, T, Q>> blockSets;

    public BlockIndex(List<BlockSet<D, T, Q>> blockSets) {
        this.blockSets = blockSets;
    }

    public IntStream query(Q query) {
        BitSet result = null;
        for (BlockSet<D, T, Q> blockSet : blockSets) {
            BitSet docIds = blockSet.query(query);
            if (null == result) {
                result = docIds;
            } else {
                result.and(docIds);
            }
        }
        return null == result ? IntStream.of() : result.stream();
    }
}

This code is obviously experimental, but a problem with it as it stands is the memory consumed by the temporary bit sets. A better, if less Java 8 friendly, bit set implementation is RoaringBitmap.

Blocked Signatures in BitFunnel

Blocked signatures are a very old idea; naturally, a few innovations are reported in the BitFunnel data structure. BitFunnel uses multiple levels with different blocking factors, each of which must be a proper power of 2, rather than multiple factors coprime to the estimated capacity at the same level. Each level has rank = log(blockingFactor), so the effect is several levels of blocking density. Blocks from different levels can be intersected efficiently by transforming dense blocks to rank zero (the least dense representation) prior to intersection.

Bit-Sliced Signatures and Bloom Filters

While the inverted index is a familiar indexing structure for many with a casual interest in information retrieval, the concept of a document signature may not be. The idea seems to have a long history, having been popular until it lost favour in the late 90s, owing to file sizes and performance empirically inferior to inverted indices. However, there has been new research into the use of signatures for search problems in the last decade, for instance, TopSig, and they are an important building block of the BitFunnel data structure.

Information Retrieval with Signatures

An m bit signature of a document consists of a bit array of fixed length m containing the (not guaranteed to be disjoint) union of the signatures of the terms found in the document. The signature of a term t is the m bit array with the jth bit set for each j = hash_i(t) \mod m, over the k hash functions hash_1, \ldots, hash_k. This is obviously a Bloom filter by another name. In a recent post I noted that a Bloom filter’s behaviour can be implemented efficiently in less than ten lines of Java code.

In an information retrieval setting, a document signature would be computed for each document in a corpus by tokenising each document and building a Bloom filter from the extracted terms. To retrieve the documents matching a given query, the query signature (which is the union of the signatures of the query’s terms) is computed. In what I will call the signature scan phase of the retrieval, the document signatures are iterated over; each document being included if its signature contains all the bits in the query signature. The result set will contain false positives but should also be quite small, so a cheap filter for correctness is executed after the signature scan is complete.
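
To make the scan concrete, here is a minimal sketch, assuming one precomputed long[] signature per document and a query signature of the same width (all names are illustrative):

    static IntStream signatureScan(long[][] documentSignatures, long[] querySignature) {
        return IntStream.range(0, documentSignatures.length)
                        .filter(doc -> contains(documentSignatures[doc], querySignature));
    }

    static boolean contains(long[] documentSignature, long[] querySignature) {
        for (int i = 0; i < querySignature.length; ++i) {
            // every bit set in the query signature must also be set in the document signature
            if ((documentSignature[i] & querySignature[i]) != querySignature[i]) {
                return false;
            }
        }
        return true;
    }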

While the hash function and ordering of the documents could be controlled creatively to cut down the number of signatures to be inspected for a given query, this approach is obviously inefficient, despite the appeal of low level bit-wise instructions. What happens when there are several billion documents in a corpus?

Parallelism and Bit-Slicing

Threads and SIMD instructions could be thrown at the signature scan to speed it up, but one approach to parallelising the retrieval is to increase the number of documents represented by each processed word. If there are n documents with m bit signatures, the signature list is an n * m bit matrix. Its transpose, an m * n bit matrix, is referred to as a bit sliced signature. When querying the bit sliced signature, only the rows specified in the query need be accessed and intersected, and each word represents the presence of a term in up to 64 documents. This is a very old technique – the earliest formulation of a variant of this data structure I could find, where it was referred to as superimposed coding, was published in 1984, but references implementations from the 60s. An accessible evaluation was published in 1994.

Java Implementation

An implementation of such a structure for an immutable corpus is trivial but informative. Typically terms will be strings but needn’t be, whereas documents can be reduced to a set of terms. Using ToIntFunction to abstract hash functions again, we just need to map all term-document tuples into the bits of a long[][]. When querying the structure, we need to map the query’s terms into a sorted sequence of integers, determining which rows of the bit matrix to access.

On the way in, rather than using the hash functions to compute the bit to set (this is constant for each document), the row index is calculated. For each term, for each hash function, the appropriate row index is calculated and the document’s bit is set in the corresponding array. Clean Java would do this outside of the constructor, of course.


public class BitSlicedSignature<D extends Supplier<Set<T>>, T, Q extends Set<T>> {

    private final long[][] bitMatrix;
    private final int width;
    private final int height;
    private final List<ToIntFunction<T>> hashFunctions;

    public BitSlicedSignature(List<D> documents,
                              List<ToIntFunction<T>> hashFunctions,
                              int height) {
        this.hashFunctions = hashFunctions;
        this.width = (documents.size() + 63) / 64;
        this.height = height;
        this.bitMatrix = new long[height][width];
        int docIndex = 0;
        for (D doc : documents) {
            int docWordIndex = docIndex >> 6;
            long docWord = 1L << docIndex;
            for (T term : doc.get()) {
                for (ToIntFunction<T> hash : hashFunctions) {
                    int row = mapHash(hash.applyAsInt(term));
                    bitMatrix[row][docWordIndex] |= docWord;
                }
            }
            ++docIndex;
        }
    }

    private int mapHash(int hash) {
        return Math.abs(hash % height);
    }
}

To query the structure, the query is mapped into row indices and the corresponding rows are intersected word by word; matching document IDs are emitted lazily as an IntStream. The appeal of doing this lazily is that we should expect there to be a lot of documents; this way the bit-wise intersections can be done in chunks as and when the caller wants more documents. This can be achieved with the help of two utility methods:


    public IntStream query(Q query) {
        int[] rows = query.stream()
                          .flatMapToInt(t -> hashFunctions.stream().mapToInt(h -> mapHash(h.applyAsInt(t))))
                          .distinct()
                          .toArray();
        return IntStream.range(0, width).flatMap(i -> bitsOf(intersection(rows, i), i));
    }

    private long intersection(int[] rows, int offset) {
        long word = -1L;
        for (int i = 0; i < rows.length && word != 0; ++i) {
            word &= bitMatrix[rows[i]][offset];
        }
        return word;
    }

    private static IntStream bitsOf(long word, int offset) {
        return IntStream.range(0, Long.SIZE)
                        .filter(i -> (1L << i & word) != 0)
                        .map(i -> Long.SIZE * offset + i);
    }

As you can probably see, you can leave vast swathes of the long[][] untouched, assuming the query is for a small number of terms. A more sophisticated implementation might partition the documents into several bit matrices.

Shortcomings

There are some obvious problems with this data structure. Firstly, a long[][] uses the same amount of memory whether its bits are set or not. What happens when you have some small documents and lots of terms? You have a column in the bit matrix where most of the bits are zero – it’s likely that a compressed bit set would be preferable. Similarly with very common terms you will have long horizontal runs of set bits.

Even worse, what happens when a term is very rare? If you are providing a search service, it’s likely you only ever need to provide a page of results at a time. If the term is rare enough, you may need to scan the entire row to fill a page, which could take a long time. To get around that, BitFunnel uses bit-sliced block signatures, which I will write about in the next post.

Building a Bloom Filter from Scratch

The Bloom filter is an interesting data structure for modelling approximate sets of objects. It can tell you with certainty that an object is not in a collection, but may give false positives. If you write Java for a living, I would not suggest you implement your own, because there is a perfectly good implementation in Guava. It can be illuminating to write one from scratch, however.

Interface

You can do two things with a Bloom filter: put things in it, and check if the filter probably contains items. This leads to the following interface:


public interface BloomFilter<T> {
    void add(T value);

    boolean mightContain(T value);
}

Java Bloom Filter Implementation

A Bloom filter represents a set of objects quite succinctly as a bit array of finite length. Unlike a bitset, the bit array stores hashes of the objects, rather than object identities. In practice, you need multiple hash functions, which, modulo the capacity of the bit array, will collide only with low probability. The more hashes you have, the slower insertions and look ups will be. There is also a space trade off for improved accuracy: the larger the array, the less likely collisions of hashes modulo the capacity will be.

Since you probably want to be able to control the hash functions you use, the interface ToIntFunction fits in nicely as the perfect abstraction. You can set this up simply with a builder.


    public static <T> Builder<T> builder() {
        return new Builder<>();
    }

    public static class Builder<T> {
        private int size;
        private List<ToIntFunction<T>> hashFunctions;

        public Builder<T> withSize(int size) {
            assert Integer.bitCount(size) == 1; 
            this.size = size;
            return this;
        }

        public Builder<T> withHashFunctions(List<ToIntFunction<T>> hashFunctions) {
            this.hashFunctions = hashFunctions;
            return this;
        }

        public BloomFilter<T> build() {
            // one long word per 64 bits of logical size
            return new BloomFilter<>(new long[size >> 6], size, hashFunctions);
        }
    }

    private final long[] array;
    private final int size;
    private final List<ToIntFunction<T>> hashFunctions;

    public BloomFilter(long[] array, int logicalSize, List<ToIntFunction<T>> hashFunctions) {
        this.array = array;
        this.size = logicalSize;
        this.hashFunctions = hashFunctions;
    }

    private int mapHash(int hash) {
        return hash & (size - 1);
    }

Adding Values

To add a value, you need to take an object, and for each hash function hash it modulo the capacity of the bit array. Using a long[] as the substrate of the bit set you must locate the word each hash belongs to, and set the bit corresponding to the remainder of the hash modulo 64. You can do this quickly as follows:


   public void add(T value) {
        for (ToIntFunction<T> function : hashFunctions) {
            int hash = mapHash(function.applyAsInt(value));
            array[hash >>> 6] |= 1L << hash;
        }
    }

Note that each bit may already be set.

Querying the bit set

To check if an object is contained in the bitset, you need to evaluate the hashes again, and find the corresponding words. You can return false if the intersection of the appropriate word and the bit corresponding to the remainder modulo 64 is zero. That looks like this:


    public boolean mightContain(T value) {
        for (ToIntFunction<T> function : hashFunctions) {
            int hash = mapHash(function.applyAsInt(value));
            if ((array[hash >>> 6] & (1L << hash)) == 0) {
                return false;
            }
        }
        return true;
    }

Note that this absolutely does not mean the object is contained within the set, but you can get more precise results if you are willing to perform more hash evaluations, and if you are willing to use a larger bit array. Modeling the precise false positive rate is not clear cut.
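
For intuition, the usual approximation, which assumes independent and uniform hash functions, for the false positive rate after inserting n values into an m bit filter with k hash functions is:

p \approx \left(1 - e^{-kn/m}\right)^k

Increasing m lowers the rate, while increasing k only helps up to the optimum k \approx (m/n)\ln 2.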

BitFunnel

Just as is the case for bitmap indices, bit slicing is a useful enhancement for Bloom filters, forming the basis of BitFunnel. In a follow up post I will share a simplified implementation of a bit sliced Bloom filter.

Confusing Sets and Lists

Optimisation is rarely free: often point improvements incur costs elsewhere. Sometimes we try to optimise away the constraints we observe, without questioning our underlying assumptions. I have often seen the roles of lists and sets confused. An application can be brought to its knees – that is, cease to deliver commercial value – if List.contains is called frequently enough on big enough lists. And then there is the workaround…

When I moved over to Java from C++ several years ago, it seemed utterly crazy that there was even a Collection interface – exactly what Scott Meyers’ Effective STL said not to do. I still think it’s crazy. Sets and lists cannot be substituted for one another, and when you add up the marginal costs, as well as the costs of any compensatory workarounds, confusing them is responsible for a lot of performance bugs. As an application developer, it is part of your job to choose. Here are a few simple examples of when to use each collection type.

Contains

Is an element in the collection?

Never ever do this with a List. This operation is O(n), which means the worst case will touch every element in the list (each element at most once). You have a choice between a HashSet and a TreeSet, and both have costs and benefits.

If you choose a HashSet, your best case is O(1): you evaluate a hash code, take its modulus with respect to the size of an array, and look up a bucket containing only one element. The worst case occurs with a degenerate hash code which maps all elements to the same bucket. This is again O(n): you probe a linked list testing each element for equality. On average you get something between these two cases and it depends on the uniformity of your hash code implementation.

If you choose a TreeSet you get a worst case O(log n): this is effectively just a binary search through the nodes in a red black tree. Performance is limited by the cost of the comparator, and suffers systematically from cache misses for large sets (like any kind of pointer chasing, branch prediction and prefetching is difficult if not impossible).

If you’re working with numbers, and small to medium collections, a sorted primitive array may be the best approach, so long as it fits in cache. If you’re working with integers, you can do this in constant time in the worst case by using a BitSet.
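
For example, a sketch with made up values, showing both approaches without any boxing:

    int[] sorted = {1, 3, 5, 8, 13};
    boolean inArray = Arrays.binarySearch(sorted, 8) >= 0; // O(log n) over a contiguous array

    BitSet present = new BitSet();
    present.set(8);
    boolean inBitSet = present.get(8); // O(1) in the worst case, at the cost of a bit per possible value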

Select

What is the value of the element at a given index with respect to a sort order?

This is an obvious use case for a List: it’s O(1) – this is just a lookup at an array offset.

You couldn’t even write the code with a HashSet without copying the data into an intermediate ordered structure, at which point you would probably think again. You see this sort of thing done in code written by inexpensive programmers at large outsourcing consultancies, who were perhaps just under unreasonable pressure to deliver to arbitrary deadlines.

SortedSet, and anything conceptually similar, is the wrong data structure for this operation. The only way to compute this is O(n): you iterate through the set incrementing a counter until you reach the index, and then return the element you’ve iterated to. If you reach the end of the set, you throw. If you do this a lot, you’ll notice.

Rank

How many predecessors does an element have with respect to a sort order?

Another classic operation for List, so long as you keep it sorted. Use Collections.binarySearch to find the index of the element in the collection with complexity O(log n). This is its rank. If you can get away with it, primitive arrays will be much better here, especially if they are small enough to fit in cache.
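
For example, a small sketch with made up values: for distinct elements the index returned by binarySearch is exactly the number of predecessors.

    List<Integer> sorted = Arrays.asList(10, 20, 30, 40);
    int rank = Collections.binarySearch(sorted, 30); // 2, because 10 and 20 precede it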

Once again, there are creativity points available for the solution involving a HashSet, and they do exist in the wild, but a clean looking solution is at least possible with a SortedSet. However, it involves an iteration with another check against an incrementing counter. It’s O(n) and if you do it a lot, you’ll blow your performance profile, so use a sorted list instead.

What if you had the source code?

Is this fundamental or just a limitation of the Collections framework? Maybe if you had the source code you could just make these data structures optimal for every use case, without having to choose the right one? Not without creating a Frankenstein, and not without a lot of memory. Optimisation isn’t free.

Roaring TreeMap (English Translation)

A presentation of the Roaring TreeMap data structure, translated into English from the French paper Nouveaux modèles d’index bitmap compressés à 64 bits by Samy Chambi, Daniel Lemire and Robert Godin.

Roaring TreeMap

The RoaringTreeMap model combines a Java TreeMap with the Roaring bitmap structure to index a set of 64-bit integers represented by the positions of the set bits of a bitmap. TreeMap is an implementation of a Red-Black Tree structure, a self-balancing binary search tree, and is defined amongst other collections in the java.util package. The various tree operations (insertion, search, etc.) are implemented using the algorithms proposed in  (Cormen et al., 2001).

To represent a 64-bit integer, the model splits the integer into two parts. The first part constitutes the 32 most significant bits, and the second part represents the 32 least significant bits. A node of a RoaringTreeMap is composed of a key, which is a 32-bit integer, and a RoaringBitmap. RoaringTreeMap arranges groups of 64-bit integers containing the same 32 most significant bits into the same node. The key of the node stores the 32 most significant bits for the entire group, and the Roaring bitmap associated with the node contains the 32 least significant bits of each integer. RoaringTreeMap utilises a form of prefix compression on the 32 most significant bits of each integer in a group, which can save up to 32 \times (2^{32}-1) bits for such a group.

An insertion or search operation for a 64 bit integer in a RoaringTreeMap starts by performing a random access in the tree to find a node such that the node’s key is equal to the 32 most significant bits of the integer in question. Using a self-balancing binary search tree allows such an operation to be performed with logarithmic time complexity with respect to the total number of nodes in the RoaringTreeMap. If such a node is found, a second insertion or search operation will be performed on the RoaringBitmap associated with the node, which will also have a logarithmic time complexity with respect to the number of entries in the first level index of the Roaring bitmap and the number of entries contained within the accessed container, given that the container is sorted.
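
A minimal sketch of the splitting, assuming the RoaringBitmap library but not its actual 64-bit classes, and glossing over the unsigned ordering of the keys, might look like this:

import java.util.TreeMap;
import org.roaringbitmap.RoaringBitmap;

public class NaiveRoaringTreeMap {

    // the high 32 bits select a node, the low 32 bits live in that node's bitmap
    private final TreeMap<Integer, RoaringBitmap> tree = new TreeMap<>();

    public void add(long value) {
        tree.computeIfAbsent((int) (value >>> 32), key -> new RoaringBitmap())
            .add((int) value);
    }

    public boolean contains(long value) {
        RoaringBitmap bitmap = tree.get((int) (value >>> 32));
        return bitmap != null && bitmap.contains((int) value);
    }
}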

Another advantage of the RoaringTreeMap comes from the property of self-balancing binary search trees, which guarantees that the nodes of the tree can always be ordered by the values of their keys in a linear time complexity with respect to the number of nodes in the tree. Combined with the ascending sort order of the 32-bit integers maintained within the Roaring bitmaps, this allows the RoaringTreeMap to iterate over the set of 64-bit integers in ascending order in time linearly proportional to the size of the set. This is very effective for the computation of basic set operations between two RoaringTreeMaps, such as: intersection, union, symmetric difference, which can be performed in time linearly proportional to the number of integers contained in the tree. Without this property, such an operation would execute with quadratic time complexity with respect to the number of elements in the two sets.

Union of two RoaringTreeMaps

A union takes two RoaringTreeMaps as input and produces a new RoaringTreeMap as output. The algorithm starts with the allocation of a new empty dynamic array which will be filled with the nodes which form the union. Next, the nodes of the two trees are traversed in the ascending order of their keys. During an iteration, if the two current nodes have keys with different values, the container of the node with the smallest key is copied and inserted into the dynamic array, then the algorithm advances to the position of this node in the tree. Otherwise, if the two nodes compared during an iteration have keys of the same value, a logical OR is computed between the Roaring bitmaps associated with each node, and the result is returned as a new Roaring bitmap. A new node containing the Roaring bitmap obtained and a key of equal value to the two nodes will be inserted into the dynamic array. These operations continue until each node in either tree is consumed. At the end, the nodes inserted in the dynamic array will be sorted in the order of their keys, after which a recursive algorithm constructs the RoaringTreeMap from the dynamic array. The algorithm requires time of O(n1 + n2) to traverse the two input trees, where n1 and n2 represent the number of nodes in the two trees respectively. The construction of the dynamic array requires time of O(n1+n2) because it could require up to O(n1+n2) insertions, and each insertion requires constant amortised time. The construction of the final RoaringTreeMap is implemented with an efficient recursive algorithm which runs in a time of O(n1 + n2). So the total execution time for computing the union consisting of the two algorithms is O(n1 + n2), plus the time necessary to compute the logical OR of the Roaring Bitmaps (Chambi et al., 2014, 2015).
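
A sketch of the union over the naive TreeMap representation above: the paper's algorithm fills a sorted node array and rebuilds the tree recursively, whereas this sketch just merges into a new TreeMap, which is enough to show the key comparisons and the logical ORs.

static TreeMap<Integer, RoaringBitmap> union(TreeMap<Integer, RoaringBitmap> left,
                                             TreeMap<Integer, RoaringBitmap> right) {
    TreeMap<Integer, RoaringBitmap> result = new TreeMap<>();
    // copy every node of the left tree
    left.forEach((key, bitmap) -> result.put(key, bitmap.clone()));
    // merge in the right tree, ORing the bitmaps where the keys collide
    right.forEach((key, bitmap) -> result.merge(key, bitmap.clone(),
            (l, r) -> RoaringBitmap.or(l, r)));
    return result;
}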

Intersection of two RoaringTreeMaps

An intersection takes two RoaringTreeMaps as input and produces a new RoaringTreeMap as output. The algorithm starts by allocating an empty dynamic array  into which the nodes that form the intersection will be written. After, the algorithm iterates over the nodes of the two trees in the ascending order of their keys. In the case where the keys of the nodes differ in value, the algorithm advances to the position of the node with the smallest key. Otherwise, in the case where the two keys have the same value, a logical AND operation will be performed between the Roaring bitmaps associated with each node, which results in a new Roaring bitmap. Then a new node containing the key and the resultant Roaring bitmap is inserted into the next slot in the dynamic array. These operations continue until each node in either tree is consumed. After the iteration is complete, the dynamic array will be sorted in the order of the nodes’ keys. Afterwards a recursive algorithm builds the RoaringTreeMap from the node array. The execution time of the second intersection algorithm depends on the time necessary to traverse the two trees, for populating the dynamic array, and for constructing the tree of the RoaringTreeMap. The total time to complete these operations is of the order of O(n1 + n2) in the worst case, where n1 and n2 represent the number of nodes in the first and second trees respectively. In the best case, the algorithm executes in a time of min(n1, n2), when a traversal of one of the trees is sufficient to arrive at the final result. This does not account for the possible time to compute the logical ANDs between the Roaring bitmaps (Chambi et al., 2014, 2015).
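
In the same style, a sketch of the intersection keeps only the keys present in both trees and drops empty results:

static TreeMap<Integer, RoaringBitmap> intersection(TreeMap<Integer, RoaringBitmap> left,
                                                    TreeMap<Integer, RoaringBitmap> right) {
    TreeMap<Integer, RoaringBitmap> result = new TreeMap<>();
    left.forEach((key, bitmap) -> {
        RoaringBitmap other = right.get(key);
        if (other != null) {
            RoaringBitmap and = RoaringBitmap.and(bitmap, other);
            if (!and.isEmpty()) {
                result.put(key, and);
            }
        }
    });
    return result;
}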

Choosing the Right Radix: Measurement or Mathematics?

I recently wrote a post about radix sorting, and found that for large arrays of unsigned integers a handwritten implementation beats Arrays.sort. However, I paid no attention to the choice of radix and used a default of eight bits. It turns out this was a lucky choice: modifying my benchmark to parametrise the radix, I observed a maximum at one byte, regardless of the size of array.

Is this an algorithmic or technical phenomenon? Is this something that could have been predicted on the back of an envelope without running a benchmark?

Extended Benchmark Results

Size Radix Score Score Error (99.9%) Unit
100 2 135.559923 7.72397 ops/ms
100 4 262.854579 37.372678 ops/ms
100 8 345.038234 30.954927 ops/ms
100 16 7.717496 1.144967 ops/ms
1000 2 13.892142 1.522749 ops/ms
1000 4 27.712719 4.057162 ops/ms
1000 8 52.253497 4.761172 ops/ms
1000 16 7.656033 0.499627 ops/ms
10000 2 1.627354 0.186948 ops/ms
10000 4 3.620869 0.029128 ops/ms
10000 8 6.435789 0.610848 ops/ms
10000 16 3.703248 0.45177 ops/ms
100000 2 0.144575 0.014348 ops/ms
100000 4 0.281837 0.025707 ops/ms
100000 8 0.543274 0.031553 ops/ms
100000 16 0.533998 0.126949 ops/ms
1000000 2 0.011293 0.001429 ops/ms
1000000 4 0.021128 0.003137 ops/ms
1000000 8 0.037376 0.005783 ops/ms
1000000 16 0.031053 0.007987 ops/ms

Modeling

To model the execution time of the algorithm, we can write t = f(r, n), where n \in \mathbb{N} is the length of the input array, and r \in [1, 32) is the size in bits of the radix. We can inspect if the model predicts non-monotonic execution time with a minimum (corresponding to maximal throughput), or if t increases indefinitely as a function of r. If we find a plausible model predicting a minimum, temporarily treating r as continuous, we can solve \frac{\partial f}{\partial r}|_{n=N, r \in [1,32)} = 0 to find the theoretically optimal radix. This pre-supposes we derive a non-monotonic model.

Constructing a Model

We need to write down an equation before we can do any calculus, which requires two dangerous assumptions.

  1. Each operation has the same cost, making the execution time proportional to the number of operations.
  2. The costs of operations do not vary as a function of n or r.

This means all we need to do is find a formula for the number of operations, and then vary n and r. The usual pitfall in this approach relates to the first assumption, in that memory accesses are modelled as uniform cost; memory access can vary widely in cost ranging from registers to RAM on another socket. We are about to fall foul of both assumptions constructing an intuitive model of the algorithm’s loop.


        while (shift < Integer.SIZE) {
            Arrays.fill(histogram, 0);
            for (int i = 0; i < data.length; ++i) {
                ++histogram[((data[i] & mask) >>> shift) + 1];
            }
            for (int i = 0; i < 1 << radix; ++i) {
                histogram[i + 1] += histogram[i];
            }
            for (int i = 0; i < data.length; ++i) {
                copy[histogram[(data[i] & mask) >>> shift]++] = data[i];
            }
            for (int i = 0; i < data.length; ++i) {
                data[i] = copy[i];
            }
            shift += radix;
            mask <<= radix;
        }

The outer loop depends on the choice of radix while the inner loops depend on the size of the array and the choice of radix. There are five obvious aspects to capture:

  • The first inner loop takes time proportional to n
  • The third and fourth inner loops take time proportional to n
  • We can factor the per-element costs of loops 1, 3 and 4 into a constant a
  • The second inner loop takes time proportional to 2^r, modelled by the term b2^r
  • The body of the loop executes 32/r times

This can be summarised as the formula:

f(r, n) = \frac{32(3an + b2^r)}{r}

It was claimed the algorithm had linear complexity in n, and indeed the model only has a linear term in n. Good. The model is not monotonic in r: the 3an/r term decreases as r grows, while the b2^r/r term eventually dominates and increases, so the model does predict a throughput maximising radix. Solving \frac{\partial f}{\partial r} = 0 gives the condition b2^r(r \ln 2 - 1) = 3an, so the predicted optimal radix grows with n, whereas the measured optimum stays at eight bits regardless of array size. There are clearly much more complicated mechanisms at play than can be captured by counting operations.

Sorting Unsigned Integers Faster in Java

I discovered a curious resource for audio-visualising sort algorithms, which is exciting for two reasons. The first is that I finally feel like I understand Alexander Scriabin: he was not a composer. He discovered Tim Sort 80 years before Tim Peters and called it Black Mass. (If you aren’t familiar with the piece, fast-forward to 1:40 to hear the congruence.)

The second reason was that I noticed Radix Sort (LSD). While it was an affront to my senses, it used a mere 800 array accesses and no comparisons! I was unaware of this algorithm so delved deeper and implemented it for integers, and benchmarked my code against Arrays.sort.

Radix Sort

It is taken as given by many (myself included, or am I just projecting my thoughts on to others?) that O(n \log n) is the best you can do in a sort algorithm. But this is actually only true for sort algorithms which depend on comparison. If you can afford to restrict the data types your sort algorithm supports to types with a positional interpretation (java.util can’t because it needs to be ubiquitous and maintainable), you can get away with a linear time algorithm.

Radix sort, along with the closely related counting sort, does not use comparisons. Instead, the data is interpreted as a fixed length string of symbols. For each position, the cumulative histogram of symbols is computed to calculate sort indices. While the data needs to be scanned several times, the algorithm scales linearly and the overhead of the multiple scans is amortised for large arrays.

As you can see on Wikipedia, there are two kinds of radix sort: Least Significant Digit and Most Significant Digit. This dichotomy relates to the order the (representational) string of symbols is traversed in. I implemented and benchmarked the LSD version for integers.

Implementation

The implementation interprets an integer as the concatenation of n bit string symbols of fixed size 32/n. It performs n passes over the array, starting with the least significant bits, which it modifies in place. For each pass the data is scanned three times, in order to:

  1. Compute the cumulative histogram over the symbols in their natural sort order
  2. Copy the value with symbol k to the mth position in a buffer, where m is defined by the cumulative density of k.
  3. Copy the buffer back into the original array

The implementation, which won’t work unless the chunks are proper divisors of 32, is below. The bonus (or caveat) is that it automatically supports unsigned integers. The code could be modified slightly to work with signed integers at a performance cost.


import java.util.Arrays;

public class RadixSort {

    private final int radix;

    public RadixSort() {
        this(Byte.SIZE);
    }

    public RadixSort(int radix) {
        assert 32 % radix == 0;
        this.radix = radix;
    }

    public void sort(int[] data) {
        int[] histogram = new int[(1 << radix) + 1];
        int shift = 0;
        int mask = (1 << radix) - 1;
        int[] copy = new int[data.length];
        while (shift < Integer.SIZE) {
            Arrays.fill(histogram, 0);
            for (int i = 0; i < data.length; ++i) {
                ++histogram[((data[i] & mask) >>> shift) + 1];
            }
            for (int i = 0; i < 1 << radix; ++i) {
                histogram[i + 1] += histogram[i];
            }
            for (int i = 0; i < data.length; ++i) {
                copy[histogram[(data[i] & mask) >>> shift]++] = data[i];
            }
            for (int i = 0; i < data.length; ++i) {
                data[i] = copy[i];
            }
            shift += radix;
            mask <<= radix;
        }
    }
}

The time complexity is obviously linear and, although a temporary buffer is allocated, in comparison to Arrays.sort the implementation looks fairly spartan. Instinctively, cache locality looks fairly poor because the second inner loop of the three jumps all over the place. Will this implementation beat Arrays.sort (for integers)?

Benchmark

The algorithm is measured using arrays of random positive integers, for which both algorithms are equivalent, from a range of sizes. This isn’t always the best idea (the Tim Sort algorithm comes into its own on nearly sorted data), so take the result below with a pinch of salt. Care must be taken to copy the array in the benchmark since both algorithms are in-place.


public void launchBenchmark(String... jvmArgs) throws Exception {
        Options opt = new OptionsBuilder()
                .include(this.getClass().getName() + ".*")
                .mode(Mode.SampleTime)
                .mode(Mode.Throughput)
                .timeUnit(TimeUnit.MILLISECONDS)
                .measurementTime(TimeValue.seconds(10))
                .warmupIterations(10)
                .measurementIterations(10)
                .forks(1)
                .shouldFailOnError(true)
                .shouldDoGC(true)
                .jvmArgs(jvmArgs)
                .resultFormat(ResultFormatType.CSV) 
                .build();

        new Runner(opt).run();
    }

    @Benchmark
    public void Arrays_Sort(Data data, Blackhole bh) {
        int[] array = Arrays.copyOf(data.data, data.size);
        Arrays.sort(array);
        bh.consume(array);
    }

    @Benchmark
    public void Radix_Sort(Data data, Blackhole bh) {
        int[] array = Arrays.copyOf(data.data, data.size);
        data.radixSort.sort(array);
        bh.consume(array);
    }

    @State(Scope.Thread)
    public static class Data {

        @Param({"100", "1000", "10000", "100000", "1000000"})
        int size;

        int[] data;
        RadixSort radixSort = new RadixSort();

        @Setup(Level.Trial)
        public void init() {
            data = createArray(size);
        }
    }

    public static int[] createArray(int size) {
        int[] array = new int[size];
        Random random = new Random(0);
        for (int i = 0; i < size; ++i) {
            array[i] = Math.abs(random.nextInt());
        }
        return array;
    }

Benchmark Mode Threads Samples Score Score Error (99.9%) Unit Param: size
Arrays_Sort thrpt 1 10 1304.687189 147.380334 ops/ms 100
Arrays_Sort thrpt 1 10 78.518664 9.339994 ops/ms 1000
Arrays_Sort thrpt 1 10 1.700208 0.091836 ops/ms 10000
Arrays_Sort thrpt 1 10 0.133835 0.007146 ops/ms 100000
Arrays_Sort thrpt 1 10 0.010560 0.000409 ops/ms 1000000
Radix_Sort thrpt 1 10 404.807772 24.930898 ops/ms 100
Radix_Sort thrpt 1 10 51.787409 4.881181 ops/ms 1000
Radix_Sort thrpt 1 10 6.065590 0.576709 ops/ms 10000
Radix_Sort thrpt 1 10 0.620338 0.068776 ops/ms 100000
Radix_Sort thrpt 1 10 0.043098 0.004481 ops/ms 1000000
Arrays_Sort sample 1 3088586 0.000902 0.000018 ms/op 100
Arrays_Sort·p0.00 sample 1 1 0.000394 ms/op 100
Arrays_Sort·p0.50 sample 1 1 0.000790 ms/op 100
Arrays_Sort·p0.90 sample 1 1 0.000791 ms/op 100
Arrays_Sort·p0.95 sample 1 1 0.001186 ms/op 100
Arrays_Sort·p0.99 sample 1 1 0.001974 ms/op 100
Arrays_Sort·p0.999 sample 1 1 0.020128 ms/op 100
Arrays_Sort·p0.9999 sample 1 1 0.084096 ms/op 100
Arrays_Sort·p1.00 sample 1 1 4.096000 ms/op 100
Arrays_Sort sample 1 2127794 0.011876 0.000037 ms/op 1000
Arrays_Sort·p0.00 sample 1 1 0.007896 ms/op 1000
Arrays_Sort·p0.50 sample 1 1 0.009872 ms/op 1000
Arrays_Sort·p0.90 sample 1 1 0.015408 ms/op 1000
Arrays_Sort·p0.95 sample 1 1 0.024096 ms/op 1000
Arrays_Sort·p0.99 sample 1 1 0.033920 ms/op 1000
Arrays_Sort·p0.999 sample 1 1 0.061568 ms/op 1000
Arrays_Sort·p0.9999 sample 1 1 0.894976 ms/op 1000
Arrays_Sort·p1.00 sample 1 1 4.448256 ms/op 1000
Arrays_Sort sample 1 168991 0.591169 0.001671 ms/op 10000
Arrays_Sort·p0.00 sample 1 1 0.483840 ms/op 10000
Arrays_Sort·p0.50 sample 1 1 0.563200 ms/op 10000
Arrays_Sort·p0.90 sample 1 1 0.707584 ms/op 10000
Arrays_Sort·p0.95 sample 1 1 0.766976 ms/op 10000
Arrays_Sort·p0.99 sample 1 1 0.942080 ms/op 10000
Arrays_Sort·p0.999 sample 1 1 2.058273 ms/op 10000
Arrays_Sort·p0.9999 sample 1 1 7.526102 ms/op 10000
Arrays_Sort·p1.00 sample 1 1 46.333952 ms/op 10000
Arrays_Sort sample 1 13027 7.670135 0.021512 ms/op 100000
Arrays_Sort·p0.00 sample 1 1 6.356992 ms/op 100000
Arrays_Sort·p0.50 sample 1 1 7.634944 ms/op 100000
Arrays_Sort·p0.90 sample 1 1 8.454144 ms/op 100000
Arrays_Sort·p0.95 sample 1 1 8.742502 ms/op 100000
Arrays_Sort·p0.99 sample 1 1 9.666560 ms/op 100000
Arrays_Sort·p0.999 sample 1 1 12.916883 ms/op 100000
Arrays_Sort·p0.9999 sample 1 1 28.037900 ms/op 100000
Arrays_Sort·p1.00 sample 1 1 28.573696 ms/op 100000
Arrays_Sort sample 1 1042 96.278673 0.603645 ms/op 1000000
Arrays_Sort·p0.00 sample 1 1 86.114304 ms/op 1000000
Arrays_Sort·p0.50 sample 1 1 94.896128 ms/op 1000000
Arrays_Sort·p0.90 sample 1 1 104.293990 ms/op 1000000
Arrays_Sort·p0.95 sample 1 1 106.430464 ms/op 1000000
Arrays_Sort·p0.99 sample 1 1 111.223767 ms/op 1000000
Arrays_Sort·p0.999 sample 1 1 134.172770 ms/op 1000000
Arrays_Sort·p0.9999 sample 1 1 134.742016 ms/op 1000000
Arrays_Sort·p1.00 sample 1 1 134.742016 ms/op 1000000
Radix_Sort sample 1 2240042 0.002941 0.000033 ms/op 100
Radix_Sort·p0.00 sample 1 1 0.001578 ms/op 100
Radix_Sort·p0.50 sample 1 1 0.002368 ms/op 100
Radix_Sort·p0.90 sample 1 1 0.003556 ms/op 100
Radix_Sort·p0.95 sample 1 1 0.004344 ms/op 100
Radix_Sort·p0.99 sample 1 1 0.011056 ms/op 100
Radix_Sort·p0.999 sample 1 1 0.027232 ms/op 100
Radix_Sort·p0.9999 sample 1 1 0.731127 ms/op 100
Radix_Sort·p1.00 sample 1 1 5.660672 ms/op 100
Radix_Sort sample 1 2695825 0.018553 0.000038 ms/op 1000
Radix_Sort·p0.00 sample 1 1 0.013424 ms/op 1000
Radix_Sort·p0.50 sample 1 1 0.016576 ms/op 1000
Radix_Sort·p0.90 sample 1 1 0.025280 ms/op 1000
Radix_Sort·p0.95 sample 1 1 0.031200 ms/op 1000
Radix_Sort·p0.99 sample 1 1 0.050944 ms/op 1000
Radix_Sort·p0.999 sample 1 1 0.082944 ms/op 1000
Radix_Sort·p0.9999 sample 1 1 0.830295 ms/op 1000
Radix_Sort·p1.00 sample 1 1 6.660096 ms/op 1000
Radix_Sort sample 1 685589 0.145695 0.000234 ms/op 10000
Radix_Sort·p0.00 sample 1 1 0.112512 ms/op 10000
Radix_Sort·p0.50 sample 1 1 0.128000 ms/op 10000
Radix_Sort·p0.90 sample 1 1 0.196608 ms/op 10000
Radix_Sort·p0.95 sample 1 1 0.225792 ms/op 10000
Radix_Sort·p0.99 sample 1 1 0.309248 ms/op 10000
Radix_Sort·p0.999 sample 1 1 0.805888 ms/op 10000
Radix_Sort·p0.9999 sample 1 1 1.818141 ms/op 10000
Radix_Sort·p1.00 sample 1 1 14.401536 ms/op 10000
Radix_Sort sample 1 60843 1.641961 0.005783 ms/op 100000
Radix_Sort·p0.00 sample 1 1 1.251328 ms/op 100000
Radix_Sort·p0.50 sample 1 1 1.542144 ms/op 100000
Radix_Sort·p0.90 sample 1 1 2.002944 ms/op 100000
Radix_Sort·p0.95 sample 1 1 2.375680 ms/op 100000
Radix_Sort·p0.99 sample 1 1 3.447030 ms/op 100000
Radix_Sort·p0.999 sample 1 1 5.719294 ms/op 100000
Radix_Sort·p0.9999 sample 1 1 8.724165 ms/op 100000
Radix_Sort·p1.00 sample 1 1 13.074432 ms/op 100000
Radix_Sort sample 1 4846 20.640787 0.260926 ms/op 1000000
Radix_Sort·p0.00 sample 1 1 14.893056 ms/op 1000000
Radix_Sort·p0.50 sample 1 1 18.743296 ms/op 1000000
Radix_Sort·p0.90 sample 1 1 26.673152 ms/op 1000000
Radix_Sort·p0.95 sample 1 1 30.724915 ms/op 1000000
Radix_Sort·p0.99 sample 1 1 40.470446 ms/op 1000000
Radix_Sort·p0.999 sample 1 1 63.016600 ms/op 1000000
Radix_Sort·p0.9999 sample 1 1 136.052736 ms/op 1000000
Radix_Sort·p1.00 sample 1 1 136.052736 ms/op 1000000

The table tells an interesting story. Arrays.sort is vastly superior for small arrays (the arrays most people have), but for large arrays the custom implementation comes into its own. Interestingly, this is consistent with the computer science. If you need to sort large arrays of (unsigned) integers and care about performance, think about implementing radix sort.

Microsecond Latency Rules Engine with RoaringBitmap

Implementing a rules engine can shorten development time and remove a lot of tedious if statements from your business logic. Unfortunately they are almost always slow and often bloated. Simple rules engines can be implemented by assigning integer salience to each line in a truth table, with rule resolution treated as an iterative intersection of ordered sets of integers. Implemented in terms of sorted sets, it would be remiss not to consider RoaringBitmap for the engine’s core. The code is at github.

Classification Table and Syntax

This rules engine builds on the simple idea of a truth table usually used to teach predicate logic and computer hardware. Starting with a table and some attributes, interpreting one attribute as a classification, we get a list of rules. It is trivial to load such a table from a database. Since classifications can overlap, we prioritise by putting the rules we care about most – or the most salient rules – at the top of the table. When multiple rules match a fact, we take the first in the set ordered by salience. So that we don’t always have to specify all of the attributes to get a classification, we can rank attributes by their importance left to right, where it’s required that all attributes to the left of a specified attribute are also specified when matching a fact against a rule set.

It’s possible to define rules containing wildcards. Wildcard rules will match any query (warning: if these are marked as high salience they will hide more specific rules with lower salience). It’s also possible to specify a prefix with a wildcard, which will match any query that matches at least the prefix.

Below is an example table consisting of rules for classification of regional English accents by phonetic feature.

English Accent Rules

thought cloth lot palm plant bath trap accent
/ɔ/ /ɒ/ /ɑ/ /ɑː/ /ɑː/ /ɑː/ /æ/ Received Pronunciation (UK)
/ɔ/ /ɔ/ /ɑ/ /ɑ/ /æ/ /æ/ /æ/ Georgian (US)
/ɑ/ /ɑ/ /ɑ/ /ɑ/ /æ/ /æ/ /æ/ Canadian
* * /ɑ/ /ɑ/ /æ/ /æ/ /æ/ North American
* * * * * * /æ/ Non Native
* * * * * * * French


In the example above, the vowel sounds used in words differentiating speakers of several English accents are configured as a classification table. The accent column is the classification of any speaker exhibiting the properties specified in the six leftmost columns. UK Received Pronunciation is the most specific rule and has high salience, whereas various North American accents differ from RP in their use of short A vowels. A catch all for North American accents would wild card the sounds in thought and caught (contrast Boston pronunciations with Texas). So long as trap has been pronounced with a short A (which all English speakers do), and no other rule would recognise the sounds used in the first six words, the rule engine would conclude the speaker is using English as a second language. If not even the word trap is recognisable, then the speaker is probably unintelligible, or could be French.

Implementation

A rule with a given salience can be represented by creating a bitmap index on salience by the attribute values of the rules. For instance, to store the rule {foo, bar} -> 42, with salience 10, create a bitmap index on the first attribute of the rule, and set the 10th bit of the “foo” bitmap; likewise for the “bar” bitmap of the second index. Finding rules which match both attributes is a bitwise intersection, and since we rank by salience, the rule that wins is the first in the set. An obvious choice for fast ordered sets is RoaringBitmap.
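
As a rough illustration of the salience idea, here is a simplified sketch which uses one whole RoaringBitmap per attribute value and ignores wildcards and prefixes entirely; the real engine uses a single container per node of a trie, as described below, and all the names here are illustrative.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.roaringbitmap.RoaringBitmap;

public class TinyRulesEngine {

    // one "bitmap index" per attribute position: attribute value -> bitmap of rule saliences
    private final List<Map<String, RoaringBitmap>> indices = new ArrayList<>();

    public TinyRulesEngine(int attributeCount) {
        for (int i = 0; i < attributeCount; ++i) {
            indices.add(new HashMap<>());
        }
    }

    public void addRule(int salience, String... attributeValues) {
        for (int i = 0; i < attributeValues.length; ++i) {
            indices.get(i).computeIfAbsent(attributeValues[i], value -> new RoaringBitmap()).add(salience);
        }
    }

    // returns the salience of the best matching rule, or -1 if no rule matches
    public int match(String... attributeValues) {
        RoaringBitmap matching = null;
        for (int i = 0; i < attributeValues.length; ++i) {
            RoaringBitmap bitmap = indices.get(i).get(attributeValues[i]);
            if (bitmap == null) {
                return -1; // no rule mentions this value for this attribute
            }
            matching = matching == null ? bitmap.clone() : RoaringBitmap.and(matching, bitmap);
        }
        // lower salience = nearer the top of the table = higher priority
        return matching == null || matching.isEmpty() ? -1 : matching.first();
    }
}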

RoaringBitmap consists of containers, which are fast, cache-friendly sorted sets of integers, and can contain up to 2^16 shorts. In RoaringBitmap, containers are indexed by keys consisting of the most significant 16 bits of the integer. For a rules engine, if you have more than 2^16 rules you have a much bigger problem anyway, so a container could index all the rules you could ever need, so RoaringBitmap itself would be overkill. While RoaringBitmap indexes containers by shorts (it does so for the sake of compression), we can implement wildcard and prefix matching by associating containers with Strings rather than shorts. As the core data structure of the rules engine, a RoaringBitmap container is placed at each node of an Apache commons PatriciaTrie. It’s really that simple – see the source at github.

When the rules engine is queried, a set consisting of all the rules that match is intersected with the container found at the node in the trie matching the value specified for each attribute. When more than one rule matches, the rule with the highest salience is accessed via the Container.first() method, one of the features I have contributed to RoaringBitmap. See example usage at github.


A Quick Look at RoaringBitmap

This article is an introduction to the data structures found in the RoaringBitmap library, which I have been making extensive use of recently. I wrote some time ago about the basic idea of bitmap indices, which are used in various databases and search engines, with the caveat that no traditional implementation is optimal across all data scenarios (in terms of size of the data set, sparsity, cardinalities of attributes and global sort orders of data sets with respect to specific attributes). RoaringBitmap is a dynamic data structure which aims to be that one-size-fits-all solution across all scenarios.

Containers

A RoaringBitmap should be thought of as a set of unsigned integers, consisting of containers which cover disjoint subsets. Each subset can contain values from a range of size 2^16, and the subset is indexed by a 16 bit key. This means that in the worst case it only takes 16 bits to represent a single 32 bit value, so unsigned 32 bit integers can be stored as Java shorts. The choice of container size also means that in the worst case, the container will still fit in L1 cache on a modern processor.
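
Concretely, splitting a 32 bit value into a container key and a 16 bit payload is just a shift and a cast; this is a sketch of the idea rather than the library's code:

    int value = 0xCAFEBABE;
    short key = (short) (value >>> 16); // selects the container: 0xCAFE
    short low = (short) value;          // the value stored within that container: 0xBABE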

The implementation of the container covering a disjoint subset is free to vary between RunContainer, BitmapContainer and ArrayContainer, depending entirely on properties of the subset. When inserting data into a RoaringBitmap, it is decided whether to create a new container, or to mutate an existing container, depending on whether the values fit in the range covered by the container’s key. When performing a set operation, for instance by intersecting two bitmaps or computing their symmetric difference, a new RoaringBitmap is created by performing operations container by container, and it is decided dynamically which container implementation is best suited for the result. For cases where it is too difficult to determine the best implementation automatically, the method runOptimize is available to the programmer to make sure.
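
A small usage sketch: after bulk loading, runOptimize gives the library a chance to switch eligible containers to run length encoding.

    RoaringBitmap bitmap = new RoaringBitmap();
    bitmap.add(0L, 1_000_000L); // one long run of set bits: [0, 1000000)
    bitmap.runOptimize();       // may convert eligible containers to RunContainer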

When querying a RoaringBitmap, the query can be executed container by container (which incidentally makes the query naturally parallelisable, but it hasn’t been done yet), and each pair from the cartesian product of combinations of container implementations must be implemented separately. This is manageable because there are only three implementations, and there won’t be any more. There is less work to do for symmetric operations, such as union and intersection, than with asymmetric operations such as contains.

RunContainer

When there are lots of clean words in a section of a bitmap, the best choice of container is run length encoding. The implementation of RunContainer is simple and very compact. It consists of an array of shorts (not ints, the most significant 16 bits are in the key) where the values at the even indices are the starts of runs, and the values at the odd indices are the lengths of the respective runs. Membership queries can be implemented simply using a binary search, and quantile queries can be implemented in constant time. Computing container cardinality requires a pass over the entire run array.

ArrayContainer

When data is sparse within a section of the bitmap, the best implementation is an array (short[]).  For very sparse data, this isn’t theoretically optimal, but for most cases it is very good and the array for the container will fit in L1 cache for mechanical sympathy. Cardinality is very fast because it is precomputed, and operations would be fast in spite of their precise implementation by virtue of the small size of the set (that being said, the actual implementations are fast). Often when creating a new container, it is necessary to convert to a bitmap for better compression as the container fills up.

BitmapContainer

BitmapContainer is the classic implementation of a bitset. There is a fixed length long[] which should be interpreted bitwise, along with a precomputed cardinality. Operations on BitmapContainers tend to be very fast, despite typically touching every word in the array, because the array fits in L1 cache and the code makes extensive use of methods the JVM treats as intrinsics: run them on a reasonably modern processor and they quickly get compiled down, sometimes to a single instruction. A much hackneyed example, explained better by Michael Barker quite some time ago, is Long.bitCount, which translates to the single instruction popcnt and has various uses when operating on BitmapContainers. When intersecting with another container, the cardinality can only decrease or remain the same, so there is a chance an ArrayContainer will be produced.
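
Cardinality is a good example of where this matters; the cardinality of a bitmap is just a sum of popcnts over its words (a sketch, assuming the long[] layout described above):

static int cardinality(long[] words) {
  int count = 0;
  for (long word : words) {
    count += Long.bitCount(word);   // compiled to a single popcnt instruction by the JIT
  }
  return count;
}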

Examples

There is a really nice Scala project on github which functions as a DSL for creating RoaringBitmaps – it allows you to create an equality encoded (see my previous bitmap index post) RoaringBitmap in a very fluid way. The project is here.

I have implemented bit slice indices, both equality and range encoded, in a data quality tool I am building. That project is hosted here. Below is an implementation of a range encoded bit slice index as an example of how to work with RoaringBitmaps.


// Requires the RoaringBitmap library. ProjectionIndex, BitSlices, accumulateBasis and expand
// are helpers from the data quality project linked above.
import org.roaringbitmap.FastAggregation;
import org.roaringbitmap.RoaringBitmap;

public class RangeEncodedOptBitSliceIndex implements RoaringIndex {

  private final int[] basis;
  private final int[] cumulativeBasis;
  private final RoaringBitmap[][] bitslice;

  public RangeEncodedOptBitSliceIndex(ProjectionIndex projectionIndex, int[] basis) {
    this.basis = basis;
    this.cumulativeBasis = accumulateBasis(basis);
    this.bitslice = BitSlices.createRangeEncodedBitSlice(projectionIndex, basis);
  }

  @Override
  public RoaringBitmap whereEqual(int code, RoaringBitmap existence) {
    RoaringBitmap result = existence.clone();
    int[] expansion = expand(code, cumulativeBasis);
    for(int i = 0; i < cumulativeBasis.length; ++i) {
      int component = expansion[i];
      if(component == 0) {
        result.and(bitslice[i][0]);
      }
      else if(component == basis[i] - 1) {
        result.andNot(bitslice[i][basis[i] - 2]);
      }
      else {
        result.and(FastAggregation.xor(bitslice[i][component], bitslice[i][component - 1]));
      }
    }
    return result;
  }

  @Override
  public RoaringBitmap whereNotEqual(int code, RoaringBitmap existence) {
    RoaringBitmap inequality = existence.clone();
    inequality.andNot(whereEqual(code, existence));
    return inequality;
  }

  @Override
  public RoaringBitmap whereLessThan(int code, RoaringBitmap existence) {
    return whereLessThanOrEqual(code - 1, existence);
  }

  @Override
  public RoaringBitmap whereLessThanOrEqual(int code, RoaringBitmap existence) {
    final int[] expansion = expand(code, cumulativeBasis);
    final int firstIndex = cumulativeBasis.length - 1;
    int component = expansion[firstIndex];
    int threshold = basis[firstIndex] - 1;
    RoaringBitmap result = component < threshold ? bitslice[firstIndex][component].clone() : existence.clone();
    for(int i = firstIndex - 1; i >= 0; --i) {
      component = expansion[i];
      threshold = basis[i] - 1;
      if(component != threshold) {
        result.and(bitslice[i][component]);
      }
      if(component != 0) {
        result.or(bitslice[i][component - 1]);
      }
    }
    return result;
  }

  @Override
  public RoaringBitmap whereGreaterThan(int code, RoaringBitmap existence) {
    RoaringBitmap result = existence.clone();
    result.andNot(whereLessThanOrEqual(code, existence));
    return result;
  }

  @Override
  public RoaringBitmap whereGreaterThanOrEqual(int code, RoaringBitmap existence) {
    RoaringBitmap result = existence.clone();
    result.andNot(whereLessThan(code, existence));
    return result;
  }
}

Further Reading

The library has been implemented under an Apache License by several contributors, the most significant contributions coming from computer science researcher Daniel Lemire, who presented RoaringBitmap at Spark Summit 2017. The project site is here and the research paper behind the library is freely available.

How a Bitmap Index Works

Bitmap indices are used in various data technologies for efficient query processing. At a high level, a bitmap index can be thought of as a physical materialisation of a set of predicates over a data set, is naturally columnar and particularly good for multidimensional boolean query processing. PostgreSQL materialises a bitmap index on the fly from query predicates when there are multiple attributes constrained by a query (for instance in a compound where clause). The filter caches in ElasticSearch and Solr are implemented as bitmap indices on filter predicates over document IDs. Pilosa is a distributed bitmap index query engine built in Go, with a Java client library.

Bitmap indices are not a one-size-fits-all data structure, and in degenerate cases can take up more space than the data itself; using a bitmap index in favour of a B-tree variant on a primary key should be considered an abuse. Various flavours of bitmap implementation exist, but the emerging de facto standard is RoaringBitmap led by Daniel Lemire. RoaringBitmap is so ubiquitous that it is handled as a special case by KryoSerializer – no registration required if you want to use Spark to build indices.

Naive Bitmap Index

To introduce the concept, let’s build a naive uncompressed bitmap index. Let’s say you have a data set and some way of assigning an integer index, or record index from now on, to each record (the simplest example would be the line number in a CSV file), and have chosen some attributes to be indexed. For each distinct value of each indexed attribute of your data, compute the set of indices of records matching the predicate. For each attribute, create a map from the attribute values to the sets of corresponding record indices. The format used for the set doesn’t matter yet, but either an int[] or BitSet would be suitable depending on properties of the data and the predicate (cardinality of the data set, sort order of the data set, cardinality of the records matching the predicate, for instance). Using a BitSet to encode the nth record index as the nth bit of the BitSet can be 32x smaller than an int[] in some cases, and can be much larger when there are many distinct values of an attribute, which results in sparse bitmaps.
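
A minimal sketch of such an index for a single attribute, using java.util.BitSet (the method and parameter names here are mine, for illustration):

import java.util.BitSet;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// builds an equality encoded index: one bitmap per distinct value of the attribute
static Map<String, BitSet> buildIndex(List<String> attributeValues) {
  Map<String, BitSet> index = new HashMap<>();
  for (int recordIndex = 0; recordIndex < attributeValues.size(); ++recordIndex) {
    index.computeIfAbsent(attributeValues.get(recordIndex), value -> new BitSet())
         .set(recordIndex);
  }
  return index;
}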

The tables below demonstrate the data structure. The first table represents a simple data set. The second and third tables represent bitmap indices on the data set, indexing the Country and Sector attributes.

Record Index  Country  Sector
0             GB       Financials
1             DE       Manufacturing
2             FR       Agriculturals
3             FR       Financials
4             GB       Energies

The bitmap index consists of the two tables below:

Country
Value  Record Indices  Bitmap
GB     0,4             0b10001
DE     1               0b10
FR     2,3             0b1100

Sector
Value          Record Indices  Bitmap
Financials     0,3             0b1001
Manufacturing  1               0b10
Agriculturals  2               0b100
Energies       4               0b10000

It’s worth noting three patterns in the tables above.

  1. The number of bitmaps for an attribute is the attribute’s distinct count.
  2. There are typically runs of zeroes or ones, and the lengths of these runs depend on the sort order of the record index attribute.
  3. A bitmap index on the record index attribute itself would be as large as the data itself, and a much less concise representation. Bitmap indices do not compete with B-tree indices for primary key attributes.

Query Evaluation

This simple scheme effectively materialises the result of predicates on the data and is particularly appealing because these predicates can be composed by performing efficient logical operations on the bitmaps. Query evaluation is most efficient when both the number of bitmaps and size of each bitmap are as small as possible. An efficient query plan will touch as few bitmaps as possible, regardless of bitmap size. Here are some examples of queries and their evaluation plans.

Single Attribute Union


select *
from records
where country = "GB" or country = "FR"

  1. Access the country index, read the bitmaps for values “FR” and “GB”
  2. Apply a bitwise logical OR to get a new bitmap
  3. Access the data stored by record id with the retrieved indices

Multi Attribute Intersection


select *
from records
where country = "GB" and sector = "Energies"

  1. Access the country index, and read the bitmap for value “GB”
  2. Access the sector index, and read the bitmap for value “Energies”.
  3. Apply a bitwise logical AND to get a new bitmap
  4. Access the data stored by record id with the retrieved indices

Single Attribute Except Clause


select *
from records
where country <> "GB"

  1. Access the country index, and read the bitmap for value “GB”
  2. Negate the bitmap
  3. Access the data stored by record id with the retrieved indices
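
The three plans above map directly onto bitwise operations. A minimal sketch using java.util.BitSet over the naive index built earlier (countryIndex, sectorIndex and recordCount are assumed inputs, not library names):

import java.util.BitSet;
import java.util.Map;

// where country = "GB" or country = "FR"
static BitSet gbOrFr(Map<String, BitSet> countryIndex) {
  BitSet result = (BitSet) countryIndex.get("GB").clone();
  result.or(countryIndex.get("FR"));                // single attribute union
  return result;
}

// where country = "GB" and sector = "Energies"
static BitSet gbEnergies(Map<String, BitSet> countryIndex, Map<String, BitSet> sectorIndex) {
  BitSet result = (BitSet) countryIndex.get("GB").clone();
  result.and(sectorIndex.get("Energies"));          // multi attribute intersection
  return result;
}

// where country <> "GB"
static BitSet notGb(Map<String, BitSet> countryIndex, int recordCount) {
  BitSet result = (BitSet) countryIndex.get("GB").clone();
  result.flip(0, recordCount);                      // negation, bounded by the record count
  return result;
}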

The index lends itself to aggregate queries (and aggregates on predicates).

Count


select country, count(*)
from records
group by country

  1. Access the country index
  2. Iterate over the keys
    • Count the bits in the bitmap, store the count against the key in a map

Count with Filter


select country, count(*)
from records
where sector <> "Financials"
group by country

  1. Access the sector index and read the bitmap for “Financials”
  2. Negate the bitmap, call the negated bitmap without_financials
  3. Access the country index
  4. Iterate over the keys
    • Intersect each bitmap with without_financials
    • Count the bits in the resultant bitmap, store the count against the key in a map
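
Again this maps directly onto bitmap operations; a sketch of the filtered count, using the same assumed inputs as before:

import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

// select country, count(*) from records where sector <> "Financials" group by country
static Map<String, Integer> countByCountryExcludingFinancials(
    Map<String, BitSet> countryIndex, Map<String, BitSet> sectorIndex, int recordCount) {
  BitSet withoutFinancials = (BitSet) sectorIndex.get("Financials").clone();
  withoutFinancials.flip(0, recordCount);                    // negate the Financials bitmap
  Map<String, Integer> counts = new HashMap<>();
  for (Map.Entry<String, BitSet> entry : countryIndex.entrySet()) {
    BitSet matches = (BitSet) entry.getValue().clone();
    matches.and(withoutFinancials);                          // intersect with the filter
    counts.put(entry.getKey(), matches.cardinality());       // count the bits
  }
  return counts;
}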

The two main factors affecting the performance of query processing are the number of bitmaps that need to be accessed, and the size of each bitmap (which concerns both memory/disk usage and CPU utilisation) – both should be minimised. Choosing the correct encoding for expected queries (one should expect range queries for dates, but equality and set membership queries for enumerations) can reduce the number of bitmap accesses required to evaluate a query; whereas compression can reduce the bitmap sizes.

Encoding

Only predicates for equality are efficient with the scheme so far. Suppose there is a well defined sort order for an attribute a. In order to evaluate


select *
from records
where a > x and a < y

every bitmap in the range (x, y) would have to be accessed and united. This could easily become a performance bottleneck. The encoding could be adapted for evaluating range predicates. Instead of setting the nth bit if the nth record has a = y (equality encoding), it could be set if the nth record has a \le y (range encoding). In this encoding only one bitmap needs to be accessed to evaluate a predicate like a \le y, rather than the |[a_{min}, y]| bitmaps required using the equality encoding. To evaluate a \in (x, y], only the bitmaps for x and y are needed: the result is the bitmap for y with the bits of the bitmap for x removed. Not much is lost in supporting equality predicates in a range encoding; only the bitmaps for the value and its predecessor are required.
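
A sketch of range encoded evaluation, where rangeEncoded[v] has the nth bit set if the nth record has a \le v (the array layout and names are assumed for illustration):

import java.util.BitSet;

// a <= y: a single bitmap access
static BitSet lessThanOrEqual(BitSet[] rangeEncoded, int y) {
  return (BitSet) rangeEncoded[y].clone();
}

// x < a <= y: the bitmap for y with the bits of the bitmap for x removed
static BitSet between(BitSet[] rangeEncoded, int x, int y) {
  BitSet result = (BitSet) rangeEncoded[y].clone();
  result.andNot(rangeEncoded[x]);
  return result;
}

// a = y: the bitmap for y with the bits of its predecessor removed
static BitSet equalTo(BitSet[] rangeEncoded, int y) {
  BitSet result = (BitSet) rangeEncoded[y].clone();
  if (y > 0) {
    result.andNot(rangeEncoded[y - 1]);
  }
  return result;
}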

Compression

The scheme presented so far works as a toy model, but the bitmaps are just too large to be practical. A bitmap index on a single attribute with m distinct values over a data set consisting of n records would consume mn bits using BitSets, or 32n bits using int[]s (each record index appears in exactly one of the m arrays, at 32 bits apiece). Therefore, some kind of compression is required to make the approach feasible.

Often in real world data sets, there are attributes with skewed histograms, a phenomenon known as Zipf’s Law. In a typical financial data set exhibiting this property, most trades will be in very few instruments (EURGBP for instance), and there will be very few trades in the rest of the traded instruments. The bitmaps for the values at both the head and tail of these histograms become less random and therefore compressible. At the head, the bits are mostly set; at the tail mostly unset. Compression seeks to exploit this.

One well understood compression strategy is run length encoding. If there are m bits set in a row, followed by n unset bits, followed by another p set bits, the pattern 1...10...01...1 of m + n + p bits can be replaced by the three run lengths m, n, p, which is typically a lot smaller (though worse if the runs are very short). Since there are only two possible bit values, only the runs of set bits need to be represented – it is only necessary to store the start index and length of each such run, so the bitmap becomes the set of tuples \{(0, m), (m+n, p)\}. Notably, the sort order of the record index with respect to the attribute affects the compression ratio for run length encoding, because it can make runs longer or shorter.
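
A sketch of extracting those (start, length) tuples from an uncompressed java.util.BitSet:

import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

// represents a bitmap as the (start, length) tuples of its runs of set bits
static List<int[]> runsOfSetBits(BitSet bitmap) {
  List<int[]> runs = new ArrayList<>();
  int start = bitmap.nextSetBit(0);
  while (start >= 0) {
    int end = bitmap.nextClearBit(start);          // first unset bit after the run
    runs.add(new int[] {start, end - start});      // (start index, run length)
    start = bitmap.nextSetBit(end);
  }
  return runs;
}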

In reality, run length encoding on bits is not practical since modern processors operate on words not bits. Instead of counting runs of bits, several algorithms count runs of bytes (BBC – Byte-aligned Bitmap Compression) or words (WAH – Word Aligned Hybrid, EWAH – Enhanced Word Aligned Hybrid). These algorithms are faster at the expense of reduced compression. In these schemes compression is improved when there are long runs of clean words (where all the bits in the word are the same), and the compression ratio is degraded when there are many dirty words, which cannot be compressed at all. The number of clean words and hence the compression ratio for a bitmap index depends on the order of the record index attribute. However, an optimal sort order with respect to an index on one attribute will generally be sub-optimal for an index on another.

In order to maintain the attractive boolean query processing capabilities, the OR, AND, XOR, NAND, NOR and NOT operations each need to be redefined to operate on the compressed form of the bitmap, and in the case of algorithms like EWAH these adapted operations can be more efficient, more CPU and cache friendly, than the same operations on naive uncompressed bitmaps.

Previously I was ambivalent about whether to use BitSet or int[] to encode the sets of record indices (Set<Integer> was not proposed because of the inherent cost of wrapped integers). This is because neither type is really appropriate in all cases. An uncompressed BitSet has high memory usage for a large data set, even if most of the bits are unset, which is often compressible at the word level. With very sparse data, when most of the bits would be zero, it takes less space to store the record indices in an int[] instead. Choosing dynamically between integers, uncompressed bits, and compressed words is roughly how the RoaringBitmap library optimises performance. More about that here.

Reducing the Number of Bitmaps

Query evaluation performance degrades with the number of bitmaps that need to be accessed. Choosing the right encoding for query patterns and reducing the size of each bitmap are both key for performance and practicality, but it can also save storage space to have fewer bitmaps per attribute. So far each value has been encoded as a single bitmap, and each bitmap has been associated with only one value. The total number of bitmaps can be reduced by factorising the values, with a bitmap per factor, so each bitmap is associated with several values and each value is associated with several bitmaps. To this end, mapping values into integers by some means allows integer arithmetic to be used for index construction. A simple mechanism to map a set of objects to integers is a dictionary; a more complicated but better mechanism might be a perfect hash function like CHM (an order preserving transformation!) or BZW (which trades order preservation for better compression).
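
A dictionary is as simple as it sounds; a sketch assigning codes in order of first appearance (not order preserving unless the values arrive sorted, unlike CHM):

import java.util.HashMap;
import java.util.Map;

// assigns each distinct value the next unused integer code
static final Map<String, Integer> dictionary = new HashMap<>();

static int encode(String value) {
  return dictionary.computeIfAbsent(value, v -> dictionary.size());
}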

Bit Slicing

Supposing a mapping between a set of values and the natural numbers has been chosen and implemented, we can define a basis to factorise each value. The number 571, for example, can be written as either 5*10^2 + 7*10^1 + 1*10^0 in base-10 or 1*8^3 + 0*8^2 + 7*8^1 + 3*8^0 in base-8. Base-10 needs fewer digit positions but allows more values per position, whereas base-8 needs more positions with fewer values in each. Bit slice indexing is analogous to this arithmetic expansion of integers: each digit position maps to a slice (a set of bitmaps), and each possible value of a digit maps to a bitmap within the slice.

Mapping a set of objects S into base n, with m = \lceil \log_n(|S|) \rceil components, we can use at most mn bitmaps to construct the index. The bases of the components do not need to be identical (to work with date buckets we could choose four quarters, three months, and 31 days, for example), but if they are, the basis is said to be uniform. An example of a base-3 uniform index on currencies is below:

Records
Record Index  Currency
0             USD
1             GBP
2             GBP
3             EUR
4             CHF

Currency Encoding
Currency  Code  Base 3 Expansion
USD       0     0*3 + 0
GBP       1     0*3 + 1
EUR       2     0*3 + 2
CHF       3     1*3 + 0

Single Component Currency Index
Currency  Bitmap
USD       0b1
GBP       0b110
EUR       0b1000
CHF       0b10000

Bit Sliced Currency Index
Power of 3  Remainder  Bitmap
1           0          0b1111
1           1          0b10000
1           2          0b0
0           0          0b10001
0           1          0b110
0           2          0b1000

Here we have actually used six bitmaps instead of four, but the factorisation comes into its own when more currencies are added. With a 2-component base-3 index, we can use six bitmaps to encode up to nine values.

To evaluate a query, map the value into its integer representation, factorise the number with respect to the bases of the index, and then intersect at most m bitmaps. This is slower than a single bitmap access, but it has some useful properties when data is hierarchically bucketed, as well as trading query performance for storage space. To evaluate queries at the top level of bucketing, only one bitmap access is required; at the second level, two bitmap accesses are required, and so on. So there is a trade off: degraded performance for granular values in exchange for improved performance on roll-ups.
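
A sketch of equality evaluation over a uniform bit sliced index like the currency example above, where slices[power][remainder] is the bitmap for that digit of the expansion (java.util.BitSet again, names assumed):

import java.util.BitSet;

// intersects at most one bitmap per component of the expansion
static BitSet whereEqual(BitSet[][] slices, int code, int base) {
  BitSet result = null;
  for (int power = 0; power < slices.length; ++power) {
    BitSet slice = slices[power][code % base];   // bitmap for this digit of the expansion
    code /= base;
    if (result == null) {
      result = (BitSet) slice.clone();
    } else {
      result.and(slice);                         // intersect at most m bitmaps in total
    }
  }
  return result;
}

// e.g. CHF has code 3 = 1*3 + 0, so the result is slices[0][0] AND slices[1][1]
// = 0b10001 AND 0b10000 = 0b10000, i.e. record 4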

Links

Here are some links on the topic that I found interesting:

Roaring Bitmap
Bitmap Index Design and Evaluation
Sorting improves word-aligned bitmap indexes
Consistently faster and smaller compressed bitmaps with Roaring