Beware Collection Factory Methods

I saw an interesting tweet referencing a GitHub issue in which including an (in my view) unnecessary implementation of the List interface changed inlining decisions, causing a 20x degradation in throughput. Guava’s ImmutableList is my favourite class to seek and destroy because of the way it is often used – it tends to be associated with unnecessary copying where encapsulation would be a better solution. I had assumed that the performance gains won from finding and deleting all the instances of ImmutableList were thanks to relieving the garbage collector from medieval torture. In fact, the degradation observed in the benchmark was caused by using ImmutableList, along with all its subclasses, alongside ArrayList, making call sites on List bimorphic at best and prompting the JIT compiler to generate slower code. I may have inadvertently profited from better inlining in the past simply by removing as many ImmutableLists as possible!

This post doesn’t go into any details about the various mechanisms of method dispatch, and if you want to understand the impact of polymorphism on inlining, bookmark Aleksey Shipilev’s authoritative post and read it when you have some time to really concentrate.

Without resorting to using LinkedList, is it possible to contrive cases in Java 9 where performance is severely degraded by use of Collections.unmodifiableList and the List.of factory methods? Along with ArrayList, these produce random access data structures, so any degradation should highlight the potential performance gains inlining can give.

The methodology is very simple: I randomly vary the List implementation and plug it into the same algorithm. It is cruder than what you would see in Aleksey Shipilev’s post because I target only the worst case, choosing between implementations with equal probability. Aleksey demonstrates that inlining decisions are statistical and opportunistic (the JIT can guess and later deoptimise), and if 90% of your call sites dispatch to the same implementation, it doesn’t matter as much as when the choice is made uniformly. It will vary from application to application, but it could easily be as bad as the case I present if List is used polymorphically.

I created five benchmarks which produce the same number, the same way. Three of these benchmarks only ever call into a single implementation of List, so their call sites are monomorphic; to keep them comparable, their result is XOR’d with a call to ThreadLocalRandom.current().nextInt(3), because the remaining benchmarks must pay for a random number to choose a list. One benchmark chooses randomly between the List.of list and the ArrayList on each invocation (bimorphic), and one chooses randomly among all three implementations (megamorphic). The difference is stark: you can really screw up performance by making the methods on List megamorphic.

Benchmark                Mode   Threads  Samples      Score  Score Error (99.9%)  Unit
sumLength_ArrayList      thrpt        1       10  55.785270             3.218552  ops/us
sumLength_Factory        thrpt        1       10  58.565918             2.852415  ops/us
sumLength_Random2        thrpt        1       10  35.842255             0.684658  ops/us
sumLength_Random3        thrpt        1       10  11.177564             0.080164  ops/us
sumLength_Unmodifiable   thrpt        1       10  51.776108             3.751297  ops/us


import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

@State(Scope.Thread)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class MegamorphicList {

    private List<String>[] strings;

    @SuppressWarnings({"unchecked", "rawtypes"})
    @Setup(Level.Trial)
    public void init() {
        // one six-element list per implementation: ArrayList, List.of
        // (ImmutableCollections.ListN) and Collections.unmodifiableList
        strings = new List[]{getArrayList(6), getFactoryList6(), getUnModifiableList(6)};
    }

    @Benchmark
    public int sumLength_ArrayList() {
        // monomorphic call site: only ever sees ArrayList
        List<String> list = strings[0];
        int blackhole = 0;
        for (int i = 0; i < list.size(); ++i) {
            blackhole += list.get(i).length();
        }
        // XOR with a random int so this benchmark pays the same RNG cost as the
        // benchmarks that choose a list at random
        return blackhole ^ ThreadLocalRandom.current().nextInt(3);
    }

    @Benchmark
    public int sumLength_Factory() {
        List<String> list = strings[1];
        int blackhole = 0;
        for (int i = 0; i < list.size(); ++i) {
            blackhole += list.get(i).length();
        }
        return blackhole ^ ThreadLocalRandom.current().nextInt(3);
    }

    @Benchmark
    public int sumLength_Unmodifiable() {
        List<String> list = strings[2];
        int blackhole = 0;
        for (int i = 0; i < list.size(); ++i) {
            blackhole += list.get(i).length();
        }
        return blackhole ^ ThreadLocalRandom.current().nextInt(3);
    }

    @Benchmark
    public int sumLength_Random2() {
        // bimorphic call site: randomly alternates between the ArrayList and the List.of list
        List<String> list = strings[ThreadLocalRandom.current().nextInt(2)];
        int blackhole = 0;
        for (int i = 0; i < list.size(); ++i) {
            blackhole += list.get(i).length();
        }
        return blackhole;
    }

    @Benchmark
    public int sumLength_Random3() {
        // megamorphic call site: randomly chooses between all three implementations
        List<String> list = strings[ThreadLocalRandom.current().nextInt(3)];
        int blackhole = 0;
        for (int i = 0; i < list.size(); ++i) {
            blackhole += list.get(i).length();
        }
        return blackhole;
    }

    private List<String> getUnModifiableList(int size) {
        return Collections.unmodifiableList(getArrayList(size));
    }

    private List<String> getFactoryList6() {
        return List.of(randomString(),
                       randomString(),
                       randomString(),
                       randomString(),
                       randomString(),
                       randomString()
                );
    }

    private List<String> getArrayList(int size) {
        List<String> list = new ArrayList<>();
        for (int i = 0; i < size; ++i) {
            list.add(randomString());
        }
        return list;
    }

    private String randomString() {
        // DataUtil.createByteArray is a helper from the benchmark harness (not shown)
        // which produces a byte[] of the requested length
        return new String(DataUtil.createByteArray(ThreadLocalRandom.current().nextInt(10, 20)));
    }

}

Since writing this post, I have been challenged on whether or not this result is really due to a failure to inline. This is easy to verify by setting the following JVM arguments to print compilation and inlining decisions:

-XX:+PrintCompilation -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining
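
If you run the benchmarks through the JMH API, one way to pass these flags to the forked benchmark JVM is jvmArgsAppend. This is only a sketch of a runner; the class itself and its setup are my own, not part of the original benchmark project.

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class PrintInliningRunner {
    public static void main(String[] args) throws RunnerException {
        // append the diagnostic flags to the forked JVM so the inlining decisions
        // for each benchmark show up in the run's output
        Options opt = new OptionsBuilder()
                .include(MegamorphicList.class.getSimpleName())
                .jvmArgsAppend("-XX:+UnlockDiagnosticVMOptions",
                               "-XX:+PrintCompilation",
                               "-XX:+PrintInlining")
                .build();
        new Runner(opt).run();
    }
}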

You will see the ArrayList and ListN get inlined quickly in isolation:

\-> TypeProfile (19810/19810 counts) = java/util/ArrayList
@ 27   java.util.ArrayList::get (15 bytes)   inline (hot)
...
\-> TypeProfile (363174/363174 counts) = java/util/ImmutableCollections$ListN
@ 24   java.util.ImmutableCollections$ListN::get (17 bytes)   inline (hot)

However, the call remains virtual (and not inlined) when three or more implementations are present:

@ 30   java.util.List::get (0 bytes)   virtual call

I didn’t even bother to benchmark factory lists of different arity, because three is the magic number. Syntactic sugar is nice, but use these factory methods with caution.

Bit Twiddling Blog Posts

This is a list of blog posts to read if you like bit twiddling and some basic maths. I would like to improve this list over time – if you have a post similar to these please provide a link in the comments.

  1. Table of basic reversible integer operations – Marc B. Reynolds looks at several bijective functions on the integers modulo 2^32.
  2. Modular multiplicative inverses – multiplication of integers modulo 2^32 is invertible for odd integers. Daniel Lemire derives this property and constructs an efficient algorithm to compute the modular inverse of an odd integer.
  3. Carryless multiplicative inverse – A guy called Harold demonstrates an algorithm for computing the carryless inverse of integers modulo 2^32.
  4. Visualizing Addition, Subtraction and Complement – A spatial interpretation of the addition, subtraction and complement operations on bitwise integers. Another Harold special.
  5. XOR rotates and their inverses – Marc B. Reynolds looks at XOR rotates, their inverses and provides a few visualisations.
  6. Is XOR distributive over addition – a simple proof by contradiction that XOR does not distribute over addition.
  7. Parallel prefix/suffix operations – more insight into compression operations from Harold.
  8. Morton Codes and bit interleaving – Jeroen Baert discusses implementations of Z-curves through bit interleaving.
  9. Tesseral Arithmetic – cracking open Morton encoded coordinates to perform arithmetic adds overhead and reduces throughput. Harold discusses and implements arithmetic without leaving the Morton encoding domain.
  10. Hyperplanes and Wildcards – David Eppstein interprets bit strings as the vertices of hypercubes, and develops a wildcard concept to represent unspecified dimensions. This is very similar in concept to a rule engine I wrote recently.

Is XOR Distributive over Addition?

Google Search Console has become a thing of mild interest to me since I moved my website and Google forgot about my content. Impressions – search terms that match your site but don’t lead to a click – are full of fascinating false positives. Looking through my impressions, I saw search terms such as:

is xor associative over addition?
is xor distributive over addition?

The term “is xor distributive over addition” jumped out at me.

Since multiplication obviously does distribute over addition (ignoring overflow), it’s perhaps a reasonable question to ask. To disprove the proposition it is enough to find a single counterexample (not hard, and much quicker than a Google search), but it’s more interesting to find a constructive class of counterexamples. I came up with a few strategies for disproving it, other than picking random numbers and checking, which seemed worth writing down.
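
For what it’s worth, a throwaway brute force search (my own sketch, not from the original post) finds a counterexample to y ^ (a + b) = (y ^ a) + (y ^ b) immediately.

public class XorNotDistributive {
    public static void main(String[] args) {
        // search tiny values; the first failure appears at y=1, a=0, b=0
        for (int y = 0; y < 4; ++y) {
            for (int a = 0; a < 4; ++a) {
                for (int b = 0; b < 4; ++b) {
                    if ((y ^ (a + b)) != ((y ^ a) + (y ^ b))) {
                        System.out.printf("counterexample: y=%d, a=%d, b=%d%n", y, a, b);
                        return;
                    }
                }
            }
        }
    }
}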

Tangentially, on the topic of Google relevance, this search term had nothing to do with this blog until this post, but when I search for topics I think my posts are related to, I can’t find them. I expect not to be seeing “is xor distributive over addition” in the search console in future.

Complement Argument

Would XOR distribute over the addition of a number and its logical complement? If it did, we would have y ^ (x + ~x) = (y ^ x) + (y ^ ~x) for any y. Since x + ~x = -1, the left hand side is y ^ -1 = ~y, so under the assumption of distributivity, (y ^ x) + (y ^ ~x) must have none of the bits of y set. But y ^ ~x = ~(y ^ x), and the sum of any number and its complement has all of its bits set, so (y ^ x) + (y ^ ~x) = -1. This is a contradiction for any nonzero y.
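
A small illustration of that argument (my own sketch; the class name is made up): for random x and any nonzero y, the left hand side is always ~y while the right hand side is always -1.

import java.util.concurrent.ThreadLocalRandom;

public class ComplementArgument {
    public static void main(String[] args) {
        ThreadLocalRandom random = ThreadLocalRandom.current();
        for (int i = 0; i < 5; ++i) {
            int x = random.nextInt();
            int y = random.nextInt() | 1;     // ensure y != 0 so the two sides must differ
            int lhs = y ^ (x + ~x);           // always ~y, because x + ~x == -1
            int rhs = (y ^ x) + (y ^ ~x);     // always -1, because y ^ ~x == ~(y ^ x)
            System.out.printf("y=%08x x=%08x lhs=%08x rhs=%08x%n", y, x, lhs, rhs);
        }
    }
}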

Left Shift Argument

Addition carries into the next bit position whenever the bits in the same position are both set, so x + x is equivalent to x << 1 (ignoring overflow). Suppose that for any integer y we had y ^ (x + x) = (y ^ x) + (y ^ x); since (y ^ x) + (y ^ x) is just (y ^ x) << 1, we can find constraints on this identity by considering the lowest set bit of x + x in isolation. Say this bit is at position p, so the lowest set bit of x is at position p - 1. On the left hand side, bit p is set if and only if bit p of y is unset. On the right hand side, y ^ x has been shifted left by one place, so bit p is set if and only if bit p - 1 of y is unset. We can therefore construct counterexamples whenever bits p and p - 1 of y differ, and the proposition is not generally true.
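
A concrete counterexample constructed this way (again my own sketch, not from the post): take x = 1, so p = 1, and y = 2, whose bits 1 and 0 differ.

public class LeftShiftArgument {
    public static void main(String[] args) {
        int x = 1;      // lowest set bit at position 0, so p = 1
        int y = 0b10;   // bit 1 of y is set, bit 0 is not: bits p and p - 1 differ
        int lhs = y ^ (x + x);       // 2 ^ 2 = 0
        int rhs = (y ^ x) + (y ^ x); // 3 + 3 = 6
        System.out.println(lhs + " != " + rhs);
    }
}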

Incidental Similarity

I recently saw an interesting class, BitVector, in Apache Arrow, which represents a column of bits, providing minimal or zero copy distribution. The implementation is similar to a bitset but backed by a byte[] rather than a long[]. Given the coincidental similarity in implementation, it’s tempting to look at this, extend its interface and try to use it as a general purpose, distributed bitset. Could this work? Why not just implement some extra methods? Fork it on Github!

This post details the caveats of trying to adapt an abstraction beyond its intended purpose, examined through the lens of performance: a scenario where generic bitset capabilities are added to BitVector without due consideration. The attempt runs into the observable effect of primitive widening on throughput, given the constraints JLS 15.22 imposes on bitwise operators. In the end, the only remedy is to use a long[], sacrificing the original zero copy design goal. I hope this is a fairly self-contained example of how uncontrolled adaptation can be hostile to the original design goals: having the source code isn’t reason enough to modify it.

Checking bits

How fast is it to check whether the bit at index i is set or not? BitVector implements this functionality, and was designed for it. It can be measured with JMH by generating a random long[] and a byte[] with eight times as many elements but the same bits, then measuring the throughput of checking the value of the bit at random indices. It turns out that if all you want to do is access bits, byte[] isn’t such a bad choice, and if those bytes are coming directly from the network it could even be a great choice. I ran the benchmark below and saw that the two operations are similar (within measurement error).


import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Thread)
public class BitSet {

    @Param({"1024", "2048", "4096", "8192"})
    int size;

    private long[] leftLongs;
    private long[] rightLongs;
    private long[] differenceLongs;
    private byte[] leftBytes;
    private byte[] rightBytes;
    private byte[] differenceBytes;

    @Setup(Level.Trial)
    public void init() {
        this.leftLongs = createLongArray(size);
        this.rightLongs = createLongArray(size);
        this.differenceLongs = new long[size];
        this.leftBytes = makeBytesFromLongs(leftLongs);
        this.rightBytes = makeBytesFromLongs(rightLongs);
        this.differenceBytes = new byte[size * 8];
    }

    @Benchmark
    public boolean CheckBit_LongArray() {
        int index = index();
        return (leftLongs[index >>> 6] & (1L << index)) != 0;
    }

    @Benchmark
    public boolean CheckBit_ByteArray() {
        int index = index();
        return ((leftBytes[index >>> 3] & 0xFF) & (1 << (index & 7))) != 0;
    }

    private int index() {
        return ThreadLocalRandom.current().nextInt(size * 64);
    }

    // Transposes the long[] into a byte[] with the same bits; bytes are laid out
    // little-endian within each word so that bit i of the byte[] is bit i of the long[].
    private static byte[] makeBytesFromLongs(long[] array) {
        byte[] bytes = new byte[8 * array.length];
        for (int i = 0; i < array.length; ++i) {
            long word = array[i];
            bytes[8 * i]     = (byte) word;
            bytes[8 * i + 1] = (byte) (word >>> 8);
            bytes[8 * i + 2] = (byte) (word >>> 16);
            bytes[8 * i + 3] = (byte) (word >>> 24);
            bytes[8 * i + 4] = (byte) (word >>> 32);
            bytes[8 * i + 5] = (byte) (word >>> 40);
            bytes[8 * i + 6] = (byte) (word >>> 48);
            bytes[8 * i + 7] = (byte) (word >>> 56);
        }
        return bytes;
    }

    // helper assumed: fills a long[] of the given length with random 64-bit words
    private static long[] createLongArray(int size) {
        long[] array = new long[size];
        for (int i = 0; i < size; ++i) {
            array[i] = ThreadLocalRandom.current().nextLong();
        }
        return array;
    }
}

Benchmark            Mode   Threads  Samples       Score  Score Error (99.9%)  Unit    Param: size
CheckBit_ByteArray   thrpt        1       10  174.421170             1.583275  ops/us         1024
CheckBit_ByteArray   thrpt        1       10  173.938408             1.445796  ops/us         2048
CheckBit_ByteArray   thrpt        1       10  172.522190             0.815596  ops/us         4096
CheckBit_ByteArray   thrpt        1       10  167.550530             1.677091  ops/us         8192
CheckBit_LongArray   thrpt        1       10  171.639695             0.934494  ops/us         1024
CheckBit_LongArray   thrpt        1       10  169.703960             2.427244  ops/us         2048
CheckBit_LongArray   thrpt        1       10  169.333360             1.649654  ops/us         4096
CheckBit_LongArray   thrpt        1       10  166.518375             0.815433  ops/us         8192

For this functionality alone, there is little reason to prefer one representation over the other, and it must be very appealing to use the bytes exactly as they are delivered from the network, avoiding copying costs. Given that, for a database column, this is the only operation needed, and that Apache Arrow has a stated aim to copy data as little as possible, this seems like quite a good decision.

Logical Conjugations

But what happens if you try to add a logical operation to BitVector, such as an XOR? Bytes are signed, and the JLS requires them to be promoted to int, preserving the sign bit, before any bitwise operation. Left unhandled, this sign extension would corrupt the bitset, so extra masking is required to keep the eighth bit of each byte in its right place. With the widening and its associated workarounds, the byte[] suddenly becomes a much poorer choice than a long[], and it shows in the benchmarks below.
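
As a tiny aside before the benchmark, this standalone snippet (mine, not from the post) shows what JLS 15.22 requires when XORing two bytes.

public class PromotionExample {
    public static void main(String[] args) {
        byte a = (byte) 0b1010_1010;
        byte b = (byte) 0b0000_1111;
        // both operands of ^ undergo binary numeric promotion to int (JLS 15.22),
        // sign-extending each byte; the int result must be cast back down to byte
        byte c = (byte) (a ^ b);
        System.out.println(Integer.toBinaryString(c & 0xFF)); // prints 10100101
    }
}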


    @Benchmark
    public void Difference_ByteArray(Blackhole bh) {
        for (int i = 0; i < leftBytes.length && i < rightBytes.length; ++i) {
            differenceBytes[i] = (byte)((leftBytes[i] & 0xFF) ^ (rightBytes[i] & 0xFF));
        }
        bh.consume(differenceBytes);
    }

    @Benchmark
    public void Difference_LongArray(Blackhole bh) {
        for (int i = 0; i < leftLongs.length && i < rightLongs.length; ++i) {
            differenceLongs[i] = leftLongs[i] ^ rightLongs[i];
        }
        bh.consume(differenceLongs);
    }

Benchmark              Mode   Threads  Samples     Score  Score Error (99.9%)  Unit    Param: size
Difference_ByteArray   thrpt        1       10  0.805872             0.038644  ops/us         1024
Difference_ByteArray   thrpt        1       10  0.391705             0.017453  ops/us         2048
Difference_ByteArray   thrpt        1       10  0.190102             0.008580  ops/us         4096
Difference_ByteArray   thrpt        1       10  0.169104             0.015086  ops/us         8192
Difference_LongArray   thrpt        1       10  2.450659             0.094590  ops/us         1024
Difference_LongArray   thrpt        1       10  1.047330             0.016898  ops/us         2048
Difference_LongArray   thrpt        1       10  0.546286             0.014211  ops/us         4096
Difference_LongArray   thrpt        1       10  0.277378             0.015663  ops/us         8192

This is a fairly crazy slowdown. Why? You need to look at the assembly generated in each case. For long[] it’s demonstrable that the logical operations do vectorise. The JLS, specifically section 15.22, doesn’t really give the byte[] implementation a chance: it states that for bitwise operations, sub-dword primitive types must be promoted (widened) to int before the operation. This means that if one were to try to implement this operation with, say, AVX2, using 256-bit ymmwords each holding 32 bytes, then each ymmword would have to be inflated by a factor of four: it gets complicated quickly, given this constraint. Despite that complexity, I was surprised to see that C2 does use 128-bit xmmwords, but it’s not as fast as using the full 256-bit registers available. This can be seen by printing out the emitted assembly as usual.

movsxd  r10,ebx
vmovq   xmm2,mmword ptr [rsi+r10+10h]
vpxor   xmm2,xmm2,xmmword ptr [r8+r10+10h]
vmovq   mmword ptr [rax+r10+10h],xmm2
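
To make the conclusion concrete, here is a minimal sketch of the long[]-backed alternative the post argues for. It is my own illustration; the class and method names come from neither Arrow nor the benchmark project. It supports both the bit check and the XOR without any widening workarounds.

public class LongBitSet {

    private final long[] words;

    public LongBitSet(int numBits) {
        // capacity rounded up to a multiple of 64 bits
        this.words = new long[(numBits + 63) >>> 6];
    }

    // true if bit i is set; the shift count is implicitly masked to i & 63
    public boolean get(int i) {
        return (words[i >>> 6] & (1L << i)) != 0;
    }

    // sets bit i
    public void set(int i) {
        words[i >>> 6] |= (1L << i);
    }

    // XORs another bitset of the same size into this one, word by word
    public void xor(LongBitSet other) {
        for (int i = 0; i < words.length && i < other.words.length; ++i) {
            words[i] ^= other.words[i];
        }
    }
}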