Concise Binary Object Representation

Concise Binary Object Representation (CBOR), defined by RFC 7049, is a binary, typed, self-describing serialisation format. In contrast with JSON, it is binary and distinguishes properly between different sizes of primitive type. In contrast with Avro and Protobuf, it is self-describing and can be used without a schema. As with any binary format, where data is overwhelmingly numeric, both parsing time and storage size are far superior to JSON. For textual data, payloads are also typically smaller with CBOR.

The Type Byte

The first byte of every value denotes a type. The most significant three bits denote the major type (for instance, byte array or unsigned integer); the least significant five bits denote a minor type (float32, int64 and so on). This is useful for type inference and validation. For instance, if you wanted to save a BLOB into HBase and map that BLOB to a Spark SQL Row, you could map the first byte of each field value to a Spark DataType. If you adopt a schema-on-read approach, you can validate the supplied schema against the type encoding in the CBOR-encoded blobs. The major types, and some interesting minor types, are enumerated below; see the definitions for more information.

  • 0:  unsigned integers
  • 1:  negative integers
  • 2:  byte strings (indefinite-length encodings terminated by 7_31)
  • 3:  UTF-8 text (indefinite-length encodings terminated by 7_31)
  • 4:  arrays (indefinite-length encodings terminated by 7_31)
  • 5:  maps (indefinite-length encodings terminated by 7_31)
  • 6:  tags (0: timestamp strings, 1: unix epoch longs, 2: big integers…)
  • 7:  floating-point numbers and simple ubiquitous values (20: false, 21: true, 22: null, 23: undefined, 26: float, 27: double, 31: stop byte for indefinite-length items (maps, arrays etc.))
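
Decoding the type byte is just a shift and a mask. A minimal sketch (the class and method names here are mine, not from any library):

```java
public class CborTypeByte {

    // Top three bits of the initial byte: the major type.
    public static int majorType(byte b) {
        return (b & 0xFF) >>> 5;
    }

    // Bottom five bits: the minor type (additional information).
    public static int minorType(byte b) {
        return b & 0x1F;
    }

    public static void main(String[] args) {
        byte initial = (byte) 0x82; // an array (major type 4) of length 2
        System.out.println(majorType(initial)); // 4
        System.out.println(minorType(initial)); // 2
    }
}
```

Mapping these integers to Spark DataTypes would then be a simple switch over the major (and occasionally minor) type.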

Usage

In Java, CBOR is supported by Jackson and can be used as if it were JSON. It is available via the following Maven dependency:


<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-cbor</artifactId>
    <version>2.8.4</version>
</dependency>

Wherever you would use an ObjectMapper to work with JSON, just use an ObjectMapper constructed with a CBORFactory instead of the default JsonFactory.


ObjectMapper mapper = new ObjectMapper(new CBORFactory());
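
The round trip is exactly as it would be with JSON; a minimal sketch (the class name and payload are mine):

```java
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.cbor.CBORFactory;

import java.io.IOException;
import java.util.Collections;
import java.util.Map;

public class CborRoundTrip {

    private static final ObjectMapper MAPPER = new ObjectMapper(new CBORFactory());

    // Serialise to CBOR bytes and read them straight back.
    public static Map<String, Object> roundTrip(Map<String, ?> value) throws IOException {
        byte[] bytes = MAPPER.writeValueAsBytes(value);
        return MAPPER.readValue(bytes, new TypeReference<Map<String, Object>>() {});
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip(Collections.singletonMap("answer", 42)));
    }
}
```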

Jackson integrates CBOR into JAX-RS seamlessly via the following dependency:


<dependency>
    <groupId>com.fasterxml.jackson.jaxrs</groupId>
    <artifactId>jackson-jaxrs-cbor-provider</artifactId>
    <version>2.8.4</version>
</dependency>

If a JacksonCBORProvider is registered in a Jersey ResourceConfig (a one-liner), then any resource method annotated with @Produces("application/cbor"), or any HTTP request with the Accept header set to application/cbor, will automatically serialise the response as CBOR.
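
A sketch of that registration, assuming a Jersey 2 ResourceConfig; the package name below is a placeholder:

```java
// Hypothetical bootstrap: scan a placeholder package for resources and
// register the CBOR provider so application/cbor becomes negotiable.
ResourceConfig config = new ResourceConfig()
        .packages("com.example.resources")
        .register(JacksonCBORProvider.class);
```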

Jackson deviates from the specification slightly by promoting floats to doubles: despite parsing floats properly, it post-processes them as doubles. Jackson recognises floats properly as of 2.8.6, and it distinguishes between longs and ints correctly so long as CBORGenerator.Feature.WRITE_MINIMAL_INTS is disabled on the writer.
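
Disabling WRITE_MINIMAL_INTS is done on the factory before constructing the mapper; a sketch (the comparison class is mine):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.cbor.CBORFactory;
import com.fasterxml.jackson.dataformat.cbor.CBORGenerator;

public class IntWidth {

    // Encoded size of a long, with and without minimal-int shrinking.
    public static int encodedSize(long value, boolean minimalInts) throws Exception {
        CBORFactory factory = new CBORFactory();
        if (!minimalInts) {
            factory.disable(CBORGenerator.Feature.WRITE_MINIMAL_INTS);
        }
        return new ObjectMapper(factory).writeValueAsBytes(value).length;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(encodedSize(10L, true));  // shrunk to the smallest encoding
        System.out.println(encodedSize(10L, false)); // width preserved on the wire
    }
}
```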

In JavaScript, cbor.js can be used to deserialise CBOR, though the lack of native browser support for parsing it is a concern. It would be interesting to see benchmarks for typical workloads, weighing the cost of JavaScript parsing against the benefits of reduced server-side generation cost and reduced message size. Again, for large quantities of numeric data this is more likely to be worthwhile than for text.

Comparison with JSON – Message Size

Textual data is slightly smaller when represented as CBOR as opposed to JSON. Given the interoperability that comes with JSON, it is unlikely to be worth using CBOR over JSON for reduced message size.

Large arrays of doubles are a lot smaller in CBOR. Interestingly, large arrays of small integers may actually be smaller as text than as binary: it takes only two bytes to represent 10 as text, whereas it takes four bytes as a fixed-width binary integer. Outside the range -99 to 999 this is no longer true, but it might be a worthwhile economy for large quantities of survey results.
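
This is easy to confirm with Jackson; a sketch, assuming WRITE_MINIMAL_INTS is disabled so integer width is preserved (the class and method names are mine):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.cbor.CBORFactory;
import com.fasterxml.jackson.dataformat.cbor.CBORGenerator;
import java.util.Arrays;

public class SmallIntSizes {

    public static int jsonSize(int[] data) throws Exception {
        return new ObjectMapper().writeValueAsBytes(data).length;
    }

    public static int cborFixedWidthSize(int[] data) throws Exception {
        CBORFactory factory = new CBORFactory();
        factory.disable(CBORGenerator.Feature.WRITE_MINIMAL_INTS); // keep 32-bit width
        return new ObjectMapper(factory).writeValueAsBytes(data).length;
    }

    public static void main(String[] args) throws Exception {
        int[] tens = new int[1000];
        Arrays.fill(tens, 10); // two characters as text, four bytes fixed-width
        System.out.println("JSON: " + jsonSize(tens) + "B, CBOR: " + cborFixedWidthSize(tens) + "B");
    }
}
```

With WRITE_MINIMAL_INTS left enabled, small integers collapse to one or two bytes each and binary wins again.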

Message sizes for JSON and CBOR messages containing mostly textual, mostly integral and mostly floating-point data are compared in a benchmark at github. The output is as follows:

CBOR, Integers: size=15122B
JSON, Integers: size=6132B
CBOR, Doubles: size=27122B
JSON, Doubles: size=54621B
CBOR, Text: size=88229B
JSON, Text: size=116565B

Comparison with JSON – Read/Write Performance

Using Jackson to benchmark the size of the messages is not really a concern, since Jackson implements each specification: the output, and therefore the size, should be the same no matter which library produced the messages. Measuring the read/write performance of a specification is more difficult, because only an implementation can be measured. It may well be that either JSON or CBOR can be read and written faster by another implementation than Jackson's (though I expect Jackson is probably the fastest for either format). In any case, measuring Jackson CBOR against Jackson JSON seems fair. I benchmarked JSON vs CBOR reads and writes using the Jackson implementations of each format and JMH. The code for the benchmark is at github.

The results are below. CBOR has significantly higher throughput for both reads and writes, with the single exception of integer writes, where JSON is marginally faster.

Benchmark            Mode    Count    Score    Error    Units
readDoubleDataCBOR   thrpt       5   12.230   ±1.490   ops/ms
readDoubleDataJSON   thrpt       5    0.913   ±0.046   ops/ms
readIntDataCBOR      thrpt       5   16.033   ±3.185   ops/ms
readIntDataJSON      thrpt       5    8.400   ±1.219   ops/ms
readTextDataCBOR     thrpt       5   15.736   ±3.729   ops/ms
readTextDataJSON     thrpt       5    1.065   ±0.026   ops/ms
writeDoubleDataCBOR  thrpt       5   26.222   ±0.779   ops/ms
writeDoubleDataJSON  thrpt       5    0.930   ±0.022   ops/ms
writeIntDataCBOR     thrpt       5   31.095   ±2.116   ops/ms
writeIntDataJSON     thrpt       5   33.512   ±9.088   ops/ms
writeTextDataCBOR    thrpt       5   31.338   ±4.519   ops/ms
writeTextDataJSON    thrpt       5    1.509   ±0.245   ops/ms
readDoubleDataCBOR   avgt        5    0.078   ±0.003   ms/op
readDoubleDataJSON   avgt        5    1.123   ±0.108   ms/op
readIntDataCBOR      avgt        5    0.062   ±0.008   ms/op
readIntDataJSON      avgt        5    0.113   ±0.012   ms/op
readTextDataCBOR     avgt        5    0.058   ±0.007   ms/op
readTextDataJSON     avgt        5    0.913   ±0.240   ms/op
writeDoubleDataCBOR  avgt        5    0.038   ±0.004   ms/op
writeDoubleDataJSON  avgt        5    1.100   ±0.059   ms/op
writeIntDataCBOR     avgt        5    0.031   ±0.002   ms/op
writeIntDataJSON     avgt        5    0.029   ±0.004   ms/op
writeTextDataCBOR    avgt        5    0.032   ±0.003   ms/op
writeTextDataJSON    avgt        5    0.676   ±0.044   ms/op

The varying performance characteristics of media types/serialisation formats based on the predominant data type in a message make proper HTTP content negotiation important. It cannot be known in advance when writing a server application what the best content type is, and it should be left open to the client to decide.
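
For example, a resource can offer both representations and let negotiation pick; a hypothetical resource class:

```java
// Hypothetical JAX-RS resource: the runtime selects the JSON or CBOR
// writer according to the client's Accept header.
@Path("/measurements")
public class MeasurementResource {

    @GET
    @Produces({"application/json", "application/cbor"})
    public double[] measurements() {
        return new double[] {1.0, 2.5, 3.7};
    }
}
```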

HBase Connection Management

I have built several web applications recently using Apache HBase as a backend data store. This article covers some of the design concerns, and the approaches I took, in managing HBase connections efficiently.

One of the first things I noticed about the HBase client API was how long it takes to create a connection. HBase connection creation is effectively ZooKeeper-based service discovery; the end result is a client which knows where all the region servers are, and which region server is serving which key space. This operation is expensive and needs to be minimised.

At first I created the connection only once, when the web application started. This is very simple and is fine for most use cases.


public static void main(String[] args) throws Exception {
    Configuration configuration = HBaseConfiguration.create();
    Connection connection = ConnectionFactory.createConnection(configuration);
    // share this single connection for the lifetime of the application
}

This approach is great unless you are required to proxy your end user when querying HBase. If Apache Ranger is enabled on your HBase cluster, proxying your users allows it to apply user-specific authorisation to the query, rather than to your web application's service user. This imposes a few constraints, the most relevant being that you need a connection per user, so you can no longer connect just once at application start-up.

Proxy Users

I needed to proxy users and minimise connection creation, so I built a connection pool class which, given a user principal, creates a connection as the user. I used Guava’s loading cache to handle cache eviction and concurrency. Guava’s cache also has a very useful eviction listener, which allows the connection to be closed when evicted from the cache.

In order to get user proxying working, the UserGroupInformation for the web application's own service principal is required (see here), and you need to have successfully authenticated your end user (I used SPNego to do this). UserGroupInformation.createProxyUser then creates the proxy user, which the HBase class UserProvider wraps into a User for the connection. Your web application service principal also needs to be configured as a proxying user in core-site.xml, which you can manage via tools like Ambari.
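
The core-site.xml entries look something like the following, where "webapp" stands in for your service user name:

```xml
<!-- Allow the hypothetical service user "webapp" to impersonate end users -->
<property>
  <name>hadoop.proxyuser.webapp.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.webapp.groups</name>
  <value>*</value>
</property>
```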


public class ConnectionPool implements Closeable {

  private static final Logger LOGGER = LoggerFactory.getLogger(ConnectionPool.class);
  private final Configuration configuration;
  private final LoadingCache<String, Connection> cache;
  private final ExecutorService threadPool;
  private final UserProvider userProvider;
  private volatile boolean closed = false;
  private final UserGroupInformation loginUser;

  public ConnectionPool(Configuration configuration, UserGroupInformation loginUser) {
    this.loginUser = loginUser;
    this.configuration = configuration;
    this.userProvider = UserProvider.instantiate(configuration);
    this.threadPool = Executors.newFixedThreadPool(50, new ThreadFactoryBuilder().setNameFormat("hbase-client-connection-pool-%d").build());
    this.cache = createCache();
  }

  public Connection getConnection(Principal principal) throws IOException {
    return cache.getUnchecked(principal.getName());
  }

  @Override
  public void close() throws IOException {
    if(!closed) {
      closed = true;
      cache.invalidateAll();
      cache.cleanUp();
      threadPool.shutdown();
    }
  }

  private Connection createConnection(String userName) throws IOException {
      UserGroupInformation proxyUserGroupInformation = UserGroupInformation.createProxyUser(userName, loginUser);
      return ConnectionFactory.createConnection(configuration, threadPool, userProvider.create(proxyUserGroupInformation));
  }

  private LoadingCache<String, Connection> createCache() {
    return CacheBuilder.newBuilder()
            .expireAfterAccess(10, TimeUnit.MINUTES)
            .<String, Connection>removalListener(eviction -> {
              Connection connection = eviction.getValue();
              if(null != connection) {
                try {
                  connection.close();
                } catch (IOException e) {
                  LOGGER.error("Connection could not be closed for user=" + eviction.getKey(), e);
                }
              }
            })
            .maximumSize(100)
            .build(new CacheLoader<String, Connection>() {
              @Override
              public Connection load(String userName) throws Exception {
                LOGGER.info("Create connection for user={}", userName);
                return createConnection(userName);
              }
            });
  }
}
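
Usage then looks something like the sketch below. The pool should be application-scoped, with only getConnection on the request path; "alice" and the table name are invented for illustration:

```java
// Hypothetical wiring: loginUser comes from a keytab/SPNego login at startup.
Configuration configuration = HBaseConfiguration.create();
UserGroupInformation loginUser = UserGroupInformation.getLoginUser();
try (ConnectionPool pool = new ConnectionPool(configuration, loginUser)) {
    Connection connection = pool.getConnection(() -> "alice"); // Principal is a SAM type
    try (Table table = connection.getTable(TableName.valueOf("invented_table"))) {
        // queries here run with alice's authorisation, not the service user's
    }
}
```

Note that the connection itself is not closed here; the cache's removal listener closes it on eviction.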

One drawback of this approach is that a user experiences a slow connection the first time they query the server, or any time after their connection has been evicted from the cache. They will also observe a lag if you are sharding your application behind a load balancer without sticky sessions: with a round-robin strategy, connection creation costs will be incurred whenever a new instance/user combination is routed.