A primary key for each row of a table in a database is virtually a requirement of database design. Occasionally, the data for a table provides a natural primary key (e.g., username or email for an account table), but more commonly one needs to generate primary key values. Yet the tools for this in MySQL/Java are limited. MySQL offers auto_increment, but it has issues with replication, can become a bottleneck for insert-heavy tables, doesn't provide globally unique ids, and displaying these ids publicly may expose sensitive information. Java offers java.util.UUID, which gives pseudo-random 128-bit values. The chance of a collision is minuscule, but non-zero. More troubling is the size of the string representation: 36 characters. Since InnoDB uses the primary key index as the storage structure for the data and uses primary keys as row pointers in secondary indexes, long keys not only waste space but also make the database less efficient.
After evaluating these options and a few ideas of our own for primary key generation, we settled on a simple algorithm motivated by group theory. The advantages of this algorithm are numerous:
- Short Keys (6 characters yield 57 billion unique keys using only alphanumeric characters)
- Universal Uniqueness (no guessing to which table a key value refers)
- Pseudo-randomness (keys don’t follow an obvious pattern)
- No Duplicate-Checking (keys are guaranteed to be unique until a limit is reached)
- Block Generation (keys are generated in blocks to minimize lock contention)
Our generator uses one tiny bit of group theory: if k and n are coprime (a.k.a. relatively prime), the sequence of numbers generated by successively adding k (mod n) will not repeat through the first n values. This leads to the following algorithm for generating unique keys:
- Pick a size n
- Pick a value k which is coprime with n
- To generate the next key: nextKey = (lastKey + k) % n
You're guaranteed not to see duplicates until you've generated n keys. The sequence you'd see with n=5 and k=3 is { 0, 3, 1, 4, 2, 0, 3, … }.
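The example above is easy to verify in code. Here's a minimal sketch of the generation step (the class and method names are ours, not part of any library):

```java
import java.util.ArrayList;
import java.util.List;

public class KeySequenceDemo {
    /** Generate the first 'count' keys of the additive sequence mod n. */
    static List<Long> sequence(long n, long k, int count) {
        List<Long> keys = new ArrayList<>();
        long key = 0;
        for (int i = 0; i < count; i++) {
            keys.add(key);
            key = (key + k) % n;  // the entire key-generation step
        }
        return keys;
    }

    public static void main(String[] args) {
        // n=5, k=3 from the example: visits every residue before repeating
        System.out.println(sequence(5, 3, 7)); // [0, 3, 1, 4, 2, 0, 3]
    }
}
```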
Note that the choices of n and k are quite important—they must be fixed and can never change. However, selecting reasonable values is not difficult. For n, select a character set and string length, then set n to be the number of possible unique strings. To get the 57 billion value above, use a string length of 6 and a character set of [0-9a-zA-Z] (62 characters); 57 billion is simply the number of unique 6-character alphanumeric strings (62^6). If you grow to the point that you are worried about key collisions, switch to 7-character strings (where n=62^7, approximately 3.5 trillion). Note that converting a key's number value to its string value is simply a conversion from base 10 to base 62 (or to whatever number of characters you are using).
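That base conversion might look like the following. This is a sketch, and it assumes a particular digit ordering (0-9, then a-z, then A-Z) that the algorithm itself does not dictate—any fixed ordering works, as long as it never changes:

```java
public class Base62 {
    // Assumed digit ordering; any fixed ordering of the 62 characters works.
    static final String ALPHABET =
        "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

    /** Encode a non-negative key as a fixed-width base-62 string. */
    static String encode(long key, int width) {
        char[] out = new char[width];
        for (int i = width - 1; i >= 0; i--) {
            out[i] = ALPHABET.charAt((int) (key % 62)); // least significant digit last
            key /= 62;
        }
        return new String(out);
    }

    public static void main(String[] args) {
        System.out.println(Base62.encode(62, 6)); // 000010
    }
}
```

Fixing the width at 6 keeps every key the same length, which is convenient for storage and display.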
For k, we need a value that is coprime with n. To achieve pseudo-randomness, k should also not be too small (the same order of magnitude as n is a good choice). Note that this "randomness" is quite weak in a mathematical sense, but was sufficient for our purposes. One way to select such a k is to multiply together prime numbers larger than the character set size. For our example, a reasonable choice would be k=67*71*73*79*83*89. If you don't have your own prime number generator, consult the bear.
To put this algorithm into practice, one needs to ensure that keys are generated serially. We did this by creating a table with a single row and a single column storing the last key value. When we want to generate a key (or block of keys), we start a SERIALIZABLE transaction, read the last key value, generate key(s) per the above algorithm, then write back the last key value we generated and commit the transaction. To minimize contention—and since computing the next key is much faster than running a transaction—we generate keys in blocks and serve them out of memory via a synchronized HashMap. This causes key values to occasionally be permanently lost when a webapp is shut down, but the loss is too small to be of any real concern.
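The block-serving scheme might be sketched as follows. This is only an illustration, not our production code: the SERIALIZABLE read/write of the one-row table is replaced by an in-memory field (marked in a comment), we use a simple Deque rather than the HashMap mentioned above purely for brevity, and all names are ours:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class KeyBlockGenerator {
    private final long n, k;
    private final int blockSize;
    private long lastKey;  // stand-in for the single-row, single-column table
    private final Deque<Long> block = new ArrayDeque<>();

    KeyBlockGenerator(long n, long k, int blockSize, long lastKey) {
        this.n = n;
        this.k = k;
        this.blockSize = blockSize;
        this.lastKey = lastKey;
    }

    /** Serve one key from memory, refilling the block when it runs dry. */
    synchronized long nextKey() {
        if (block.isEmpty()) {
            // In production this refill happens inside a SERIALIZABLE
            // transaction: read lastKey, generate the block, write back.
            for (int i = 0; i < blockSize; i++) {
                lastKey = (lastKey + k) % n;
                block.add(lastKey);
            }
        }
        return block.poll();
    }
}
```

Keys left in the in-memory block when the process dies are simply skipped, which matches the small, acceptable loss described above.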
We’ve been using this system for many months now and have yet to run into any problems. It satisfies all of our current needs and has the advantage that it can easily scale either by using longer character strings or increasing the key generation block size. Furthermore, it seems to be extremely lightweight, exerting minimal pressure on our database. We would love to hear what other solutions for primary key generation are used. How does ours compare?