
Reading and writing are less symmetric than you (probably) think

I am talking about the I/O operations as used in computing here. A typical example of how this kind of thing is exposed is the pair of POSIX syscalls read(2) and write(2), which have the following C function prototypes:

ssize_t read(int fd, void *buf, size_t count);
ssize_t write(int fd, const void *buf, size_t count); 

Now these are raw system calls; user programs can use them directly, but they usually don’t. They normally go through some buffered IO layer; in the C standard library, this means FILE* and the functions fread and fwrite, which split count into a product of two values in a vestigial nod to record-based IO but are otherwise equivalent. For concreteness, suppose we’re interfacing with actual storage (i.e. not a pipe, socket, virtual filesystem etc.). Then conceptually, a “read”-class operation (like read or fread) grabs bytes from a file, say on a disk somewhere, and puts them into the specified memory buffer, and a “write”-class operation takes bytes in a memory buffer and writes them to the disk. Which definitely sounds nice and symmetric—but there are some important behavioral asymmetries between them, especially when errors are in the mix. The reasons have to do with buffering.

Buffered I/O

In general, file I/O operations in your program will not go directly to a storage device; data instead makes its way through several buffering layers (most of which can be disabled using various flags, but in normal usage these layers are on). These layers are there for good reason: on the kernel side, there’s what’s traditionally called the “buffer cache”. Storage devices are “block devices”, which means they store data in blocks. The block size depends on the device: on old hard disks it used to be 512 bytes; CDs, DVDs etc. tend to use 2k blocks; newer storage devices are on 4k blocks. Block devices only read entire blocks at a time; that means random byte-aligned IO requests such as “read 100 bytes from disk at byte offset 1234567” or “write 2000 bytes to location 987654” can’t be directly passed to the device at all. The buffer cache is used to translate these requests into block-aligned read and write operations that the device understands; non-block-aligned writes also require reading the previous contents of the block that are not overwritten, and those go in the buffer cache as well. And of course, as the name suggests, it acts as a cache.

On the user-space side, we also have buffers, albeit for a different reason: read and write are system calls, and as such incur a transition to kernel space and back. They also need to check for and report errors every time they are invoked. And of course they actually need to do the work we want them to do – copy the data from (read) or to (write) the buffer cache. System call overhead varies between OSes, but it’s safe to assume that the whole process takes at least a couple hundred clock cycles in the best case. So for the overhead not to completely dominate the actual work being done, you generally want to be reading or writing at least a few kilobytes at a time. For scale reference, typical IO buffer sizes as of this writing are 4096 bytes (e.g. Visual C++ 2013 FILE*, Go bufio.Reader/bufio.Writer) or 8192 bytes (e.g. GNU libc FILE*, Java BufferedReader/BufferedWriter).

Often there are more buffers too. For example, most hard drives and RAID controllers have their own caches, and it is not uncommon for user-space code to have several layers of buffering for various reasons. But this is enough to illustrate the basic structure.

All of these buffers are used in much the same way for reading and writing. So where’s the behavioral asymmetry between reading and writing that I’m talking about? You need to think about the state of the world (so to speak) after a read-type call and how it differs from the state of the world after a write-type call.

What happens when you issue an IO operation

Let’s look at what goes into servicing a read-type call first: say you open a C FILE* and want to read the first 100 bytes via fread. The C standard I/O library notices that its buffers are currently empty, and tries to fill them up, issuing a read system call to read say 4k worth of data. The kernel in turn asks the file system where the data for the first 4k of the file is located, checks the buffer cache to see if it already has a copy in memory, and if not, it issues a block read command to the storage device. Either way, the kernel makes sure to get those 4k of data into the buffer cache and from there copies them into the standard IO buffers in user-space memory, then returns. The standard IO library looks at the result of the system call, updates the state of its IO buffers, and then copies the 100 requested bytes into the memory buffer the app supplied.

And what if anything goes wrong? Say the file is smaller than 100 bytes, or there was an error reading from disk, or the file is on a network file system that’s currently inaccessible. Well, if that happens, we catch it too: if something goes wrong filling up the buffer cache, the kernel notices and returns an appropriate error to the I/O library, which can in turn pass errors on to the app. Either way, anything that can go wrong will go wrong before the fread call returns. All the intervening layers need to do is make sure to keep the error information around so it can be passed to the app at the appropriate time.
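To make that concrete, here’s a minimal sketch (the names and the fixed 100-byte read are just for illustration) of what checking the read side looks like; by the time fread returns, any error has already happened and is recorded in the FILE’s error state:

#include <stdio.h>

// Hedged sketch: read the first 100 bytes of a file and check for errors.
static int read_first_100(const char *path, unsigned char *buf)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;

    size_t got = fread(buf, 1, 100, f);
    if (got != 100 && ferror(f)) {  // short read: EOF is fine, an I/O error is not
        fclose(f);
        return -1;
    }
    fclose(f);
    return (int)got;  // number of bytes actually read
}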

Now let’s go the other way round: let’s open a fresh new file with a 4k write buffer[1] and issue a 100-byte fwrite. This time, the IO library copies the 100 bytes from the app buffer to the write buffer… and immediately returns, reporting success. The underlying write system call will not be executed until either the buffer fills up or is flushed as a result of calling fflush, fseek, fclose or similar.

Quick imaginary show of hands: who reading this habitually checks return codes of fread or fwrite at all? Of those saying “yes”, who also remembers to check return codes of fflush, fseek or fclose? Probably not a lot. Well, if you don’t, you’re not actually checking whether your writes succeeded at all. And while these remarks are C-specific, this general pattern holds for all buffered writer implementations. Buffered writing delays making the actual write system call; that’s kind of the point. But it implies that error reporting is delayed too!
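For illustration, a minimal sketch of what “actually checking” looks like with stdio; the fflush and fclose checks are the ones that catch errors from the delayed write calls:

#include <stdio.h>

// Hedged sketch: write a buffer through stdio and actually notice failures.
static int write_with_checks(const char *path, const void *data, size_t size)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;

    int ok = (fwrite(data, 1, size, f) == size); // may only fill the FILE buffer
    ok &= (fflush(f) == 0);  // this is where the buffered write(2) calls happen
    ok &= (fclose(f) == 0);  // the final flush can fail here too
    return ok ? 0 : -1;
}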

More buffers

This type of problem is not restricted to user-space buffering either. The implementation of write itself has similar issues: generally, after a successful write call, your data made it to the buffer cache, but it hasn’t hit actual storage yet. The kernel will make its best effort to write that data to storage eventually (hopefully within the next few seconds), but if there’s a device error or a system crash, that data could still be lost. Both of these are relatively rare these days, so we don’t worry about them too much, right? Except for those of us who do.

Oh, and while write will go to some lengths to make sure there are no nasty surprises when writing to local filesystems (for example, even with delayed write-back, you want to make sure to reserve free space on the disk early[2], lest you run out during write-back), at least on POSIX systems there can still be write errors that you only get notified about on close, especially when network filesystems such as NFS or SMB/CIFS are in play (I’m not aware of any such late-reported error conditions on Windows, but that doesn’t mean there aren’t any). Something to be aware of: if you’re using these system calls and are not checking the return code of close, you might be missing errors.

Which brings up another point: even on local file systems, you only have the guarantee that the data made it to the buffer cache. It hasn’t necessarily made it to the storage device yet! If you want that (for example, you’ve just finished writing some important data and want to make sure it actually made it all the way), you need to call fsync[3] on the file descriptor before you close it. The Windows equivalent is FlushFileBuffers.
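Putting that together, here’s a hedged sketch of the raw-syscall version (short-write handling and retries omitted for brevity):

#include <fcntl.h>
#include <unistd.h>

// Hedged sketch: write, then fsync, then close, checking each step.
static int write_durable(const char *path, const void *buf, size_t size)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0666);
    if (fd < 0)
        return -1;

    ssize_t n = write(fd, buf, size);   // a real version loops on short writes
    int ok = (n >= 0 && (size_t)n == size);
    if (ok)
        ok = (fsync(fd) == 0);          // push the data out of the buffer cache
    if (close(fd) != 0)                 // some errors only surface here (NFS!)
        ok = 0;
    return ok ? 0 : -1;
}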

So, if you make sure to check error codes on every write, and you fsync before you close (again checking errors), that means that once you’ve done all that, you’re safe and the data has successfully made it to permanent storage, right?

Well, two final wrinkles. First, RAID controllers and storage devices themselves have caches too. They’re supposed to have enough capacitors so that if the system suddenly loses power, they still have sufficient power to actually get your data written safely. Hopefully that’s actually true. Good luck. Second, the data may have made it to storage, but that doesn’t necessarily mean it’s actually visible, because the metadata necessary to reach it might not have been written yet. Quoting the Linux man page on fsync(2):

Calling fsync() does not necessarily ensure that the entry in the directory containing the file has also reached disk. For that an explicit fsync() on a file descriptor for the directory is also needed.

For better or for worse, I can’t recall ever seeing code doing this in the wild, though. I’m honestly not sure what the actual guarantees are that popular Linux file systems provide about these things. If you’re handling really really important data, you should probably find out.
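For reference, a sketch of what that directory fsync looks like on POSIX systems (the path is a placeholder, and error handling is omitted):

#include <fcntl.h>
#include <unistd.h>

// Hedged sketch: after fsync()ing the file itself, fsync the directory
// containing it so the new directory entry is on disk too.
static void fsync_directory(const char *dirpath)
{
    int dirfd = open(dirpath, O_RDONLY);
    if (dirfd >= 0) {
        fsync(dirfd);   // ideally, check this return value as well
        close(dirfd);
    }
}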

Conclusion and summary

Buffering on the read side is great and pretty much transparent because if anything goes wrong, it will go wrong before you ever get to see the data, and you’ll get a proper error code.

Buffering on the write side is much trickier because it delays actual writing and error reporting in ways that most programmers are supposed to be aware of, but usually aren’t. Few are aware of the actual steps necessary to ensure that data made it to storage safely, and some of the OS abstractions involved don’t exactly make things easier (see the fsync quote above). Here be dragons.


[1] Full buffering, not line buffering mode, in case anyone’s feeling nit-picky.
[2] Actual block allocation—as in, selecting which physical location on the device file writes will end up in—is often delayed in modern file systems, to make it easier to assign mostly-contiguous space to large files where possible. However, even with delayed allocation, you want to keep track of how much space is going to be available on the device once all currently pending writes complete, so that you can return “out of disk space” errors appropriately instead of discovering that you’re out of space 10 seconds after the user exited the app he was using to edit and save his Important Document. Because that would be bad. This sounds as though it’s just a matter of accounting, but it gets tricky with file systems that use extents and not bitmap-based block allocation: getting the last few discontinuous blocks on the device means that you might need extra space to store the file extents! All of which is to say: this stuff is tricky to get right.
[3] Yes, the name looks like it’s part of the C library buffered IO package, but it’s a proper syscall.

Intervals in modular arithmetic

I wrote about regular interval overlap checking before. Let’s consider a somewhat trickier case, where our intervals are not defined over the real numbers or regular integers, but are instead subsets of the integers modulo N (for some fixed N). In this post, I’ll just consider the math; but this is something I’ve used in several places, so I expect I’ll be writing about some concrete uses eventually. Throughout this document, I’ll be writing \mathbb{Z}_N for the set \mathbb{Z} / N\mathbb{Z} of integers mod N (feel free to ignore the notation if you’re not familiar with it).

Intervals mod N, first attempt

The first thing we need to do is agree on what we mean by an interval mod N. Suppose that N=16. In that setting, the meaning of the interval [5,7] is pretty clear: the set {5, 6, 7}. And we can also take something like the integer interval [14,17] representing the set {14, 15, 16, 17} and reduce it mod N to get {14, 15, 0, 1}. But doing this required us to use values outside of {0, …, N-1} to specify the end points. This is often a useful model – I talk about some advantages when doing this for ring buffers in this old post – but it’s a bit of a cheat; we’d like a way to specify intervals mod N that don’t require us to leave the set of integers mod N to handle the “wraparound” case.

This means that we want to define something like the interval [14,1]. For the regular integers, this would be the empty set; but mod N, it makes more sense to properly wrap around (for example, when you say “the trains don’t run from 11:30pm to 5:30am”, it’s understood that you mean 5:30am the next day, not an empty interval). One way to do this is by handling the wrap-around case separately. Assuming a, b are reduced mod N, we could for example define:

[a,b] \mod N := \begin{cases}  \{ a, a + 1, \hdots, b \} & \mbox{if } a \le b \\  \{ a, a + 1, \hdots, N - 1 \} \cup \{ 0, 1, \hdots, b \} & \mbox{if } a > b  \end{cases}
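A direct transcription of this two-case definition into code might look like the sketch below (assuming x, a and b are already reduced mod N); keep it in mind as a baseline for the nicer version we’ll end up with later.

// x in [a,b] (mod N), straight from the case distinction above.
// All arguments assumed to be already reduced to 0 <= x, a, b < N.
static bool point_in_closed_interval_naive(int x, int a, int b)
{
    if (a <= b)
        return a <= x && x <= b;   // ordinary interval
    else
        return x >= a || x <= b;   // wrap-around case
}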

This works, but it’s messy, and chopping up our single connected interval mod N into two pieces for the wrap-around case feels like it’s missing something fundamental about arithmetic mod N. We’re thinking in terms of a number line:

Interval [14,1] on the number line

But the integers mod N aren’t very “line-like” at all, and they’re commonly (and more appropriately) drawn as a circle (like a clock, with 0 at the top and numbers increasing clockwise, to maximize familiarity—apologies to all mathematicians who expect positive angles to move counter-clockwise).

Interval [14,1] on a circle

And in this visualization, there is absolutely nothing special about our interval [14,1]. We happen to pass through 0, but 0 isn’t actually special. It’s just another point on the circle. The reason 0 becomes special when we think in terms of regular integers (and hence a number line) is that it’s usually the place where we decide to cut open the circle so we can flatten it into a line. But that has nothing to do with the circle (or the integers mod N); it’s an incidental artifact of the representation we’re picking.

A different approach

The key problem here is that intervals are normally defined in terms of ordering relationships. For example, real-number intervals are usually defined as sets

[a,b] = \{ x \in \mathbb{R} | a \le x \le b \}

based on a total order “≤”. But in our circle, we don’t have a useful ordering relation. When you write “4 < 7”, this means that 4 is to the left of 7 on the real number line. If you start at 4 and keep walking right, you’re gonna come by 7 eventually. If you instead were to walk left, the numbers would just keep getting smaller indefinitely.

On our circle mod N, this is not true. If you start at 4 and keep walking in the positive direction (clockwise in our case), you’ll reach 7 after three steps. If you instead walk counterclockwise from 4, you still reach 7 – this time after taking the long way round, in thirteen steps. This is true for any pair of numbers on the circle, which after all represent congruence classes of integers mod N (if you don’t know the terminology, just ignore the rest of this paragraph). It makes sense to say that the integer 4 is less than the integer 7, but the set \bar{4} = 4 + N\mathbb{Z} = \{ 4 + kN | k \in \mathbb{Z} \} is not in any useful sense “less” or “greater” than the set \bar{7} = 7 + N\mathbb{Z}. Viewed in that light, the definition for [a,b] mod N given above is outright weird; the case distinction between “a ≤ b” and “a > b” is testing a condition that doesn’t really make sense in terms of the concepts being expressed.

So ordering relationships are on shaky footing. We can, however, usefully talk about distances. For example, “4” and “7” are definitely three steps apart when we take the shortest path, also three steps apart when we’re only allowed to move clockwise, and thirteen steps apart when we’re only allowed to move counterclockwise. In general, we can define distance functions for the “increasing” (clockwise), “decreasing” (counter-clockwise) and shortest-path distances between two points (we won’t actually be using that latter one, I just mention it for completeness):

d^{\,+}(a,b) := (b-a) \bmod N
d^{\,-}(a,b) := (a-b) \bmod N
d(a, b) := \min(d^{\,+}(a,b), d^{\,-}(a,b))

These distance functions are, technically speaking, mappings d : \mathbb{Z}_N \times \mathbb{Z}_N \rightarrow \mathbb{N}_0 from the integers mod N to the natural numbers (hence the explicit use of mod as a binary operation). The distances are regular non-negative integers; while points in the integers mod N can’t be meaningfully compared, distances can. That’s why they’re useful to us. Also note that the “mod N” here is the modulus under Euclidean/floor division – that is, a non-negative value smaller than N.

Most programming languages use truncating division instead, which means the modulus has absolute value less than N but might be negative; you need to consider this when turning any of the equations in here into code!
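For example, one way to get the Euclidean remainder in C is the following sketch (N assumed to be a positive compile-time constant):

// Euclidean (always non-negative) remainder; C's "%" truncates toward zero,
// so negative inputs need a fix-up. N is the modulus, e.g. enum { N = 16 }.
static int modN(int x)
{
    int r = x % N;
    return (r < 0) ? r + N : r;
}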

Given all this, let’s go back to our problem of defining intervals nicely. Well, how do we draw something like the interval [14,1] on our circle? We just move the pen to “14” and start drawing a clockwise arc until we hit the “1”. We can use that exact idea to define intervals: namely, a point x is inside the interval [a,b] if, starting from a and walking in increasing order (clockwise), we hit x before we leave the interval. That leads to our improved definition of a closed interval mod N:

[a,b] \pmod N := \{ x \in \mathbb{Z}_N | \;d^{\,+}(a,x) \le d^{\,+}(a,b) \}
or the equivalent
[a,b] \pmod N := \{ x \in \mathbb{Z}_N | \;d^{\,-}(b,x) \le d^{\,-}(b,a) \}

and the generalizations to half-open intervals are straightforward:

[a,b) \pmod N := \{ x \in \mathbb{Z}_N | \;d^{\,+}(a,x) < d^{\,+}(a,b) \}
(a,b] \pmod N := \{ x \in \mathbb{Z}_N | \;d^{\,-}(b,x) < d^{\,-}(b,a) \}

Things are simpler if we always start measuring from the closed (inclusive) end, so that’s what we do. I’ll drop the (mod N) for the rest of the article; we know that’s our setting.

Point-in-interval tests and symmetry

This definition can be turned into code immediately and leads to fairly elegant point-in-interval tests that don’t break down into multiple cases:

// modN(x) is assumed to calculate Euclidean (=non-negative) x % N.

// x in [a,b] (mod N)?
static bool point_in_closed_interval(int x, int a, int b)
{
    return modN(x - a) <= modN(b - a);
}

// x in [a,b) (mod N)?
static bool point_in_half_open_interval(int x, int a, int b)
{
    return modN(x - a) < modN(b - a);
}

At this point, you should also be noticing something funny about that code (it’s a bit harder to see in the definitions above since the detour through the distance functions obscures the actual computation going on); namely, the fact that we’re subtracting a and then reducing mod N on both sides.

It’s confession time. This whole notion with measuring distances along the circle is not how I derived this; it’s the best way I know to think about this problem, but that knowledge came in retrospect. I got here the long way round, with several false starts. The key idea turned out to be thinking about symmetries, which is always worth doing if the problem you’re working on has any; see for example this post I wrote 6 years ago!

In this case, the integers mod N are a cyclic group—they wrap around. That’s why it makes sense to draw them as a circle. And that’s why being attached to any particular point being 0 is a bit silly: a circle has continuous rotational symmetry, and our discrete cyclic group has N-fold rotational symmetry. We get the same image if we rotate by an N’th of a full turn. And going back to our setting, since we really only care about distances, we can cyclically rotate our points around any way we want without changing the results.

What these tests really do is exploit this symmetry, “translating” (or more appropriately, rotating) everything by -a. This turns testing x against the interval [a,b] (or the half-open variants) into testing (x-a) \bmod N against [0,(b-a) \bmod N). Once we move the start point of the interval to 0, we don’t have to worry about wrap-around happening in inconvenient places anymore; comparing the integers directly works fine. Score one for symmetry.

Interval overlap

This takes care of points. Now for something trickier: how do we test for interval overlap?

The standard tests for interval overlap are slick, but not really applicable in our situation: the center-extent trick actually generalizes just fine to integers mod N and much more general settings (it works in arbitrary metric spaces provided the sets in question can be expressed as balls in the target metric) but is not ideal in a discrete setting, and the direct tests rely heavily on an order structure we don’t have in our cyclic world.

But what we do have are reasonably simple point-in-interval tests. Can we build an interval overlap test out of them? Well, we can try. Suppose we have two intervals [a,b] and [c,d]. Surely, if the “left” endpoint c of [c,d] falls inside [a,b], the two intervals overlap – after all, we know a point that is in both of them! And by symmetry, swapping the roles of the two intervals, if a falls inside [c,d], we again must have overlap.

If you start drawing a few pictures of intervals in various configurations, you’ll soon notice that testing both of these conditions seems to detect all configurations where the intervals actually overlap (and this works with intervals in the real numbers as well as in our discrete setting). The question is, can we prove that testing these two points is sufficient, or are we merely lucky? It turns out we didn’t just luck out; here’s a quick proof:

Lemma: ([a,b] \cap [c,d]) \ne \emptyset \Leftrightarrow (c \in [a,b]) or (a \in [c,d]). In words, the two (non-empty) intervals overlap if and only if at least one of a or c is inside the respective other interval.
Proof: “\Leftarrow”: c \in [c,d], so if we also have c \in [a,b], then that gives us a point in the intersection of [a,b] and [c,d], which therefore can’t be empty. Likewise with a. Therefore, if either of the two conditions on the right-hand side holds, we indeed have a non-empty intersection.
“\Rightarrow”: The intersection isn’t empty, so take x \in ([a,b] \cap [c,d]). x is in both intervals. Informally, we now “slide” x to the “left” (in negative direction) through the intersection until we hit either of the interval end points a or c. Formally, consider the distances from the interval start points to x: d_a := d^{\,+}(a,x) and d_c := d^{\,+}(c,x). Suppose that d_a \le d_c. Then we have

0 \le d^{\,+}(c,a) = a - c = a - x + x - c
= (x - c) - (x - a) = d_c - d_a \le d_c \le d^{\,+}(c,d)

In words, a’s distance from c, in positive direction, is no more than x’s; since x already was inside of [c,d], surely a must be too. If instead d_a > d_c, we learn that c is inside [a,b]. In either case, this proves the claim.

This takes care of closed intervals. Note that this proof leans heavily on the intervals in question being non-empty. We can readily adapt it to half-open intervals of the form [a,b), but we do need to make sure to catch either interval being empty first, in which case the intersection of intervals is necessarily empty too. Likewise, you can also easily adapt it to half-open intervals of type (a,b], but in this case you want to be using the “right” end points of intervals for testing, not the left end points.

This may all sound complicated, but the implementation is actually quite short and simple:

// do [a,b] and [c,d] overlap?
static bool closed_intervals_overlap(int a, int b, int c, int d)
{
    return modN(c - a) <= modN(b - a) || modN(a - c) <= modN(d - c);
}

// do [a,b) and [c,d) overlap?
static bool half_open_intervals_overlap(int a, int b, int c, int d)
{
    int w0 = modN(b - a);
    int w1 = modN(d - c);

    return (w1 != 0 && modN(c - a) < w0) ||
           (w0 != 0 && modN(a - c) < w1);
}

And there we go. Interval overlap tests mod N.

Implementation notes and variations

The most common case in systems programming involves power-of-2 N. In that scenario, modN is readily implemented via bit masking as a single binary AND operation. If N is 2^32 or 2^64, 32- or 64-bit unsigned integers can (and should) be used directly, in which case there is no need for explicit masking in the code (although on some 64-bit architectures, unsigned 32-bit arithmetic compiles into 64-bit arithmetic with masking anyway). This is one of the rare cases where unsigned integer overflow works exactly the way we need; you want to be using unsigned integers throughout here.
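As a sketch of that power-of-2 case: with N = 2^32 and unsigned 32-bit arithmetic, the reduction happens implicitly through wrap-around, and the closed-interval test collapses to a single compare:

#include <stdbool.h>
#include <stdint.h>

// x in [a,b] (mod 2^32)? Unsigned wrap-around does the "mod N" for us.
static bool point_in_closed_interval_u32(uint32_t x, uint32_t a, uint32_t b)
{
    return (x - a) <= (b - a);
}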

As presented, we’re working with intervals given in terms of two end points, because that’s the most common presentation. But computationally, all of the functions in the code shown actually use a single endpoint (on the “closed” end) along with the width of the interval. That’s the modN(b - a) and modN(d - c) terms we keep computing. So if you’re working a lot with intervals mod N, or storing them, you probably want to consider working in that representation directly.
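If you go that route, the representation might look like this sketch (the names are mine, not from any particular codebase); note that storing the width as a plain integer also ties into the next paragraph:

// Half-open interval [start, start+len) mod N, stored as start plus width.
// len is a plain integer in [0, N], so a full interval is representable.
struct interval_modN {
    int start;
    int len;
};

static bool interval_contains(struct interval_modN iv, int x)
{
    return modN(x - iv.start) < iv.len;
}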

The intervals in this article are intentionally defined with their endpoints coming from \mathbb{Z}_N, to force us to think about the effects of the cyclical wrap-around in the integers mod N cleanly. Now that we’ve spent some time thinking it through, we can relax that requirement. In particular, when using ring buffers with what I call the “virtual stream” model (read/write cursors not reduced mod N), it can make sense to just not reduce the interval lengths mod N at all—that is, turn occurrences of modN(b - a) and modN(d - c) in the code into plain b - a and d - c, respectively, or reduce with respect to a modulus that’s a larger multiple of N. Among other things, this allows us to have half-open intervals covering the entirety of \mathbb{Z}_N, something the fully reduced variant cannot easily do.

And as a final closing remark, this article comes in at 2500 words to explain less than 20 lines of code doing nothing but straightforward integer arithmetic. That has to be some sort of personal record. I’m not sure if that’s good or bad.

The Modulith

Much has been written about all the myriad ways to go wrong when writing software. Poor management; scope creep; too little structure, not modular enough, and it’s a “big ball of mud”. Too much (or too rigid) and it’s a “software crystal”, impossible to alter. And so on.

Suppose you get all that right, and actually ship a useful system to users: it solves their problems well enough, and the code is reasonably clean, has a sound design and a modular structure with interfaces that, while not perfect, work okay. That’s about as good as it gets! Well done.

Alas, you’re not out of the woods. Success has its own failure modes, and I want to talk about one in particular that affects modular designs.

The arguments for modularity are well known: separating concerns breaks large systems down into smaller constituent parts that can be understood individually, with clearly-defined interfaces between them. Ideally, modules are designed so they can be developed and tested in isolation, and if an individual module is found wanting (say it’s unreliable, faulty or there are simply better solutions available), it can be replaced with another module provided it has the same interface.

And there really are systems like that, where the interfaces are rigid and well-specified, components come only in a handful of “shapes”, and everything cleanly fits together, like Lego bricks. But more commonly, shipping systems look like this (prepare for an extended metaphor):

"Dry stone wall, Island of Mull". Photo by Jan Smith, CC-BY 2.0

“Dry stone wall, Island of Mull”. Photo by Jan Smith, CC-BY 2.0

The modules have irregular shapes and irregular sizes. Some are big, some are quite small. Some closely align with their neighbors; others have big gaps between them. They add up to a coherent whole, but it’s clear that for most of the development time, none of these components really had to have any particular shape. Occasionally you need a small piece with a specific shape to fill a gap, but for the most part, you just work with the materials you have.

The result is still “modular”; it’s built out of smaller pieces, each with their own clearly defined boundaries. But it’s not very regular, and outright weird in some places. That chipped corner on one of the bottom pieces was just an early mistake, but it made for a good place to stick that one flat rock on and somehow that ended up being one of the primary supports for the whole thing. And while building that wall, “I need a rock, about this big” was the only constraint you really had, and you just sort of piled it on. But when repairing it after one of the pieces has been damaged, working out the right shape, finding a replacement that meets that description and getting it in place is really tricky, fiddly work. (End of extended metaphor.)

Know any systems like that? I certainly do. And the end result is what I hereby dub a “modulith” (I am sure this has been observed and named before, but I haven’t seen it elsewhere yet). Made out of small, distinct, cleanly separable pieces, but still, everything but the topmost layer is actually kind of hard to disentangle from the rest, due to a myriad of small interactions with everything surrounding it. Because once you use a module as a building block for something else, there’s a disturbing tendency for all of its remaining quirks and bugs to effectively become part of the spec, as other modules (implicitly or explicitly) start to rely on them.

This is related to, but distinct from, other concepts such as software entropy and technical debt, which primarily deal with effects within a single codebase over time. Here we are dealing with something slightly different: as a particular component is successfully used or re-used (in unmodified form!), the users of said code tend to end up relying (often inadvertently) on various unspecified or underspecified behaviors, implicitly assuming a stronger contract than the component is actually supposed to provide. At that point, your choices are to either make those assumed behaviors actually contractual (not breaking existing code at the cost of severely constraining future evolution of said component), or to fix all users that make stronger assumptions than what is guaranteed (easier said than done if the component in question is popular; often causes ripple effects that break yet more code).

Either way, I don’t have any good solutions, but I’m feeling whimsical and haven’t seen this exact problem described before, so I’m naming it. In the extremely likely case that this has already been described and named by someone else, I’d appreciate a reference!

Models for adaptive arithmetic coding

My colleague Charles Bloom recently announced Oodle LZNA (a new compressor for RAD’s Oodle compression library), which usually beats LZMA on compression while being much faster to decode. The major innovation in LZNA is switching the coding back-end from bit-wise adaptive modeling to models with larger alphabets. With traditional arithmetic coders, you take a pretty big speed hit going from binary to larger alphabets, and you end up leaning heavily on integer divides (one per symbol usually), which is the neglected bastard stepchild of every integer instruction set. But as of last year, we have the ANS family of coders, which alters the cost landscape considerably. Most interesting for use with adaptive models is rANS, which is significantly cheaper than conventional large-alphabet arithmetic (and without divisions in the decoder), but can still code directly from a cumulative probability distribution, just like other arithmetic coders. In short, rANS speeds up non-binary alphabets enough that it makes sense to revisit adaptive modeling for larger alphabets in practical coder design, something that’s been pretty much on hold since the late 90s.

In this post, I will try to explain the high-level design behind the adaptive models that underlie LZNA, and why they are interesting. But let’s start with the models they are derived from.

Adaptive binary models

The canonical adaptive binary model simply keeps count of how many 0s and 1s have been seen:

counts[0] = 1;     // or different init
counts[1] = 1;
prob_for_1 = 0.5;  // used for coding/decoding

void adapt(int bit)
{
    counts[bit]++;
    prob_for_1 = counts[1] / (counts[0] + counts[1]);
}

This is the obvious construction. The problem with this is that probabilities move quickly after initialization, but very soon start to calcify; after a few thousand symbols, any individual observation barely changes the probabilities at all. This sucks for heterogeneous data, or just data where the statistics change over time: we spend a long time coding with a model that’s a poor fit, trying to “unlearn” the previous stats.

A better approach for data with statistics that change over time is to gradually “forget” old stats. The resulting models are much quicker to respond to changed input characteristics, which is usually a decent win in practice. The canonical “leaky” adaptive binary model is, essentially, an exponential moving average:

prob_for_1 = 0.5;
f = 1.0 - 1.0 / 32.0;   // adaptation rate, usually 1/pow2

void adapt(int bit)
{
    if (bit == 0)
        prob_for_1 *= f;
    else
        prob_for_1 = prob_for_1 * f + (1.0 - f);
}

This type of model goes back to Howard and Vitter in “Practical implementations of arithmetic coding” (section 3.4). It is absolutely everywhere, albeit usually implemented in fixed point arithmetic and using shifts instead of multiplications. That means you will normally see the logic above implemented like this:

scale_factor = 4096; // .12 fixed point
prob_for_1_scaled = scale_factor / 2;

void adapt(int bit)
{
    if (bit == 0)
        prob_for_1_scaled -= prob_for_1_scaled >> 5;
    else
        prob_for_1_scaled += (scale_factor - prob_for_1_scaled) >> 5;
}

This looks like it’s a straightforward fixed-point version of the code above, but there’s a subtle but important difference: the top version, when evaluated in real-number arithmetic, can have prob_for_1 get arbitrarily close to 0 or 1. The bottom version cannot; when prob_for_1_scaled ≤ 31, it cannot shrink further, and likewise, it cannot grow past scale_factor – 31. So the version below (with a fixed-point scale factor of 4096 and using a right-shift of 5) will keep scaled probabilities in the range [31,4065], corresponding to about [0.0076, 0.9924].

Note that with a shift of 5, we always stay 31 steps away from the top and bottom end of the interval – independent of what our scale factor is. That means that, somewhat counter-intuitively, both the scale factor and the adaptation rate determine what the minimum and maximum representable probabilities are. In compression terms, the clamped minimum and maximum probabilities are equivalent to mixing a uniform model into a “real” (infinite-precision) Howard/Vitter-style adaptive model, with low weight. Accounting for this fact is important when trying to analyze (and understand) the behavior of the fixed-point models.
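If you want to convince yourself of those bounds, a quick standalone check (same shift of 5 and scale factor of 4096 as above) will do:

#include <stdio.h>

// Drive the fixed-point model with all-1s, then all-0s, and print where the
// scaled probability saturates. Expected output: 4065 and 31.
int main(void)
{
    int scale_factor = 4096;
    int p = scale_factor / 2;

    for (int i = 0; i < 1000; ++i)
        p += (scale_factor - p) >> 5;   // adapt towards "always 1"
    printf("max scaled prob: %d\n", p);

    p = scale_factor / 2;
    for (int i = 0; i < 1000; ++i)
        p -= p >> 5;                    // adapt towards "always 0"
    printf("min scaled prob: %d\n", p);
    return 0;
}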

A symmetric formulation

Implementations of binary adaptive models usually only track one probability; since there are two symbols and the probabilities must sum to 1, this is sufficient. But since we’re interested in models for larger than binary alphabets, let’s aim for a more symmetric formulation in the hopes that it will generalize nicely. To do this, we take the probabilities p0 and p1 for 0 and 1, respectively, and stack them into a column vector p:

\mathbf{p} = \begin{pmatrix} p_0 \\ p_1 \end{pmatrix}

Now, let’s look at what the Howard/Vitter update rule produces for the updated vector p′ when we see a 0 bit (this is just plugging in the update formula from the leaky model above and using p1 = 1 – p0):

\mathbf{p}' = \begin{pmatrix} 1 - f p_1 \\ f p_1 \end{pmatrix}  = \begin{pmatrix} 1 - f (1 - p_0) \\ f p_1 \end{pmatrix}  = f \mathbf{p} + \begin{pmatrix} 1 - f \\ 0 \end{pmatrix}

And for the “bit is 1” case, we get:

\mathbf{p}' = \begin{pmatrix} 1 - (f p_1 + (1 - f)) \\ f p_1 + (1 - f) \end{pmatrix}  = \begin{pmatrix} f p_0 \\ f p_1 + (1 - f) \end{pmatrix}  = f \mathbf{p} + \begin{pmatrix} 0 \\ 1 - f \end{pmatrix}

And just like that, we have a nice symmetric formulation of our update rule: when the input bit is i, the updated probability vector is

\mathbf{p}' = f \mathbf{p} + (1 - f) \mathbf{e}_i

where the ei are the canonical basis vectors. Once it’s written in this vectorial form, it’s obvious how to generalize this adaptation rule to a larger alphabet: just use a larger probability vector! I’ll go over the implications of this in a second, but first, let me do a brief interlude for those who recognize the shape of that update expression.

Aside: the DSP connection

Interpreted as a discrete-time system

\mathbf{p}_{k+1} = f \mathbf{p}_k + (1-f) \mathbf{e}_{s_k}

where the s_k denote the input symbol stream, note that we’re effectively just running a multi-channel IIR filter here (one channel per symbol in the alphabet). Each “channel” is simply linearly interpolating between its previous state and the new input value – it’s a discrete-time leaky integrator, a 1st-order low-pass filter.

Is it possible to use other low-pass filters? You bet! In particular, one popular variant on the fixed-point update rule has two accumulators with different update rates and averages them together. The result is equivalent to a 2nd-order (two-pole) low-pass filter. Its impulse response, corresponding to the weight of symbols over time, initially decays faster but has a longer tail than the 1st-order filter.
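As a sketch of that two-accumulator variant in the fixed-point setting (the specific rates here are made up for illustration):

int scale_factor = 4096;
int p_fast = 2048, p_slow = 2048;   // two leaky integrators, different rates
int prob_for_1_scaled = 2048;       // what the coder actually uses

void adapt(int bit)
{
    if (bit == 0) {
        p_fast -= p_fast >> 4;
        p_slow -= p_slow >> 7;
    } else {
        p_fast += (scale_factor - p_fast) >> 4;
        p_slow += (scale_factor - p_slow) >> 7;
    }
    prob_for_1_scaled = (p_fast + p_slow) >> 1;   // average of the two
}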

And of course you can use FIR low-pass filters too: for example, the incremental box filters described in my post Fast blurs 1 correspond to a “sliding window” model. This readily generalizes to more general piecewise-polynomial weighting kernels using the approach described in Heckbert’s “Filtering by repeated integration”. I doubt that these are actually useful for compression, but it’s always nice to know you have the option.

Can we use arbitrary low-pass filters? Alas, we can’t: we absolutely need linear filters with unit gain (so that the sum of all probabilities stays 1 exactly), and furthermore our filters need to have non-negative impulse responses, since we’re dealing with probabilities here, and probabilities need to stay non-negative. Still, we have several degrees of freedom here, and at least in compression, this space is definitely under-explored right now. I’m sure some of it will come in handy.

Practical non-binary adaptive models

As noted before, the update expression we just derived

\mathbf{p}' = f \mathbf{p} + (1 - f) \mathbf{e}_i

quite naturally generalizes to non-binary alphabets: just stack more than two probabilities into the vector p. But it also generalizes in a different direction: note that we’re just linearly interpolating between the old model p and the “model” for the newly observed symbol, as given by ei. We can change the model we’re mixing in if we want different update characteristics. For example, instead of using a single spike at symbol i (as represented by ei), we might decide to slightly boost the probabilities for adjacent values as well.

Another option is to blend between the “spike” and a uniform distribution with a given weight u:

\mathbf{p}' = f \mathbf{p} + (1 - f) ((1-u) \mathbf{e}_i + u\mathbf{1})

where 1 is the vector consisting of all-1s. This is a way to give us the clamping of probabilities at some minimum level, like we get in the fixed-point binary adaptive models, but in a much more controlled fashion; no implicit dependency on the scaling factor in this formulation! (Although of course an integer implementation will still have some rounding effects.) Note that, for a fixed value of u, the entire right-hand side of that expression only depends on i and does not care about the current value of p at all.

Okay, so we can design a bunch of models using this on paper, but is this actually practical? Note that, in the end, we will still want to get fixed-point integer probabilities out of this, and we would greatly prefer them to all sum to a power of 2, because then we can use rANS. This is where my previous post “Mixing discrete probability distributions” comes in. And this is where I’ll stop torturing you and skip straight to the punchline: a working adaptation step, with pre-computed mix-in models (depending on i only), can be written like this:

int rate = 5;       // use whatever you want!
int CDF[nsyms + 1]; // CDF[nsyms] is pow2
int mixin_CDFs[nsyms][nsyms + 1];

void adapt(int sym)
{
    // no need to touch CDF[0] and CDF[nsyms]; they always stay
    // the same.
    int *mixin = mixin_CDFs[sym];
    for (int i = 1; i < nsyms; ++i)
        CDF[i] += (mixin[i] - CDF[i]) >> rate;
}
which is a pretty sweet generalization of the binary model update rule. As per the “Mixing discrete probability distributions” post, this has several non-obvious nice properties. In particular, if the initial CDF has nonzero probability for every symbol, and all the mixin_CDFs do as well, then symbol probabilities will never drop down to zero as a result of this mixing, and we always maintain a constant total throughout. What this variant doesn’t handle very well is round-off; there are better approaches (although you do want to make sure that something like the spacing lemma from the mixing article still holds), but this variant is decent, and it’s certainly the most satisfying because it’s so breathtakingly simple.
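The mixin_CDFs themselves aren’t shown above; one possible way to build them (an assumption on my part, not necessarily what LZNA does) is to blend a spike at symbol s with a small uniform floor, written directly in CDF form:

// Build mix-in tables: each symbol gets "uniform" counts, and symbol s gets
// everything that's left. Assumes nsyms * uniform < T, where T = CDF[nsyms]
// is the power-of-2 total.
void init_mixin_cdfs(int T, int uniform)
{
    for (int s = 0; s < nsyms; ++s) {
        int spike = T - nsyms * uniform;   // leftover mass goes to the spike
        for (int i = 0; i <= nsyms; ++i)
            mixin_CDFs[s][i] = i * uniform + ((i > s) ? spike : 0);
    }
}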

Note that the computations for each i are completely independent, so this is data-parallel and can be written using SIMD instructions, which is pretty important to make larger models practical. This makes the update step fast; you do still need to be careful in implementing the other half of an arithmetic coding model, the symbol lookup step.

Building from here

This is a pretty neat idea, and a very cool new building block to play with, but it’s not a working compressor yet. So what can we do with this, and where is it interesting?

Well. For a long time, in arithmetic coding, there were basically two choices. You could use larger models with very high per-symbol overhead: one or more divisions per symbol decoded, a binary search to locate the right symbol… pretty expensive stuff. Expensive enough that you want to make sure you code very few symbols this way. Adaptive variants, using something like Fenwick trees, were even worse, and limited in the set of update rules they could support. In practice, truly adaptive large-alphabet models were exceedingly rare; if you needed some adaptation, you were more likely to use a deferred summation model (keep histogramming data and periodically rebuild your CDF from the last histogram) than true per-symbol adaptation, because it was so damn expensive.

At the other extreme, you had binary models. Binary arithmetic coders have much lower per-symbol cost than their non-binary cousins, and good binary models (like the fixed-point Howard/Vitter model we looked at) are likewise quite cheap to update. But the problem is that with a binary coder, you end up sending many more symbols. For example, in LZMA, a single 256-symbol alphabet model gets replaced with 8 binary coding steps. While the individual binary coding steps are much faster, they’re typically not so much faster that you can do 8× as many of them and still come out ahead. There are ways to reduce the number of symbols processed. For example, instead of a balanced binary tree binarization, you can build a Huffman tree and use that instead… yes, using a Huffman tree to do arithmetic coding, I know. This absolutely works, and it is faster, but it’s also a bit of a mess and fundamentally very unsatisfying.

But now we have options in the middle: with rANS giving us larger-alphabet arithmetic decoders that are merely slightly slower than binary arithmetic coding, we can do something better. The models described above are not directly useful on a 256-symbol alphabet. No amount of SIMD will make updating 256 probability estimates per input symbol fast. But they are useful on medium-sized alphabets. The LZNA in “Oodle LZNA” stands for “LZ-nibbled-ANS”, because the literals are split into nibbles: instead of having a single adaptive 256-symbol model, or a binary tree of adaptive binary models, LZNA has much flatter trees of medium-sized models. Instead of having one glacially slow decode step per symbol, or eight faster steps per symbol, we can decode an 8-bit symbol in two still-quite-fast steps. The right decomposition depends on the data, and of course on the relative speeds of the decoding steps. But hey, now we have a whole continuum to play with, rather than just two equally unsavory points at the extreme ends!

This is not a “just paste it into your code” solution; there’s still quite a bit of art in constructing useful models out of this, several fun tricks in actually writing a fast decoder for this, some subtle variations on how to do the updates. And we haven’t even begun to properly play with different mix-in models and “integrator” types yet.

Charles’ LZNA is the first compressor we’ve shipped that uses these ideas. It’s not gonna be the last.

UPDATE: As pointed out by commenter “derf_” on Hacker News, the non-binary context modeling in the current Daala draft apparently describes the exact same type of model. Details here (section 2.2, “Non-binary context modeling”, variant 3). This apparently was proposed in 2012. We (RAD) weren’t aware of this earlier (pity, that would’ve saved some time); very cool, and thanks derf for the link!

Triangular numbers mod 2^n

The triangular numbers

\displaystyle T_k := \sum_{j=1}^k j = \binom{k+1}{2} = \frac{k(k+1)}{2}

are a standard sequence used in quadratic probing of open hash tables. For example, they’re used in Google’s sparse_hash and dense_hash, generally considered to be very competitive hash table implementations.
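As a quick sketch of how this looks in a probing loop (slot_matches, hash and mask are placeholders here, not actual sparse_hash code): adding 1, 2, 3, … on successive probes makes the total displacement after k probes exactly T_k.

// Quadratic probing with triangular-number offsets in a power-of-2 table.
// After k probes, pos has moved by T_k = k(k+1)/2 slots from the home bucket.
size_t pos = hash & mask;   // mask = table_size - 1, table_size a power of 2
size_t inc = 0;
while (!slot_matches(table, pos, key)) {   // hypothetical lookup predicate
    inc += 1;                              // 1, 2, 3, ...
    pos = (pos + inc) & mask;
}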

You can find lots of places on the web mentioning that the resulting probe sequence will visit every element of a power-of-2 sized hash table exactly once; more precisely, the function f(k) = T_k \mod 2^m is a permutation on \{ 0, 1, \dots, 2^m-1 \}. But it’s pretty hard to find a proof; everybody seems to refer back to Knuth, and in The Art of Computer Programming, Volume 3, Chapter 6.4, the proof appears as an exercise (number 20 in the Second Edition).

If you want to do this exercise yourself, please stop reading now; spoilers ahead!

Anyway, turns out I arrived at the solution very differently from Knuth. His proof is much shorter and slicker, but pretty “magic” and unmotivated, so let’s take the scenic route! The first step is to look at a bunch of small values and see if we can spot any patterns.

k         |  0  1  2  3  4  5  6  7  8  9 10 11
T_k       |  0  1  3  6 10 15 21 28 36 45 55 66
T_k mod 8 |  0  1  3  6  2  7  5  4  4  5  7  2
T_k mod 4 |  0  1  3  2  2  3  1  0  0  1  3  2
T_k mod 2 |  0  1  1  0  0  1  1  0  0  1  1  0

And indeed, there are several patterns that might be useful: looking at the row for “mod 2”, we see that it seems to be just the values 0, 1, 1, 0 repeating, and that sequence itself is just 0, 1 followed by its reverse. The row for “mod 4” likewise looks like it’s just alternating between normal and reverse copies of 0, 1, 3, 2, and the “mod 8” row certainly looks like it might be following a similar pattern. Can we prove this?

First, the mirroring suggests that it might be worthwhile to look at the differences

\displaystyle T_{2n-1-k} - T_k = \binom{2n-k}{2} - \binom{k+1}{2}
\displaystyle = (2n-k)(2n - (k+1))/2 - (k+1)k/2
\displaystyle = 2n^2 - n(2k+1) + k(k+1)/2 - k(k+1)/2
\displaystyle = 2n^2 - n(2k+1)

Both terms are multiples of n, so we have T_{2n-1-k} - T_k \equiv 0 \pmod{n}, which proves the mirroring (incidentally, for any n, not just powers of 2). Furthermore, the first term is a multiple of 2n too, and the second term almost is: we have T_{2n-1-k} - T_k \equiv -n \pmod{2n}. This will come in handy soon.

To prove that T_k \bmod n is 2n-periodic, first note the standard identity for triangular numbers

\displaystyle T_{a+b} = \sum_{j=1}^{a+b} j = \sum_{j=1}^a j + \sum_{j=a+1}^{a+b} j = T_a + \sum_{j=1}^{b} (j + a)
\displaystyle = T_a + \sum_{j=1}^b j + ab = T_a + T_b + ab

in particular, we have

\displaystyle T_{k + 2n} = T_k + T_{2n} + 2kn = T_k + n(2n+1) + 2kn \equiv T_k \pmod{n}

Again, this is for arbitrary n ≥ 1. So far, we’ve proven that for arbitrary positive integer n, the sequence T_k \bmod n is 2n-periodic, with the second half being a mirrored copy of the first half. It turns out that we can fairly easily wrangle this one fact into a complete proof that T_k \bmod 2^m, for 0 ≤ k < 2^m, is a permutation, using induction:

Basis (m=0): T_k \bmod 1=0, and the function f_0(0) := 0 is a permutation on { 0 }.
Inductive step: Suppose f_m(k) := T_k \bmod 2^m is a permutation on \{ 0, \dots, 2^m-1 \}. Then the values of f_{m+1}(k) = T_k \bmod 2^{m+1} must surely be pairwise distinct for 0 ≤ k < 2^m, since they’re already distinct mod 2^m. That means we only have to deal with the second half. But above (taking n=2^m), we proved that

f_{m+1}(2^{m+1}-1-k) = T_{2^{m+1}-1-k} \equiv T_k - 2^m \pmod{2^{m+1}}

and letting k run from 0 to 2^m-1, we notice that mod 2^m, these values are congruent to the values in the first half, but mod 2^{m+1}, they differ by an additional term of -n. This implies that the values in the second half are pairwise distinct both from the first half and from each other. This proves that f_{m+1} is injective, and because it’s mapping the 2^{m+1}-element set \{ 0, \dots, 2^{m+1} - 1 \} onto itself, it must be a permutation.

Which, by mathematical induction, proves our claim.
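If you’d rather have a computer double-check the claim for small m, a brute-force test is short:

#include <stdbool.h>
#include <stdio.h>

// Check that T_k mod 2^m hits every residue exactly once for 0 <= k < 2^m.
int main(void)
{
    static bool seen[1u << 16];
    for (unsigned m = 0; m <= 16; ++m) {
        unsigned n = 1u << m;
        for (unsigned i = 0; i < n; ++i)
            seen[i] = false;

        bool ok = true;
        for (unsigned k = 0; k < n; ++k) {
            unsigned t = (k * (k + 1) / 2) & (n - 1);  // T_k mod 2^m
            if (seen[t])
                ok = false;
            seen[t] = true;
        }
        printf("m=%2u: %s\n", m, ok ? "permutation" : "NOT a permutation");
    }
    return 0;
}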

Mixing discrete probability distributions

Let’s talk a bit about probability distributions in data compression; more specifically, about a problem in dealing with multi-symbol alphabets during adaptation.

Distributions over a binary alphabet are easy to mix (in the “context mixing” sense): they can be represented by a single scalar (I’ll use the probability of the symbol being a ‘1’, probability of ‘0’ works too) which is easily combined with other scalars. In practice, these probabilities are represented as fixed-size integers in a specific range. A typical choice is p=k/4096, 0 \le k \le 4096. Often, p=0 (“it’s certainly a 0”) and p=1 (certain 1) are excluded, because they only allow us to “encode” (using zero bits!) one of the two symbols.

Multi-symbol alphabets are trickier. A binary alphabet can be described using one probability p; the probability of the other symbol must be 1-p. Larger alphabets involve multiple probabilities, and that complicates things. Say we have an n-symbol alphabet. A probability distribution for such an alphabet is, conventionally, a vector p \in \mathbb{R}^n such that p_k \ge 0 for all 1 \le k \le n and furthermore \sum_{k=1}^n p_k = 1. In practice, we will again represent them using integers. Rather than dealing with the hassle of having fractions everywhere, let’s just define our “finite-precision distributions” as integer vectors p \in \mathbb{Z}^n, again with p_k \ge 0 for all k, and \sum_{k=1}^n p_k = T where T is the total. Same as with the binary case, we would like T to be a constant power of 2. This lets us use cheaper variants of conventional arithmetic coders, as well as rANS. Unlike the binary case, maintaining this invariant takes explicit work, and it’s not cheap.

In practice, the common choice is to either drop the requirement that T be constant (most adaptive multi-symbol models for arithmetic coding take that route), or switch to a semi-adaptive model such as deferred summation that performs model updates in batches, which can either pick batches such that the constant sum is automatically maintained, or at least amortize the work spent enforcing it. A last option is using a “sliding window” style model that just uses a history of character counts over the last N input symbols. This makes it easy to maintain the constant sum, but it’s pretty limited.

A second problem is mixing – both as an explicit operation for things like context mixing, and as a building block for model updates more sophisticated than simply incrementing a counter. With binary models, this is easy – just a weighted sum of probabilities. Multi-symbol models with a constant sum are harder: for example, say we want to average the two 3-symbol models p=\begin{pmatrix}3 & 3 & 2\end{pmatrix} and q=\begin{pmatrix}1 & 6 & 1\end{pmatrix} while maintaining a total of T=8. Computing \frac{1}{2}(p+q) = \begin{pmatrix}2 & 4.5 & 1.5\end{pmatrix} illustrates the problem: the two non-integer counts have to get rounded back to integers somehow. But if we round both up, we end up at a total of 9; and if we round both down, we end up with a total of 7. To achieve our desired total of 8, we need to round one up, and one down – but it’s not exactly obvious how we should choose on any given symbol! In short, maintaining a constant total while mixing in this form proves to be tricky.

However, I recently realized that these problems all disappear when we work with the cumulative distribution function (CDF) instead of the raw symbol counts.

Working with cumulative counts

For a discrete probability distribution p with total T, define the corresponding cumulative probabilities:

P_j := \sum_{k=1}^j p_k, \quad 0 \le j \le n

P0 is an empty sum, so P0=0. On the other end, Pn = T, since that’s just the sum over all values in p, and we know the total is T. For the elements in between, pk ≥ 0 implies that Pk ≥ Pk-1, i.e. P is monotonically non-decreasing. Conversely, given any P with these three properties, we can compute the differences between adjacent elements and determine the underlying distribution p.

And it turns out that while mixing quantized symbol probabilities is problematic, mixing cumulative distribution functions pretty much just works. To wit: suppose we are given two CDFs P and Q with the same total T, and a blending factor \lambda \in [0,1], then define:

\tilde{R}_j := (1-\lambda) P_j + \lambda Q_j, \quad 0 \le j \le n

Note that because summation is linear, we have

\tilde{R}_j = (1-\lambda) \left(\sum_{k=1}^j p_k\right) + \lambda \left(\sum_{k=1}^j q_k\right)
= \sum_{k=1}^j ((1-\lambda) p_k + \lambda q_k)

so this is indeed the CDF corresponding to a blended model between p and q. However, we’re still dealing with real values here; we need integers, and the easiest way to get there is to just truncate, possibly with some rounding bias b \in [0,1):

R_j := \lfloor \tilde{R}_j + b \rfloor

It turns out that this just works, but this requires proof, and to get there it helps to prove a little lemma first.

Spacing lemma: Suppose that for some j, P_j - P_{j-1} \ge m and Q_j - Q_{j-1} \ge m where m is an arbitrary integer. Then we also have R_j - R_{j-1} \ge m.

Proof: We start by noting that

\tilde{R}_j - \tilde{R}_{j-1} = (1-\lambda) (P_j - P_{j-1}) + \lambda (Q_j - Q_{j-1}) \ge (1-\lambda) m + \lambda m = m

and since m is an integer, we have

R_j - R_{j-1} = \lfloor \tilde{R}_j + b \rfloor - \lfloor \tilde{R}_{j-1} + b \rfloor
= \lfloor \tilde{R}_{j-1} + (\tilde{R}_j - \tilde{R}_{j-1}) + b \rfloor - \lfloor \tilde{R}_{j-1} + b \rfloor
\ge \lfloor \tilde{R}_{j-1} + m + b \rfloor - \lfloor \tilde{R}_{j-1} + b \rfloor
= m + \lfloor \tilde{R}_{j-1} + b \rfloor - \lfloor \tilde{R}_{j-1} + b \rfloor = m

which was our claim.

Using the lemma, it’s now easy to show that R is a valid CDF with total T: we need to establish that R0=0, Rn=T, and show that R is monotonic. The first two are easy, since the P and Q we’re mixing both agree at these points and 0≤b<1.

R_0 = \lfloor \tilde{R}_0 + b \rfloor = \lfloor b \rfloor = 0
R_n = \lfloor \tilde{R}_n + b \rfloor = \lfloor T + b \rfloor = T

As for monotonicity, note that R_j \ge R_{j-1} is the same as R_j - R_{j-1} \ge 0 (and the same for P and Q). Therefore, we can apply the spacing lemma with m=0 for 1≤j≤n: monotonicity of P and Q implies that R will be monotonic too. And that’s it: this proves that R is valid CDF for a distribution with total T. If we want the individual symbol frequencies, we can recover them as r_k = R_k - R_{k-1}.

Note that the spacing lemma is a fair bit stronger than just establishing monotonicity. Most importantly, if pk≥m and qk≥m, the spacing lemma tells us that rk≥m – these properties carry over to the blended distribution, as one would expect! I make note of this because this kind of invariant is often violated by approximate techniques that first round the probabilities, then fudge them to make the total come out right.


This gives us a way to blend between two probability distributions while maintaining a constant total, without having to deal with dodgy ad-hoc rounding decisions. This requires working in CDF form, but that’s pretty common for arithmetic coding models anyway. As long as the mixing computation is done exactly (which is straightforward when working in integers), the total will remain constant.
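In fixed point, the whole construction is only a few lines. Here’s a hedged sketch with λ given as lambda_fp/2^FRAC_BITS and the rounding bias b as an integer bias in [0, 2^FRAC_BITS):

#define FRAC_BITS 15

// Blend two CDFs P and Q (same total, n symbols, so n+1 entries each) into R.
// Computes floor((1-lambda)*P[j] + lambda*Q[j] + b) exactly in integers.
void blend_cdfs(const int *P, const int *Q, int *R, int n,
                int lambda_fp, int bias)
{
    for (int j = 0; j <= n; ++j) {
        long long t = ((long long)P[j] << FRAC_BITS)
                    + (long long)lambda_fp * (Q[j] - P[j]);   // exact
        R[j] = (int)((t + bias) >> FRAC_BITS);                // truncate
    }
}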

I only described linear blending, but the construction generalizes in the obvious way to arbitrary convex combinations. It is thus directly applicable to mixing more than two distributions while only rounding once. Furthermore, given the CDFs of the input models, the corresponding interval for a single symbol can be found using just two mixing operations to find the two interval end points; there’s no need to compute the entire CDF for the mixed model. This is in contrast to direct mixing of the symbol probabilities, which in general needs to look at all symbols to either determine the total (if a non-constant-T approach is used) or perform the adjustments to make the total come out to T.

Furthermore, the construction shows that probability distributions with sum T are closed under “rounded convex combinations” (as used in the proof). The spacing lemma implies that the same is true for a multitude of more restricted distributions: for example, the set of probability distributions where each symbol has a nonzero probability (corresponding to distributions with monotonically increasing, instead of merely non-decreasing, CDFs) is also closed under convex combinations in this sense. This is a non-obvious result, to me anyway.

One application for this (as frequently noted) is context mixing of multi-symbol distributions. Another is as a building block in adaptive model updates that’s a good deal more versatile than the obvious “steal the count from one symbol, add it to another” update step.

I have no idea whether this is new or not (probably not); I certainly hadn’t seen it before, and neither had anyone else at RAD. Nor do I know whether this will be useful to anyone else, but it seemed worth writing up!

A small note on SIMD matrix-vector multiplication

Suppose we want to calculate a product between a 4×4 matrix M and a 4-element vector v:

Mv = \begin{pmatrix}a_x & b_x & c_x & d_x \\ a_y & b_y & c_y & d_y \\ a_z & b_z & c_z & d_z \\ a_w & b_w & c_w & d_w\end{pmatrix} \begin{pmatrix}v_x \\ v_y \\ v_z \\ v_w\end{pmatrix}

The standard approach to computing Mv using SIMD instructions boils down to taking a linear combination of the four column vectors a, b, c and d, using standard SIMD componentwise addition, multiplication and broadcast shuffles.

  // Given M as its four constituent column vectors a, b, c, d,
  // compute r=M*v.
  r = v.xxxx*a + v.yyyy*b + v.zzzz*c + v.wwww*d;

This computes the vector-matrix product using four shuffles, four (SIMD) multiplies, and three additions. This is all bog-standard. And if the ISA we’re working on has free broadcast swizzles (ARM NEON for example), we’re done. But if not, can we do better? Certainly if we know things about M or v: if M has a special structure, or some components of v are known to be always 0, 1 or -1, chances are good we can save a bit of work (whether it makes a difference is another matter). But what if M and v are completely general, and all we know is that we want to transform a lot of vectors with a single M? If v is either given as or returned in SoA form (structure-of-arrays), we can reduce the number of per-vector shuffles greatly if we’re willing to preprocess M a bit and have enough registers available. But let’s say we’re not doing that either: our input v is in packed form, and we want the results packed too. Is there anything we can do?

There’s no way to reduce the number of multiplies or additions in general, but we can get rid of exactly one shuffle per vector, if we’re willing to rearrange M a bit. The trick is to realize that we’re using each of v.x, v.y, v.z, and v.w exactly four times, and that the computations we’re doing (a bunch of component-wise multiplies and additions) are commutative and associative, so we can reorder them, in exact arithmetic anyway. (This type of computation is usually done in floating point, where we don’t actually have associativity, but I’m going to gloss over this.)

Let’s look at our first set of products, v.xxxx * a. We’re just walking down a column of M, multiplying each element we see by v.x. What if we walk in a different direction? Going along horizontals turns out to be boring (it’s essentially the same, just transposed), but diagonals of M are interesting, the main diagonal in particular.

So here’s the punch line: we form four new vectors by walking along diagonals (with wrap-around) as follows:

e = \begin{pmatrix} a_x \\ b_y \\ c_z \\ d_w \end{pmatrix} \quad  f = \begin{pmatrix} b_x \\ c_y \\ d_z \\ a_w \end{pmatrix} \quad  g = \begin{pmatrix} c_x \\ d_y \\ a_z \\ b_w \end{pmatrix} \quad  h = \begin{pmatrix} d_x \\ a_y \\ b_z \\ c_w \end{pmatrix}

Phrasing the matrix multiply in terms of these four vectors, we get:

  r = v*e + v.yzwx*f + v.zwxy*g + v.wxyz*h;

Same number of multiplies and adds, but one shuffle per vector less (because the swizzle pattern for v in the first term is xyzw, which is the natural ordering of v). Also note that forming e, f, g, and h given M in column vector form is also relatively cheap: it’s a matrix transposition with a few post-swizzles to implement the cyclic rotations. If you have M as row vectors (for example because it’s stored in row-major order), it’s even cheaper.
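For concreteness, here’s what this might look like with SSE intrinsics (a sketch; e, f, g and h are assumed to be precomputed from M as described above):

#include <xmmintrin.h>

// r = M*v using the rotated-diagonal vectors e, f, g, h.
// Three shuffles, four multiplies, three adds per vector.
static __m128 matvec_diag(__m128 e, __m128 f, __m128 g, __m128 h, __m128 v)
{
    __m128 r = _mm_mul_ps(v, e);                                   // v.xyzw * e
    __m128 v_yzwx = _mm_shuffle_ps(v, v, _MM_SHUFFLE(0, 3, 2, 1)); // y z w x
    __m128 v_zwxy = _mm_shuffle_ps(v, v, _MM_SHUFFLE(1, 0, 3, 2)); // z w x y
    __m128 v_wxyz = _mm_shuffle_ps(v, v, _MM_SHUFFLE(2, 1, 0, 3)); // w x y z
    r = _mm_add_ps(r, _mm_mul_ps(v_yzwx, f));
    r = _mm_add_ps(r, _mm_mul_ps(v_zwxy, g));
    r = _mm_add_ps(r, _mm_mul_ps(v_wxyz, h));
    return r;
}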

So: multiplying a packed 4-vector with a constant 4×4-matrix takes one shuffle less than the standard approach, if we’re willing to do some preprocessing on M (or store our matrices in a weird layout to begin with). Does this matter? It depends. On current desktop x86 cores, it’s pretty marginal, because SIMD shuffles can execute in parallel (during the same cycle) with additions and multiplications. On older cores with fewer execution resources, on in-order SIMD CPUs, and on low-power parts, it can definitely help though.

For what it’s worth: if your 4D vectors come from graphics or physics workloads and are actually homogeneous 3-vectors with a constant w=1 and no projective transforms anywhere in sight, you can exploit that structure explicitly for higher gains than this. But I ran into this with a DSP workload (with v just being a vector of 4 arbitrary samples), and in that case it’s definitely useful to know, especially since anything convolution-related tends to have highly diagonal (Toeplitz, to be precise) structure to begin with.

