
Papers I like (part 4)

Continued from part 3.

16. O’Donoghue et al. – “Conic Optimization via Operator Splitting and Homogeneous Self-Dual Embedding” (2016; numerical math/mathematical optimization)

This is a very neat first-order method to solve cone programs that brings together several key advances in the theory over the past 25 years to produce a conceptually simple (and quite short!) yet powerful solver.

Unfortunately, this puts me in a bit of a pickle here, because I expect most of you don’t know what cone programs are, what “operator splitting” is, or for that matter, what is meant by “homogeneous self-dual embedding” or “first-order method” here.

So I’m going to do the same thing I did in part 2 when talking about matrix multiplies and solving linear systems of equations, and back up a whole lot.

We saw linear systems of equations; in matrix form, we were trying to solve
Ax = b
where A \in \mathbb{R}^{n \times n}, x \in \mathbb{R}^n, b \in \mathbb{R}^n. If A is regular (nonzero determinant), this problem has exactly one solution, and barring potential numerical issues if A is ill-conditioned (just ignore that part if you don’t know what it means), we can solve this with standard methods like LU decomposition with partial pivoting – like the non-pivoted LU we saw in part 2, but now we’re allowed to swap rows to avoid certain problems and increase numerical accuracy (still not gonna go into it here). I hope that after reading part 2, you have at least some idea of how that process goes.
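
To make this concrete, here’s a minimal sketch in Python/NumPy (the 3×3 system is made up): np.linalg.solve hands the problem to LAPACK’s LU-with-partial-pivoting routine, which is exactly the process described above.

  import numpy as np

  # Made-up 3x3 system; np.linalg.solve calls LAPACK's gesv, i.e. LU with
  # partial pivoting followed by forward/back substitution.
  A = np.array([[4.0, 1.0, 2.0],
                [1.0, 3.0, 0.0],
                [2.0, 0.0, 5.0]])
  b = np.array([1.0, 2.0, 3.0])
  x = np.linalg.solve(A, b)
  print(np.allclose(A @ x, b))   # True (up to rounding)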

On the next step of the ladder up, we have linear least-squares problems. These naturally occur when we have linear systems with more equations than unknowns (more common), or linear systems with more variables than equations (less common, and I’ll ignore that case in the following), and a few others. These types of problems appear in approximation, or when trying to recover parameters of a linear model from noisy measurements where errors are uncorrelated, have expectation 0 and the same variance (as per the Gauss-Markov theorem). They also tend to get applied to problems where they’re really not well-suited at all, because linear least-squares is by far the easiest and most widely-known mathematical optimization method. But I’m getting ahead of myself.

Under the original conditions above, we could actually achieve Ax=b (in exact arithmetic anyway). If we have more equations than variables, we can’t expect to hit an exact solution always; we now have A \in \mathbb{R}^{m \times n}, x \in \mathbb{R}^n, b \in \mathbb{R}^m, m \ge n (at least as many equations as variables), and the best we can hope for is Ax \approx b. We rewrite this into Ax-b \approx 0, because in normed vector spaces, we have good machinery to express that some vector is “close” to zero: we want its norm to be small. That is, we get a problem of the type

\displaystyle \min_{x \in \mathbb{R}^n} \| Ax - b \|

for some norm \| \cdot \| we get to choose. Not surprisingly given the name “least squares”, we choose the 2-norm. I’m going to go over this fairly quickly because least-squares isn’t the point of this post: for a vector x, we have \|x\|_2 = \sqrt{x^T x} where x^T x is just a dot product of x with itself written as a matrix product, if you haven’t seen that notation before. We can get rid of the square root by just squaring everything (standard trick) and get:

\displaystyle \min_{x \in \mathbb{R}^n} \| Ax - b \|_2^2 = \min_x (Ax-b)^T (Ax-b) \\ = \min_x x^T A^T Ax - 2 x^T A^T b + b^T b

That’s a quadratic function. Multi-dimensional, but still quadratic. Basic calculus tells us that we can find the extrema of a quadratic function by finding the spot where its derivative is zero. The derivative of the above expression with respect to x is

\displaystyle 2 (A^T Ax - A^T b)

and setting it to zero brings us to the linear system called the “normal equations”

A^T A x = A^T b

If you followed along with that, great! If not, don’t worry; the only thing you should remember here is that 2-norm means the function we’re trying to minimize is quadratic, which therefore has a linear derivative, thus we can find the minimum by solving a linear system, which we know how to do. Caveat: the normal equations are not generally the preferred way to solve such problems, because squaring A (the A^T A part) greatly amplifies any numerical problems that the matrix might have. Unless you know in advance that A^T A is well-behaved (which you do in several types of problems), you should generally solve such problems with other non-squaring approaches based on say the QR decomposition or SVD, but again, out of scope for this post.
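
To see the difference in practice, here’s a hedged NumPy sketch with a made-up overdetermined system: both the normal-equations route and np.linalg.lstsq (which is orthogonalization/SVD-based) produce the same answer for this well-behaved A, but the latter is the numerically safer default.

  import numpy as np

  rng = np.random.default_rng(0)
  A = rng.standard_normal((100, 3))   # 100 equations, 3 unknowns (made up)
  b = rng.standard_normal(100)

  # Normal equations: solve (A^T A) x = A^T b. Simple, but it squares the
  # condition number of A.
  x_normal = np.linalg.solve(A.T @ A, A.T @ b)

  # Preferred: an orthogonalization/SVD-based least-squares solve.
  x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

  print(np.allclose(x_normal, x_lstsq))   # True for this well-behaved A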

So far, we’ve covered elimination methods for solving linear systems (known to the Chinese back in the 2nd century CE) and linear least-squares; depending on whether you trust Gauss’s assertion that he came up with it before Legendre published it, the latter is either late 18th or early 19th century. The third of the trifecta of fundamental linear numerical algorithms is Linear Programming; this puts us either in the 19th century (if you’re feeling charitable and consider Fourier-Motzkin elimination a viable way to solve them) or in the mid-20th century for the first practical algorithm, the simplex method.

But what is a linear program? A canonical-form linear program is a mathematical optimization problem of the form

maximize: c^T x
subject to: Ax \le b (componentwise), x \ge 0 (also componentwise)

But as the “canonical-form” name suggests, LPs come in many forms. In general, a linear program:

  • must have a linear objective function. It doesn’t need to be an “interesting” objective function; for example, the sub-class of linear feasibility problems “maximizes” the objective function 0, and it’s really just about finding a point that satisfies the constraints.
  • can be either minimization or maximization. They’re trivial to turn into each other: to minimize c^T x, just maximize -c^T x and vice versa.
  • can have a set of linear equality constraints. These are not strictly necessary and hence absent in the canonical form because you can rewrite a linear equality constraint Ex = d into twice the number of inequality constraints d \le Ex \le d, but basically all solvers naturally support them directly and will prefer you to just pass them in unmodified.
  • must have some linear inequality constraints. Either direction works: to get say Ax \ge b, just multiply everything by -1 to yield -Ax \le -b. That’s why the canonical form just has one direction.
  • may or may not require x (the variable vector) to be non-negative. The canonical form demands it, but as with the lack of equality constraints, this one’s really not essential. If you’re forced to work with a solver that insists on non-negative x, you can split each variable that can be negative into two variables x = x^+ - x^- with the constraint x^+ \ge 0, x^- \ge 0. If you have a solver that defaults to unconstrained x and you want non-negative, you just add a bunch of inequality constraints -Ix \le 0 (i.e. you just append a negated identity matrix at the bottom of your constraint matrix A). Either way, still a linear program.

As you can see, while say a linear system is quite easy to recognize (is it a bunch of equations? Are they all linear in the variables of interest? If so, you’ve got a linear system!), LPs can come in quite different-looking forms that are nevertheless equivalent.

At this point, it would be customary to show a made-up linear programming problem. A factory can produce two kinds of widgets that have different profit margins and consume some number of machines for some amount of time, how many widgets of each kind should we be producing to maximize profit, and so forth. I’m not going to do that (check out a textbook on linear programming if you want to get the spiel). Instead, I’m going to show you an example with a very different flavor to hammer the point home that LPs are not at all easy to recognize (or at the very least, that it takes a lot of practice).

Earlier, we went over linear least-squares and saw that the problem reduces to a linear system of equations, because the objective function is quadratic and thus has a linear derivative. Therefore, that trick only works for the 2-norm. What if we’d really rather minimize the residuals (that’s the Ax-b) as measured in another norm? Say we want to minimize the largest deviation from 0 in any of the rows; this corresponds to the infinity-norm (or supremum norm), which is defined as \|x\|_\infty = \max(|x_1|,\dots,|x_n|). So our new problem is to minimize the maximum error in any of the components:

\displaystyle \min_{x \in \mathbb{R}^n} \| Ax - b \|_\infty

Turns out that this is a linear program, even if it really doesn’t look like one. Don’t see it? I don’t blame you. Here’s how it goes:

minimize: t \in \mathbb{R}
subject to: -t \le Ax - b \le t (componentwise).

To make this clear, this is a linear program with our original vector x as its variables, plus an additional auxiliary variable t that we made up. We require that t bound the size of every component of Ax-b; those inequalities are really just saying that if we named the residuals r=Ax-b, we’d have |r_1| \le t, |r_2| \le t and so on. And then as our objective function, we just want the smallest t possible (i.e. the smallest upper bound on the absolute value of the residuals), which is how we encode that we’re minimizing our error bound. We don’t constrain x at all; the solver gets to pick x however it wants to make that work. And yeah, I know this feels like a really cheap trick. Get used to it. Mathematical optimization is all about the cheap shots. (Also note that if we had other linear inequality constraints, like say wanting x to be non-negative, we could just throw these in as well. Still a linear program.)
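
If you want to see the cheap trick in executable form, here’s a sketch using SciPy’s linprog with made-up data (the stacked constraint matrix below is my own construction, not anything from the paper): the variable vector is x with t appended, and the two-sided bound on the residuals becomes two blocks of inequality rows.

  import numpy as np
  from scipy.optimize import linprog

  rng = np.random.default_rng(1)
  A = rng.standard_normal((50, 4))    # made-up data
  b = rng.standard_normal(50)
  m, n = A.shape

  # Variables: z = (x_1 .. x_n, t). Minimize t subject to -t <= Ax - b <= t,
  # rewritten as  A x - t <= b   and   -A x - t <= -b.
  c = np.concatenate([np.zeros(n), [1.0]])
  ones = np.ones((m, 1))
  A_ub = np.block([[A, -ones], [-A, -ones]])
  b_ub = np.concatenate([b, -b])
  bounds = [(None, None)] * n + [(0, None)]   # x free, t >= 0

  res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
  print(res.fun)   # the minimal infinity-norm of the residual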

And while we’re on the subject of cheap shots: we have probably the two most important norms in applications, the 2-norm and the infinity-norm; can we get the next important one, the 1-norm, as well? Sure we can. Buckle up! The 1-norm of a vector is defined as \|x\|_1 = |x_1| + \cdots + |x_n|, i.e. the sum of the absolute values of the components. What happens if we try to minimize \|Ax - b\|_1? Yup, we again get a linear program, and this time we need a whole bunch of auxiliary variables and it’s even cheesier than the last one:

minimize: r_1^+ + r_1^- + \cdots + r_m^+ + r_m^-
subject to: r^+ - r^- = Ax - b, r^+ \ge 0, r^- \ge 0

This combines the auxiliary variables trick from last time with the “split a vector into positive and negative parts” trick we saw when I talked about how to get rid of unwanted x \ge 0 constraints. The absolute value of a residual split that way is just the r^+ + r^- we see in the objective function: at the optimum, at least one of each pair r_i^+, r_i^- will be zero (making both positive only increases the objective), so their sum equals the absolute value of the residual. And to top it all off, after introducing 2m auxiliary variables with the two m-element vectors r^+ and r^-, we pin them all down again by adding the linear equality constraint r^+ - r^- = Ax-b that forces their difference to equal our residuals.
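
And the 1-norm version, as a matching sketch (again with made-up data): the variables are x, r^+ and r^-, the residual definition becomes a set of linear equality constraints, and the objective just sums the auxiliary variables.

  import numpy as np
  from scipy.optimize import linprog

  rng = np.random.default_rng(1)
  A = rng.standard_normal((50, 4))    # made-up data, as before
  b = rng.standard_normal(50)
  m, n = A.shape

  # Variables: z = (x, r_plus, r_minus); minimize sum(r_plus) + sum(r_minus)
  # subject to  A x - r_plus + r_minus = b,  r_plus >= 0,  r_minus >= 0.
  c = np.concatenate([np.zeros(n), np.ones(2 * m)])
  A_eq = np.hstack([A, -np.eye(m), np.eye(m)])
  bounds = [(None, None)] * n + [(0, None)] * (2 * m)

  res = linprog(c, A_eq=A_eq, b_eq=b, bounds=bounds)
  print(res.fun)   # the minimal 1-norm of the residual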

There are two points I’m trying to make here: first, that a surprising number of problems turn out to be LPs in disguise. And second, that they really don’t need to look like it.

So much for LPs. I still haven’t talked about cone programs or the actual paper, though! But now that I’ve shown how these things crop up, I’m going to reduce the amount of detail a lot.

First: cone programs. What we’ve seen so far are LPs. Cone programs are a generalization, and there’s a whole zoo of them. Cone linear programs are equivalent to regular LPs – effectively just a different canonical form. Then there’s Second-Order Cone Programs (SOCPs) which are a superset of LPs and can also express a whole bunch of other optimization problems including (convex) Quadratic Programming (quadratic objective function, linear inequality constraints) and positive semidefinite quadratically constrained quadratic programming (quadratic objective function and quadratic inequality constraints, all quadratic forms involved positive-semidefinite). This is somewhat surprising at first because SOCPs have a linear objective function and quadratic constraints, but the solution to “I want to express a quadratic objective function using a linear objective function and quadratic constraints” turns out to be yet more cheap shots. (I’ll let you figure out how this one goes yourself, you’ve seen enough by now.)

The next step up from SOCPs is semidefinite programs (SDPs), which can express everything I’ve mentioned so far, and more. And then there’s a couple more cone types that I’m not going to cover here.

What all these cone programs have in common is very similar mathematical foundations (though the neat theory for this is, to my knowledge, relatively recent; we’re in the mid-1990s now!) and very closely related solvers, traditionally interior point methods (IPMs).

IPMs work by encoding the constraints using smooth (differentiable) barrier functions. The idea is that such functions increase rapidly near the bounds of the feasible region (i.e. when you’re getting close to constraints being violated), and are relatively small otherwise. Then use Newton’s method to minimize the resulting smooth objective function. And Newton iteration is a “second-order” method, which means that once the iteration gets close to a solution, it will roughly double the number of correct digits in every step. In practice, this means that you perform a bunch of iterations to get close enough to a solution, and once you do, you’ll be converged to within machine precision in another four to six iterations.
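
To make the barrier idea concrete, here’s a deliberately tiny toy example of my own (not from the paper): minimize cx subject to x \ge 0 in one dimension by swapping the constraint for a -\mu \log(x) barrier and running Newton’s method on the resulting smooth function; the barrier minimizer \mu/c approaches the true constrained optimum 0 as \mu shrinks.

  # Toy problem (my own, not from the paper): minimize c*x subject to x >= 0,
  # approximated by the smooth barrier objective f(x) = c*x - mu*log(x).
  # Its exact minimizer is x = mu/c; Newton's method homes in on it quadratically.
  c, mu = 2.0, 1.0
  x = 0.4                       # start somewhere inside the feasible region
  for i in range(6):
      grad = c - mu / x         # f'(x)
      hess = mu / (x * x)       # f''(x)
      x -= grad / hess          # Newton step
      print(i, x, abs(x - mu / c))
  # Driving mu towards 0 pushes the barrier minimizer mu/c towards the true
  # constrained optimum x = 0; interior point methods repeat this Newton
  # process along that path.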

And that’s where we finally get to the paper itself: O’Donoghue et al.’s method is not a second-order method; instead, it’s a first-order method that uses a less sophisticated underlying iteration (ADMM) that nevertheless has some practical advantages. It’s not as accurate as second-order methods, but it needs a lot less memory and individual iterations are much faster than in second-order methods, so it scales to large problems (millions of variables and constraints) that are currently infeasible to solve via IPMs.

The way it works uses a lot of other techniques that were refined over the past 25 years or so; for example, the titular homogeneous self-dual embedding goes back to the mid-90s and is a general way to encode cone programs into a form where initialization is easy (always a tricky problem with iterative methods), there’s no need to do a separate iteration to find a feasible point first, and boundary cases such as infeasible or unbounded problems are easy to detect.

The end result is a fundamentally really simple iteration (equation 17 in the paper) that alternates between solving a linear system – always the same matrix, for what it’s worth – and a projection step. For linear programs, all the latter does is clamp negative values in the iteration variables to zero.

If you’ve made it this far, have at least some understanding of convex optimization and the underlying duality theory, and are interested in the details, definitely check out the paper! If you don’t but want to learn, I recommend checking out Boyd and Vandenberghe’s book Convex Optimization; there’s also a bunch of (very good!) videos of the course online on YouTube. Another good source if you’re at least somewhat comfortable with numerical linear algebra but don’t want to spend the time on the Convex Optimization course is Paul Khuong’s small tutorial “So You Want to Write an LP Solver”.

And this concludes the parts of this series where I info-dump you to death about numerical math. I won’t cover non-linear optimization or anything related here; maybe some other time. :)

17. Braun et al. – “Simple and Efficient Construction of Static Single Assignment Form” (2013; compilers)

Complete change of pace here; no more math for now, I promise. There will be Greek symbols though, because that’s kind of required here. Sorry.

All right, Static Single Assignment form. It’s used in all kinds of compilers these days, but why? Let’s use a small, nonsensical fragment of code in some arbitrary imperative language as demonstration:

  x = z + 5;
  y = 2*x - 1;
  x = 3;
  y = y - x;

We have 3 integer variables x, y, and z, and are doing some random computation. The important thing to note here is that we’re assigning to x and y twice, and we can’t just move statements around without changing the meaning of the program. For example, the second assignment to x has to go after the first assignment to y.

But computationally, that’s just silly. The second assignment to x doesn’t depend on anything; the only problem is that the names x and y refer to different values depending on where we are in the program. For a compiler, it turns out to be much more useful to separate the (somewhat arbitrary, programmer-assigned) names from the values they correspond to. Any dependencies between the actual operations are an intrinsic property of the computation the program is trying to specify. The names, not so much. So instead, we make a pass over the program where we give a variable a new name whenever we’re assigning to it:

  x1 = z + 5;
  y1 = 2*x1 - 1;
  x2 = 3;
  y2 = y1 - x2;

And now we can reorder the code if we want to; the only requirement is that a variable must be assigned to before it’s used. So for example, this ordering

  x2 = 3;
  x1 = z + 5;
  y1 = 2*x1 - 1;
  y2 = y1 - x2;

has the same meaning – even though we’re now “computing” the second value of x before we compute the first.
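
Here’s a hedged sketch of that renaming pass for straight-line code, using my own toy statement representation rather than any real compiler IR:

  # Each statement is (destination, tokens of the right-hand side).
  program = [
      ("x", ["z", "+", "5"]),
      ("y", ["2", "*", "x", "-", "1"]),
      ("x", ["3"]),
      ("y", ["y", "-", "x"]),
  ]

  def rename(program):
      counter = {}   # variable -> number of versions handed out so far
      current = {}   # variable -> its current SSA name
      out = []
      for dest, rhs in program:
          # Rewrite uses first (with the *old* names), then define the new name.
          rhs = [current.get(tok, tok) for tok in rhs]
          counter[dest] = counter.get(dest, 0) + 1
          current[dest] = f"{dest}{counter[dest]}"
          out.append(f"{current[dest]} = {' '.join(rhs)};")
      return out

  print("\n".join(rename(program)))
  # x1 = z + 5;
  # y1 = 2 * x1 - 1;
  # x2 = 3;
  # y2 = y1 - x2;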

This process can be done systematically and straightforwardly for any straight-line block of code without control flow. It’s called “Local Value Numbering” (because we’re numbering values of a variable) and is literally older than digital computers themselves – for example, the idea appears in Ada Lovelace’s writings, penned 1842:

Whenever a Variable has only zeros upon it, it is called {}^0V; the moment a value appears on it (whether that value be placed there arbitrarily, or appears in the natural course of a calculation), it becomes {}^1V. If this value gives place to another value, the Variable becomes {}^2V, and so forth. [..] There are several advantages in having a set of indices of this nature; but these advantages are perhaps hardly of a kind to be immediately perceived, unless by a mind somewhat accustomed to trace the successive steps by means of which the [analytical] engine accomplishes its purposes. We have only space to mention in a general way, that the whole notation of the tables is made more consistent by these indices, for they are able to mark a difference in certain cases, where there would otherwise be an apparent identity confusing in its tendency. In such a case as V_n = V_p + V_n there is more clearness and more consistency with the usual laws of algebraical notation, in being able to write {}^{m+1}V_n = {}^qV_p + {}^mV_n.

However, we don’t have SSA yet. That took a bit longer. The problem is what to do with control flow, such as branches and loops. How do we apply value numbering to a program like this?

  x = z;
  y = 0;
  while (x > 0) {
    y = y + x;
    x = x - 1;
  }
  x = 2*y; 

We can certainly apply local value numbering to each basic block of straight-line code, but that doesn’t get us very far in this case. So what do we do? Well, before I do anything else, let me first rewrite that loop slightly to a somewhat less idiomatic but equivalent form that will avoid some trouble with notation in a moment:

  x = z;
  y = 0;
  loop {
    if (x <= 0)
        break;
    y = y + x;
    x = x - 1;
  }
  x = 2*y; 

The problem we have here is that operations such as y = y + x reference different versions of y depending on where we are in the control flow! In the first iteration of the loop, y refers to the initial value set by y = 0; but subsequent iterations reference the y computed by the previous iteration of the loop. We can’t just decide on a single version of any particular variable up-front; it depends on how we got into the current block. However, that’s really all it depends on.

So here’s the key trick that gives us SSA: we introduce constructs called φ functions (phi functions); a φ function returns one of its several arguments, depending on where we entered the current block from. Which of its arguments corresponds to which way of entering the current block needs to be kept track of; since I’m going with pseudo-code here, I’ll just write it in comments. With φ functions, we can transform the whole example program into SSA form:

  x1 = z;
  y1 = 0;
  loop {
    // when entering from outside loop, return first arg
    // otherwise, return second arg.
    x2 = φ(x1, x3);
    y2 = φ(y1, y3);
    if (x2 <= 0)
        break;
    y3 = y2 + x2;
    x3 = x2 - 1;
  }
  x4 = 2*y2;

Note that things get a bit tricky now: the φ functions for x2 and y2 have to reference the values x3 and y3 that are defined later in program order, and the calculation for x4 needs to pick up y2 (and not say y3), because that’s always the most recent definition of y when control flow reaches the computation of x4.

Why do we care about forming SSA? It turns out that this representation is very convenient for all kinds of program analysis, and well-suited to various transforms compilers wish to perform.

In this simple example, it’s not hard to construct the SSA form by hand. But for it to be useful in compilers, we need an algorithm, and it better be efficient – both in the sense that it doesn’t run too long, and in the sense that it shouldn’t increase the size of the program too much. In particular, we’d like to only insert phi functions that are truly necessary, instead of say inserting phis for all visible variables in every single block.

And that’s where this paper comes in. The “standard” SSA construction algorithm is due to Cytron et al., from a 1991 paper; it’s efficient and works fine, and is used in many production compilers, including LLVM. But it needs some fairly complicated machinery to do its job, and is not really suited to incremental construction.

That’s why I like this paper. It uses only elementary techniques, is simple to describe and understand, gives good results, and is suitable for quickly patching up an existing SSA-form program after modifications that would otherwise violate SSA form.
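
To give a flavor of how little machinery is involved, here’s a drastically simplified Python sketch of the demand-driven variable lookup at the heart of this style of construction – it assumes every block’s predecessors are already known (what the paper calls the sealed case) and it skips the removal of trivial φs and the paper’s other refinements:

  class Phi:
      def __init__(self, block):
          self.block, self.operands = block, []
      def __repr__(self):
          return f"phi@{self.block}{self.operands}"

  preds = {}          # block -> list of predecessor blocks
  current_def = {}    # (variable, block) -> value that defines it there

  def write_variable(var, block, value):
      current_def[(var, block)] = value

  def read_variable(var, block):
      # Local value numbering: the block itself defines the variable.
      if (var, block) in current_def:
          return current_def[(var, block)]
      # Otherwise ask the predecessors ("global value numbering").
      if len(preds[block]) == 1:
          value = read_variable(var, preds[block][0])
      else:
          # Several ways in: place a phi, record it *before* filling in its
          # operands so that loops terminate, then recurse into the preds.
          value = Phi(block)
          write_variable(var, block, value)
          value.operands = [read_variable(var, p) for p in preds[block]]
      write_variable(var, block, value)
      return value

  # Mimic the loop example: x is defined in the entry block and again in the body.
  preds.update({"entry": [], "header": ["entry", "body"], "body": ["header"]})
  write_variable("x", "entry", "x1")
  write_variable("x", "body", "x3")
  print(read_variable("x", "header"))   # phi@header['x1', 'x3']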

18. Lamport, Palais – “On the Glitch Phenomenon” (1976; hardware/physics)

This link goes to Lamport’s publication page, where he writes a few notes on the paper (and his difficulties in getting it published).

Both the notes and the paper are relatively short and unlike several of the other papers I’ve covered, I don’t have any background information to add; the problem statement here is relatively elementary, it’s just the answer that is surprising.

Lamport later wrote another paper, Buridan’s principle, about other instances of the same problem outside of CS. Like the Glitch paper, he ran into problems getting it published, so again I’m linking to the publications page, which has his notes.

I will quote this part of the notes on “Buridan’s principle”, if you’re not already curious:

My problems in trying to publish this paper and [“On The Glitch Phenomenon”] are part of a long tradition. According to one story I’ve heard (but haven’t verified), someone at G. E. discovered the phenomenon in computer circuits in the early 60s, but was unable to convince his managers that there was a problem. He published a short note about it, for which he was fired. Charles Molnar, one of the pioneers in the study of the problem, reported the following in a lecture given on February 11, 1992, at HP Corporate Engineering in Palo Alto, California:

One reviewer made a marvelous comment in rejecting one of the early papers, saying that if this problem really existed it would be so important that everybody knowledgeable in the field would have to know about it, and “I’m an expert and I don’t know about it, so therefore it must not exist.”

Papers I like (part 3)

Continued from part 2.

12. Chazelle – “Filtering search: a new approach to query-answering” (1986; computational geometry/information retrieval)

The part I specifically recommend is sections 1 through 3. For most of these problems, there are simpler solutions using different data structures by now (especially via the persistent search trees we saw earlier in part 1), so don’t hurt yourself with the parts dealing with the hive graph; but it’s definitely worth your time to grok these first few sections.

Filtering search deals with “retrieval”-type problems: you have a set of objects, and you want to run a query that finds the subset of those objects that matches some given criterion. In the interval overlap problem the paper uses as its first example, the set of objects is a bunch of 1-dimensional intervals:

A few 1D intervals

I’m drawing each interval in its own line so overlaps don’t get messy, but that’s just a visualization aid; this is a 1D problem. Our queries are going to be “report all intervals in the database that intersect a given query interval”.

How would you set up a data structure that answers these queries efficiently?

A natural enough approach is to chop our 1D x-axis into slices, creating a new “window” (that’s the terminology the paper uses) whenever an interval begins or ends:

1D intervals, sliced

Because the begin and end points of intervals are the only locations where the answer to “which intervals overlap a given x-coordinate” changes, the answer is the same within each window. Therefore, if we compute this partition in advance and store it in say a search tree (or an external search tree like a database for bigger sets), and each window stores a list of which intervals overlap it, we can answer one of our original questions directly: given a 1D point, we can find all the intervals overlapping it by looking up which window contains it in our search tree, and then reporting the stored list.

Furthermore, given a solution for point queries, we can take a first stab at solving it for interval queries: find all the windows overlapping our query interval, and then return the union of all the lists for the individual windows. Computing unions of those sets can be done in linear time if we store the data appropriately: for example, if we give each interval stored in our database some unique integer ID and make sure all our lists of “intervals overlapping this window” are sorted by the interval IDs, then the union between an m-element list and an n-element list can be computed in O(n+m) time (using a variant of the standard list merging algorithm used in mergesort).
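
That union step is short enough to write out; a hedged sketch (the IDs below are made up):

  def union_sorted(a, b):
      # Merge two sorted ID lists, dropping duplicates: O(len(a) + len(b)).
      out, i, j = [], 0, 0
      while i < len(a) and j < len(b):
          if a[i] == b[j]:
              out.append(a[i]); i += 1; j += 1
          elif a[i] < b[j]:
              out.append(a[i]); i += 1
          else:
              out.append(b[j]); j += 1
      out.extend(a[i:])
      out.extend(b[j:])
      return out

  print(union_sorted([1, 3, 4, 7], [2, 3, 7, 9]))   # [1, 2, 3, 4, 7, 9]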

It’s fairly easy to convince yourself that this works, in the sense that it gives the correct answer; but is it efficient? How big can a database for n intervals get? This matters not just for storage efficiency, but also for queries: if the user queries a gigantic interval (say encompassing all intervals in the image above), we end up computing the union of all interval lists stored in the entire database. We know the final answer can only contain at most n intervals (because that’s how many are in the database), but do we have any nice bounds on how long it will take to determine that fact?

Alas, it turns out this approach doesn’t really work for our problem. Without further ado, here’s an example set that kills this initial approach for good:

n intervals in a problematic configuration

Here, we have 5 intervals with the same center: the first one has a width of 1 unit, the second is 3 units wide, the third 5 units wide, and so on. The middle window overlaps all 5 intervals; the two windows to either side overlap 4 intervals each, all but the first; the next windows on either side overlap 3 intervals each, and so forth.

Therefore, the bottom-most interval gets stored in 1 list; the one up from it gets stored in 3 lists; the next one up gets stored in 5 lists, and so on. For our 5 input intervals, the total size of the lists sums to 1+3+5+7+9 = 25, and in general, 1+3+5+\cdots+(2n-3)+(2n-1) = n^2. So our database will end up with O(n^2) size just to store the sorted lists, and if someone executes a query overlapping all n intervals, we will iterate over that entire database and try to merge n^2 list elements to produce a final result set with n intervals in it. No good. And note that even if the query part was nice and fast, having to build an O(n^2)-sized search structure for n elements would make this completely impractical for large data sets.

Now, on to the actual paper: Chazelle points out that in the entire class of retrieval problems, if the input size is n, and the number of elements reported for a particular query is k, the worst-case time for that query will in general be of the form O(k + f(n)) where f is some (hopefully slow-growing, say logarithmic) function of n. This is because reporting a given result is not free; it’s generally an O(1) operation per result, so just listing the answer already costs O(k).

Consider extreme cases like our “return me the entire universe” range query: in that case, we have k=n, and if we have say f=\log, the resulting time complexity for that query is going to be O(n + \log(n)) = O(n); if we ask for the entire database, it really doesn’t make a big difference how smart our indexing structure is (or isn’t). The total operation cost is going to be dominated by the “reporting” portion.

This revised cost estimate tells us where we were going wrong with the structure we were building before. It’s important to be able to locate regions of the database overlapped by only a few intervals quickly. But when we have many “active” intervals, the cost of reporting them exceeds the cost of finding them anyway, and caching all those pre-made lists does more harm than good!

Instead, what we want to do is adapt the data structure to the size of the answer (the number of results given). In regions where there are only a few active intervals, we want to locate them quickly; where there are many active intervals, we don’t want to make a new list of intervals every time a single one of them begins or ends, because when we do, we’ll waste most of our query time merging almost-identical lists.

And that, finally, brings us to filtering search: instead of storing a new list every time the answer changes, we adapt the frequency with which we store lists to the number of active intervals. Once we do this, we need to do a bit more processing per list entry: we need to check whether it actually overlaps our query point (or interval) first – but that’s an O(1) check per list entry (this is the “filtering” part of “filtering search” – rejecting the false positives).
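
As a sketch of what the query side looks like with filtering (my own simplified code, not the paper’s exact construction): windows are (start, candidate list) pairs sorted by start coordinate, the candidate lists are allowed to be coarse, and every candidate gets the O(1) overlap check before being reported.

  import bisect

  # intervals[i] = (lo, hi); windows is sorted by window start, and each
  # window's candidate list may be a superset of the intervals truly active there.
  def query(intervals, windows, q_lo, q_hi):
      starts = [w[0] for w in windows]
      # Jump to the window containing q_lo, then walk right while windows
      # still begin before the query ends.
      i = max(bisect.bisect_right(starts, q_lo) - 1, 0)
      result = set()
      while i < len(windows) and windows[i][0] <= q_hi:
          for iid in windows[i][1]:
              lo, hi = intervals[iid]
              if lo <= q_hi and hi >= q_lo:   # the O(1) "filtering" check
                  result.add(iid)
          i += 1
      return sorted(result)

  intervals = [(0, 4), (2, 9), (6, 7)]              # made-up data
  windows = [(0, [0, 1]), (5, [1, 2]), (8, [1])]    # deliberately coarse windows
  print(query(intervals, windows, 3, 6))            # [0, 1, 2]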

In return, we get a significantly smaller database. Using the scheme described in section 3 of the paper (briefly: make the number of intervals per window proportional to the lowest number of active intervals at any point in the window), our data structure for n intervals takes O(n) space and gives O(k + \log n) query time (and with simple construction and search algorithms too, no hidden huge constant factors). And in particular, if we know that the entire database has size proportional to n, not only do we know that this will scale just fine to large data sets, it also means we won’t hit the “lots of time spent pointlessly merging lists” problem in our original approach.

That’s why I recommend this paper: the underlying insight is easy to understand – we can afford to be “sloppy” in areas where we’ll spend a lot of time reporting the results anyway – and the resulting algorithm is still quite simple, but the gains are significant. And as Chazelle notes, even if you don’t care about the particular problems he talks about, the overall approach is applicable to pretty much all retrieval problems.

13. McIlroy – “A Killer Adversary for Quicksort” (1999; algorithms/testing)

This paper is short (4 pages only!) and sweet. It’s not news that Quicksort will go quadratic for some inputs. Nor is it news how to write a mostly-Quicksort that avoids the quadratic case altogether; that was solved by Musser’s Introsort in 1997 (if you don’t know the trick: limit the recursion depth in Quicksort to some constant factor of a logarithm of the input size, and when you get too deep, switch to Heapsort).

The cool thing about this paper is that, given any Quicksort implementation that conforms to the C standard library qsort interface (the relevant part being that the sorting function doesn’t compare the data directly and instead asks a user-provided comparison function), it will produce a sequence that makes it go quadratic. Guaranteed, on the first try, under fairly mild assumptions, as long as the function in question actually implements some variant of Quicksort.

As-is, this paper is obviously useful to implementers of Quicksort. If you write one, you generally know that there are inputs that result in quadratic run time, but you might not know which.

But more generally, I think this paper is worth reading and understanding because it shows the value of adversarial testing: the solution isn’t hard; it relies on little more than knowing that a Quicksort proceeds as a sequence of partitioning operations, and an individual partition will compare all remaining elements in the current list against the chosen pivot. Treating the rest of the code as a black box, it turns out that knowing this much is basically enough to defeat it.
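
Here’s a rough Python rendition of that adversarial comparator, following my recollection of the paper’s construction (the original is written against C’s qsort, so details differ): values start out as “gas”, gas compares larger than anything already frozen solid, and whenever two gas values meet, the current pivot candidate gets frozen to the next small solid value – which drags every pivot towards the minimum of what’s left.

  import functools, random

  def count_comparisons(sort, n):
      GAS = n                     # compares larger than any solid value
      val = [GAS] * n             # value currently assigned to each item
      st = {"cand": 0, "ncmp": 0, "nsolid": 0}

      def cmp(x, y):
          st["ncmp"] += 1
          if val[x] == GAS and val[y] == GAS:
              frozen = x if x == st["cand"] else y
              val[frozen] = st["nsolid"]; st["nsolid"] += 1
          if val[x] == GAS:
              st["cand"] = x
          elif val[y] == GAS:
              st["cand"] = y
          return val[x] - val[y]

      sort(list(range(n)), functools.cmp_to_key(cmp))
      return st["ncmp"]

  # Try it against a textbook quicksort, random pivots and all:
  def quicksort(items, key):
      if len(items) <= 1:
          return items
      pivot = random.choice(items)
      less = [i for i in items if key(i) < key(pivot)]
      equal = [i for i in items if key(i) == key(pivot)]
      more = [i for i in items if key(i) > key(pivot)]
      return quicksort(less, key) + equal + quicksort(more, key)

  print(count_comparisons(quicksort, 500))   # grows quadratically, not like n*log(n)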

The details are different every time, but I found the paper quite inspiring when I first read it, and I’ve gotten a lot of mileage since out of building test cases by trying to make an algorithm defeat itself. Even (particularly?) when you don’t succeed, it tends to deepen your understanding.

14. Knuth – “Structured Programming with go to Statements” (1974; programming languages)

I’ll write “go to statements” as gotos in the following, since that’s the spelling we mostly have these days.

This paper is the primary source of the (in)famous “premature optimization is the root of all evil” quote. So to get that out of the way first, it’s on page 8 of the PDF, printed page number 268, right column, second paragraph. I recommend reading at least the two preceding paragraphs and the paragraph that follows before quoting it at others. It’s definitely different in context.

Zooming further out, this is a long paper covering a wide range of topics, a lot of which might not interest you, so I’ll go over the structure. It starts by recounting a lot of 1970s arguing back and forth about structured programming that’s mainly of historical interest; structured control flow constructs are a done deal these days.

After that, he goes over several algorithms and discusses their use of goto statements, which might or might not interest you.

One section that is definitely worthwhile if you haven’t seen it before is the section on “Systematic Elimination” starting on page 13 in the PDF (printed page number 273) which explains ways of converting any goto-ridden program into one that uses purely structured control flow and gives pointers to the underlying theory – note that some of these ways are quite anticlimactic!

The sections after are, again, interesting archaeology from a time before break and continue statements existed (the “structured” gotos we now generally consider unproblematic).

Then we get into “2. Introduction of goto statements”, which again is nowadays mostly standard compiler fare (covering certain control flow transformations), but worth checking out if you haven’t seen it before. Note that most modern compilers eventually transform a program into a set of basic blocks with an associated control-flow graph, which is essentially lists of statements connected by gotos; it turns out that such a (more expressive) notation makes control flow transformations simpler, and passes that need more structure generally recover it from the control flow graph. Thus, contrary to popular belief, such compilers don’t care all that much whether you write say loops using built-in loop constructs or via ifs and gotos. It does, however, make a difference semantically for objects with block-scoped lifetimes in languages that have them.

The final section I’ll point out is “Program Manipulation Systems”, starting on page 22 in the PDF (printed page number 282). Modern optimizers implement many more transforms than 1970s-era compilers do, yet still I feel like the problem Knuth observed in the 1970s still exists today (admittedly, speaking as a fairly low-level programmer): many of the transforms I wish to apply are quite mechanical, yet inexpressible in the source language. Various LISP dialects probably get closest; the metaprogramming facilities of more mainstream languages generally leave a lot to be desired. I still feel like there’s a lot of potential in that direction.

15. Bryant – “Graph-Based Algorithms for Boolean Function Manipulation” (1986; data structures)

For a long time, this used to be one of the most widely cited papers in all of CS. Not sure if that’s still the case.

It introduces a data structure properly called a Reduced Ordered Binary Decision Diagram (ROBDD), often just called BDDs, although there exist other variants that are not ordered or reduced, so there’s some potential for confusion. Watch out! That said, I’ll just be writing BDD for the remainder; note that I’m always talking about the ROBDD variant here.

So what is a BDD? It’s a data structure that encodes a Boolean truth table as a directed graph. It’s “reduced” because identical subgraphs are eliminated during construction, and “ordered” because the resulting BDD is specific to a particular ordering of input variables. The paper gives some examples.
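
As a hedged sketch of the “reduced” part (my own minimal node representation, not the paper’s notation): nodes are shared through a unique table, and a node whose two children coincide is never created at all, which is what makes two equal functions end up as the literally identical node:

  # Terminals for the constant functions; internal nodes are
  # (variable index, low child, high child), shared via a unique table.
  FALSE, TRUE = "0", "1"
  unique = {}

  def mk(var, lo, hi):
      if lo is hi:                  # reduction rule: the test doesn't matter
          return lo
      key = (var, id(lo), id(hi))   # children are already canonical, so identity works
      if key not in unique:         # hash-consing: reuse identical subgraphs
          unique[key] = (var, lo, hi)
      return unique[key]

  # Build "x1 AND x2" twice (variable order x1 < x2); both builds yield the
  # literally identical node, so an equivalence check is a pointer comparison.
  a = mk(1, FALSE, mk(2, FALSE, TRUE))
  b = mk(1, FALSE, mk(2, FALSE, TRUE))
  print(a is b)                     # True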

Why does anyone care? Because truth tables have exponential size, whereas BDDs for many functions of interest have very tractable size. For example, an important result is that BDDs for the outputs of binary adders of arbitrary widths (with the right variable ordering) have a linear number of nodes in the width of the adder. And a BDD shares many of the properties of a truth table: it’s easy to evaluate a function given as a BDD, it’s easy to build them incrementally out of a description of some logic function, and they’re canonical – that is, with a given variable ordering, there is exactly one ROBDD that represents a given function. Which in turn means that we can check whether two binary functions are identical by checking whether their BDDs agree.

This last property is why BDDs were such a hot topic when they were introduced (leading to the aforementioned high citation count): they are very handy for formal verification of combinational logic and other problems arising in hardware design.

For example, say you have a clever 64-bit adder design you wish to implement, but you need to prove that it gives the correct result for all possible pairs of 64-bit input numbers. Checking the proposed design by exhaustively testing all 2^{128} possible inputs is out of the question; even if we had a machine that managed to verify over a trillion combinations per second, say 2^{40} of them, and we had over a million such machines, say 2^{20} of them, we’d still have to wait 2^{68} seconds to get the result – that’s about 9.36 trillion years, 680 times the age of the universe.

This is not going to work. But luckily, it doesn’t need to: we can build a BDD for the new adder design, another BDD for a simple known-good adder design (say a ripple-carry adder), and check whether they agree. That validation, written in interpreted Python and run on a single workstation, takes just a moment.

BDDs aren’t a panacea; for example, BDDs for multiplier circuits have exponential size, so formally verifying those isn’t as easy. As far as I know, modern tools combine different (non-canonical) representations such as and-inverter graphs with SMT solvers to tackle more difficult Boolean equivalence problems.

Nonetheless, BDDs are a simple, elegant data structure that immediately solved a serious practical problem, and they’re worth knowing about.

Papers I like (part 2)

Continued from part 1.

Once I was about a thousand words into describing background for GEMM, it became pretty clear that it made more sense to group the numerical math papers into one post, so here goes the (out-of-order) numerical linear algebra special issue.

11. Goto, van de Geijn – “Anatomy of high-performance matrix multiplication” (2008; numerical linear algebra/HPC)

You might wonder: why do we care about matrix multiplication in particular so much? Who is it who’s doing these giant matrix multiplies? If you’re a customer of a linear algebra library, it’s not unlikely that you’re never calling GEMM (GEneral Matrix Multiply, the customary name for matrix multiply kernels, thanks to FORTRAN 77’s 6-character limit for function names) at all. So what gives?

Well, if you’re calling into a linear algebra library, odds are you want to solve a linear system of equations (which usually turns into a pivoted LU decomposition plus a solve step), a linear least-squares problem (depending on the problem and your accuracy requirements, this might turn either into a Cholesky decomposition or a QR decomposition, again followed by a solve step), or you want something fancier like the SVD (yet another matrix decomposition, and you probably still eventually want to solve a linear system – but you’re using the SVD because it’s noisy or ill-conditioned and you want to munge around with the singular values a bit).

What’s with all the focus on “decompositions”? Are numerical analysts secretly the world’s most pocket-protected goth band? No: a matrix decomposition (or factorization) expresses a more general matrix A as the product of several special matrices that have some desirable structure. For example, the LU decomposition turns our general matrix into a product A=LU where L is a unit lower triangular matrix and U is upper triangular (note: I’ll be ignoring pivoting in this post for simplicity). The LU decomposition is the industrial-strength counterpart of the Gaussian Elimination process you might have learned in school, but with some side benefits: you can decompose the matrix once and then reuse the factorization multiple times if you’re solving the same system many times with different right-hand sides (this is common in applications), and it also happens to be really nice to have the whole process in a form that can be manipulated algebraically, which we’ll be using in a moment.

But why does this decomposition help? Well, suppose we have a toy system with 3 equations and 3 unknowns, which can be written as a matrix equation Ax = b where A is a 3×3 matrix of coefficients and x and b are 3-element column vectors. If we have a LU decomposition for A, this turns into

Ax = LUx = \begin{pmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{pmatrix} \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}

How does this help? Well, we can’t solve the full thing yet, but we now have two fairly simple matrices. For now, let’s focus on the left matrix and treat y = Ux as an unknown:

LUx = Ly = \begin{pmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}

Well, this one’s easy: the first row just states that y_1 = b_1. The second row states that l_{21} y_1 + y_2 = b_2, and we know everything but y_2, so we can rearrange this to y_2 = b_2 - l_{21} y_1. With this, the final row l_{31} y_1 + l_{32} y_2 + y_3 = b_3 poses no problems either, yielding y_3 = b_3 - l_{31} y_1 - l_{32} y_2. So y_1 just falls out, given y_1 we can compute y_2, and given both we can compute y_3. This is called “forward substitution”. Note that we’re just computing y = L^{-1} b here. However, we’re never forming the inverse of L explicitly! This is important. In numerical LA, when you see an inverse, that means you’re supposed to use the corresponding “solve” routine. Actually computing the inverse matrix is generally both inefficient and inaccurate and to be avoided whenever possible.

Anyway, now that we have y, we can write out the y = Ux we defined it as, and use that to solve for the x we actually wanted:

Ux = \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}

This time, we’re going backwards: u_{33} x_3 = y_3 \Leftrightarrow x_3 = y_3 / u_{33}, u_{22} x_2 + u_{23} x_3 = y_2 \Leftrightarrow x_2 = (y_2 - u_{23} x_3) / u_{22}, and u_{11} x_1 + u_{12} x_2 + u_{13} x_3 = y_1 \Leftrightarrow x_1 = (y_1 - u_{12} x_2 - u_{13} x_3) / u_{11}. You will not be surprised to learn that this is called “backwards substitution”. Again, we’re just calculating x = U^{-1} y, which does not actually use a matrix inversion when U is triangular.
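
In code, the two substitution passes are each only a few lines; a hedged NumPy sketch (real libraries call the corresponding BLAS/LAPACK routines instead):

  import numpy as np

  def forward_subst(L, b):
      # Solve L y = b for unit lower triangular L.
      y = np.zeros_like(b, dtype=float)
      for i in range(len(b)):
          y[i] = b[i] - L[i, :i] @ y[:i]
      return y

  def back_subst(U, y):
      # Solve U x = y for upper triangular U.
      x = np.zeros_like(y, dtype=float)
      for i in reversed(range(len(y))):
          x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
      return x

  # Quick check on a made-up 3x3 system:
  L = np.array([[1.0, 0.0, 0.0], [0.5, 1.0, 0.0], [0.25, -1.0, 1.0]])
  U = np.array([[2.0, 1.0, 0.5], [0.0, 3.0, -1.0], [0.0, 0.0, 4.0]])
  b = np.array([1.0, 2.0, 3.0])
  x = back_subst(U, forward_subst(L, b))
  print(np.allclose(L @ U @ x, b))   # True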

And that’s how you solve a linear system given a LU decomposition. In BLAS-ese, solving a triangular system using forwards or backwards substitution for one right-hand side is called a TRSV (TRiangular Solve for a single Vector) – that single routine handles both types. It’s what’s called a level-2 BLAS operation. Level-1 operations are between two vectors, level-2 operations work on a matrix and a vector, and level-3 operations work on two matrices. More about “levels” in a bit.

That’s all dandy, but what does any of this have to do with GEMM? Hang on, we’re getting close. Let’s first generalize slightly: what if we want to solve multiple systems with the same A all at once? Say we want to solve two systems

A x^1 = b^1 \\ A x^2 = b^2

at once (using superscripts to denote the separate vectors, since I’m already using subscripts to denote components of a vector or matrix). It turns out that you can just write this as a single matrix equation

AX = A \left(\begin{array}{c|c} x^1 & x^2 \end{array}\right) = \left(\begin{array}{c|c} b^1 & b^2 \end{array}\right) = B

where we just group the column vectors for x into one matrix X, and the column vectors for b into another matrix B. Again we can solve this for a LU-decomposed A by forward and back substitution (remember, still not actually forming inverses!)

Y = L^{-1} B \\ X = U^{-1} Y

Note that we already know one direct way to solve this type of equation: loop over the columns of X (and B) and solve them one by one, as above. This kind of operation is called a TRSM: TRiangular Solve for Multiple right-hand sides, or TRiangular Solve for Matrix, our first level-3 BLAS operation.

Just to get used to the idea of dealing with multiple right-hand sides at once, let’s write down the full matrix equation form for a 6 equations, 6 unknowns unit lower triangular system with two separate right-hand sides explicitly:

\begin{pmatrix} 1 & & & & & \\ l_{21} & 1 & & & & \\ l_{31} & l_{32} & 1 & & & \\ l_{41} & l_{42} & l_{43} & 1 & & \\ l_{51} & l_{52} & l_{53} & l_{54} & 1 & \\ l_{61} & l_{62} & l_{63} & l_{64} & l_{65} & 1 \end{pmatrix} \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ x_{31} & x_{32} \\ x_{41} & x_{42} \\ x_{51} & x_{52} \\ x_{61} & x_{62} \end{pmatrix} = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \\ b_{41} & b_{42} \\ b_{51} & b_{52} \\ b_{61} & b_{62} \end{pmatrix}

As before, the first row tells us that \begin{pmatrix} x_{11} & x_{12} \end{pmatrix} = \begin{pmatrix} b_{11} & b_{12} \end{pmatrix}; the second row multiplied out gives l_{21} \begin{pmatrix} x_{11} & x_{12} \end{pmatrix} + \begin{pmatrix} x_{21} & x_{22} \end{pmatrix} = \begin{pmatrix} b_{21} & b_{22} \end{pmatrix}, and so forth, which we solve the exact same way as before, only now we’re always multiplying (and summing) short row vectors instead of single scalars.

But 6×6 is still really small as far as real-world systems of equations go and this is already getting really unwieldy. It’s time to chop the matrix into pieces! (You can always do that, and then work on blocks instead of scalars. This is really important to know.) Let’s just draw some lines and then cut up the matrices involved into parts:

\left(\begin{array}{cc|cccc} 1 & & & & & \\ l_{21} & 1 & & & & \\ \hline l_{31} & l_{32} & 1 & & & \\ l_{41} & l_{42} & l_{43} & 1 & & \\ l_{51} & l_{52} & l_{53} & l_{54} & 1 & \\ l_{61} & l_{62} & l_{63} & l_{64} & l_{65} & 1 \end{array}\right) \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ \hline x_{31} & x_{32} \\ x_{41} & x_{42} \\ x_{51} & x_{52} \\ x_{61} & x_{62} \end{pmatrix} = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ \hline b_{31} & b_{32} \\ b_{41} & b_{42} \\ b_{51} & b_{52} \\ b_{61} & b_{62} \end{pmatrix}

turns into the matrix equation

\begin{pmatrix} L_{11} & 0 \\ L_{21} & L_{22} \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}

where L_{11} and L_{22} are unit lower triangular, and L_{21} is just a general matrix. If we just multiply the matrix product out by blocks (again, the blocks behave like they’re scalars in a larger matrix, but you need to make sure the matrix product sizes match and be careful about order of multiplication because matrix multiplication doesn’t commute) we get two matrix equations:

L_{11} X_1 = B_1 \\ L_{21} X_1 + L_{22} X_2 = B_2

The first of these is just a smaller TRSM with a 2×2 system, and in the second we can bring the X_1 term to the right-hand side, yielding

L_{22} X_2 = B_2 - L_{21} X_1

On the right-hand side, we have a matrix multiply of values we already know (we computed X_1 with the smaller TRSM, and everything else is given). Compute the result of that, and we have another TRSM, this time with a 4×4 system.

The matrix multiply here is one instance of a GEneral Matrix Multiply (GEMM). The corresponding BLAS function computes C \leftarrow \alpha AB + \beta C, where the left arrow denotes assignment, A, B, and C are matrices, and α and β are scalar values. In this particular case, we would have A=L_{21}, B=X_1, C=B_2, \alpha=-1 and \beta=1.

So we cut the matrix into two parts, did a bit of algebra, and saw that our TRSM with a 6×6 L can be turned into a 2×2 TRSM, a GEMM of a 4×2 by a 2×2 matrix, and finally a 4×4 TRSM. Note the function of the matrix multiply: once we’ve computed two unknowns, we need to subtract out their contributions from every single equation that follows. That’s what the GEMM does. It’s the first matrix multiply we’ve seen, but does it matter?

Well, the next thing to realize is that we can do the splitting trick again for the 4×4 TRSM, turning it into 2 even smaller TRSMs, plus another GEMM. But now that we’ve established the idea of using blocks, let’s skip to a somewhat more serious problem size, so it becomes clear why this is interesting.

Let’s say our A is 1000×1000 (so 1000 equations in 1000 unknowns); its LU factors are the same size. This time, let’s say we’re doing 20 right-hand sides at once, and working in blocks of 30×30. We have the same equation as before:

\begin{pmatrix} L_{11} & 0 \\ L_{21} & L_{22} \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}

but this time L_{11} is 30×30 unit lower triangular, L_{21} is 970×30, L_{22} is 970×970 unit lower triangular, X_1 and B_1 are 30×20, and X_2 and B_2 are 970×20. Again we do the same 3 steps:

  1. X_1 \leftarrow L_{11}^{-1} B_1 (TRSM, 30×30 matrix times 30×20 RHS)
  2. B_2 \leftarrow B_2 - L_{21} X_1 (GEMM, 970×30 times 30×20)
  3. X_2 \leftarrow L_{22}^{-1} B_2 (TRSM, 970×970 times 970×20)

Now the computational costs of an m×n-by-n×p TRSM and an m×n-by-n×p GEMM (the middle dimensions always have to match) are both roughly 2×m×n×p floating-point operations (flops, not to be confused with all-uppercase FLOPS, which conventionally denote flops/s, because nomenclature is stupid sometimes). Which means the first step above (the medium TRSM) takes on the order of 36000 flops, while the second step (the GEMM) takes about 1164000 flops. In short, of these two steps, step 1 is basically a rounding error in terms of execution time.

And note that we’re splitting a large × large TRSM into a medium × medium TRSM, a large × small GEMM, and a final large × large (but smaller than the original) TRSM. And we can keep doing the same splitting process to that remaining large TRSM, until it becomes small as well. In short, this process allows us to turn a large TRSM into a sequence of medium-size TRSM (always the same size), alternating with large-size GEMMs (which keep getting smaller as we proceed down). And what happens if you look at the matrix as a whole is that we end up doing a small amount of actual TRSM work near the diagonal, while the rest of the matrix gets carpet-bombed with GEMMs.
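
Here’s roughly what that blocked TRSM loop looks like as a NumPy/SciPy sketch (block size, scaling and problem sizes are made up, and a real BLAS implementation is far more careful about memory layout):

  import numpy as np
  from scipy.linalg import solve_triangular

  def blocked_trsm_lower(L, B, bs=30):
      # Solve L X = B for unit lower triangular L, in blocks of bs rows.
      X = B.copy()
      n = L.shape[0]
      for k in range(0, n, bs):
          e = min(k + bs, n)
          # Small TRSM on the diagonal block...
          X[k:e] = solve_triangular(L[k:e, k:e], X[k:e],
                                    lower=True, unit_diagonal=True)
          # ...then one GEMM to subtract these freshly computed unknowns'
          # contribution from every equation below.
          X[e:] -= L[e:, k:e] @ X[k:e]
      return X

  rng = np.random.default_rng(0)
  n, nrhs = 1000, 20
  # Unit lower triangular, scaled to keep this toy example well-conditioned.
  L = np.eye(n) + np.tril(rng.standard_normal((n, n)), -1) / n
  B = rng.standard_normal((n, nrhs))
  print(np.allclose(L @ blocked_trsm_lower(L, B), B))   # True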

In short, even though what we wanted to do was solve a pre-factored linear system for a bunch of different right-hand sides, what the computer actually ended up spending its time computing was mostly matrix multiplies. The GEMM calls are coming from inside the solver! (Cue scary music.)

Alright. At this point you might say, “fair enough, that may indeed be what happens if you use this TRSM thing that for all we know you just made up, but I for one am not ever asking the computer to solve the same equation with 50 different right-hand sides in a batch, so how does this apply to me?” Okay then, let’s have a look at how LU factorizations (which so far I’ve assumed we just have lying around) are actually computed, shall we? (And again, note I’m ignoring pivoting here, for simplicity.)

What we want to do is factor our matrix A into a unit lower triangular and an upper triangular factor:

A = LU

So, how do we do that? Just keep staring at that equation for a minute longer, see if it flinches first! It doesn’t? Bugger. Okay, plan B: apply our new favorite trick, splitting a matrix into blocks, to play for time until we get a brilliant idea:

\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} = \begin{pmatrix} L_{11} & 0 \\ L_{21} & L_{22} \end{pmatrix} \begin{pmatrix} U_{11} & U_{12} \\ 0 & U_{22} \end{pmatrix}

Our top-left block A_{11} needs to be square (same number of rows as columns), else this isn’t right, but it can be any size. This makes A_{22} square as well, and the other blocks are rectangular. The zeros are there because we want L and U to be lower and upper triangular respectively, and their entire top-right respectively bottom-left blocks better be all zero. Furthermore, L_{11} and L_{22} are also unit lower triangular (like the bigger L we carved them out of), and likewise U_{11} and U_{22} are upper triangular. About the remaining L_{21} and U_{12}, we can’t say much.

Still drawing blanks on the ideas front. But let’s just keep going for now: if we multiply out that matrix equation, we get

A_{11} = L_{11} U_{11} \\ A_{12} = L_{11} U_{12} \\ A_{21} = L_{21} U_{11} \\ A_{22} = L_{21} U_{12} + L_{22} U_{22}

Wait a second. That first line is a smaller LU decomposition, which we’re trying to figure out how to compute. But suppose we knew for now, and we had something that gave us L_{11} and U_{11}. Then that second line is really just U_{12} = L_{11}^{-1} A_{12}. That’s a TRSM, we just went over that. And the third line is L_{21} = A_{21} U_{11}^{-1}, which is also a TRSM (of a shape we haven’t seen before, but it works the same way). Once we have L_{11} and U_{11}, our hand is forced with regards to these two matrices; for the factorization to multiply to the correct result A, we need them to be the things I just wrote down. And if we know these two matrices, we can attack that last equation by moving their product to the left-hand side:

A_{22} - L_{21} U_{12} = L_{22} U_{22}

Hey look, we do a big GEMM and then resume with computing a LU decomposition of the remainder – we’ve seen that kind of structure before! Great. This is how to do a block LU decomposition: compute a LU decomposition of the top-left block, two TRSMs, one GEMM to update the bottom-right part, then keep decomposing that. And this time the TRSMs are on medium × medium × large problems, while the GEMM is on large × medium × large, so again the bulk of the computation is going to be spent in the GEMM part.

But we still don’t know how to compute the LU decomposition of that top-left block. No worries: if in doubt, go for a cheap shot. We don’t know how to do this for an arbitrary block. But what if we make our partition really silly? As in, “A_{11} is a 1×1 element matrix” levels of silly? (That is, we’re just splitting off one row and one column at the top left.)

Then A_{11} = L_{11} U_{11} is a no-brainer; all three of these matrices are 1×1, and we require L_{11} to be “unit” (ones on the diagonal), which for a 1×1 matrix just means that L_{11}=1. Therefore A_{11} = U_{11}. Ta-daa! We “solved” a 1×1 LU decomposition. But that’s all we really need. Because once we have that one value U_{11} determined, we can crank through our other 3 formulas, which give us U_{12} (the rest of the top row of U), L_{21} (the rest of the left column of L), and an update of the rest of the matrix that eliminates the one variable we just computed. To compute a LU decomposition of a block, we simply keep peeling off 1×1 sub-blocks until we run out of matrix to decompose.

This description covers both “regular” and block LU decompositions (in fact we just do blocks and then get the regular decomposition as a special case when we do 1×1 blocks, at which point the problem becomes trivial), and not a single index or elementary row operation was harmed in the preceding text.

Note that this time, we turned LU decomposition (i.e. Gaussian elimination) into mostly-GEMMs-and-some-block-TRSMs, and we already saw earlier that block TRSMs turn into mostly-GEMMs-and-some-small-TRSMs. Therefore, the entire process of factoring a linear system and then solving it turns into… mostly GEMMs.
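Spelled out the same way, the blocked version is just those four equations again, with loops over blocks: factor the diagonal block with lu_unblocked from the sketch above, do the two TRSMs, do one GEMM on the trailing submatrix, then keep going. Again a sketch with my own naming, and with the TRSM and GEMM loops written out naively inline where a real implementation would call tuned routines:

  /* Blocked LU, no pivoting, block size nb; uses lu_unblocked from above. */
  static void lu_blocked(double *a, size_t n, size_t lda, size_t nb)
  {
      for (size_t k = 0; k < n; k += nb) {
          size_t b = (n - k < nb) ? n - k : nb;   /* size of the current diagonal block */

          lu_unblocked(&a[k*lda + k], b, lda);    /* A11 = L11 * U11 */

          /* TRSM: U12 = L11^-1 * A12 (L11 is unit lower triangular). */
          for (size_t j = k + b; j < n; j++)
              for (size_t i = k + 1; i < k + b; i++)
                  for (size_t p = k; p < i; p++)
                      a[i*lda + j] -= a[i*lda + p] * a[p*lda + j];

          /* TRSM: L21 = A21 * U11^-1 (U11 is upper triangular). */
          for (size_t i = k + b; i < n; i++)
              for (size_t j = k; j < k + b; j++) {
                  for (size_t p = k; p < j; p++)
                      a[i*lda + j] -= a[i*lda + p] * a[p*lda + j];
                  a[i*lda + j] /= a[j*lda + j];
              }

          /* GEMM: A22 -= L21 * U12 -- this is where the bulk of the work goes. */
          for (size_t i = k + b; i < n; i++)
              for (size_t j = k + b; j < n; j++)
                  for (size_t p = k; p < k + b; p++)
                      a[i*lda + j] -= a[i*lda + p] * a[p*lda + j];
      }
  }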

And that’s why everyone cares about GEMMs so much. (And also, you may now see why even if you don’t use TRSMs, math libraries still include them, because the solvers your app code calls want to call them internally!)

This pattern is not just specific to Gaussian Elimination-type algorithms, either. Block Householder for QR decompositions? Heavy on GEMMs. Hessenberg reduction for Eigenvalue problems? Basically Householder, which is mostly GEMMs. Computation of the Singular Value Decomposition (either for solvers or to get PCAs)? Generally starts with Golub-Kahan Bidiagonalization or one of its relatives, which is a somewhat fancier version of the QR decomposition algorithm, and yup, lots of GEMMs again. Then the actual singular value computation is iterative on that bidiagonal matrix, but that part tends to take less time than the non-iterative parts surrounding it, because the iteration is only iterating on a matrix reduced to 2 diagonals, whereas everything else works with the whole matrix.

In fact, what we’ve seen so far is a pattern of various matrix operations turning into smaller versions of themselves, plus maybe some other matrix operations, plus a GEMM. Guess what happens with a GEMM itself? If your guess was “GEMMs all the way down”, you’re right. It’s like a weed. (And turning GEMMs into smaller GEMMs is, in fact, quite important – but that’s in the paper, so I won’t talk about it here.)

This concludes our brief foray into dense numerical LA and why HPC people are so obsessed with GEMM performance. Note that dense problems are basically the easy case, at least from a high-level point of view; many of the things that are really interesting are huge (millions of equations and variables) but sparse and with exploitable structure, and these take a lot more care from the user, as well as more sophisticated algorithms. (Those algorithms will nevertheless usually end up calling into a dense LA library for their bulk computations.)

Now that I’ve hopefully satisfyingly answered why GEMM, let’s talk a bit about the actual paper. The presentation I gave you of splitting up a matrix into blocks wasn’t just for notational convenience; that’s how these algorithms tend to work internally. The reason is that large matrices are, well, large. There’s an inherent 2D structure to these algorithms and completely looping over one axis of a giant matrix tends to thrash the cache, which in turn means there are suddenly lots of main memory accesses, and at that point you lose, because current computers can do way more computation per unit of time than they can do memory-to-cache transfers. If you truly want to do high-performance computation, then you have to worry about memory access patterns. (In fact, that’s most of what you do.)

That is something I pushed on the stack earlier in this post: the different BLAS levels. This is an old chestnut, but it’s worth repeating: level-1 BLAS operations are vector-vector; something like say a big dot product (DOT in BLAS). Doing a dot product between two N-element vectors is on the order of 2N flops, and 2N memory operations (memops) to load the elements of the two vectors. 1:1 flops to memops – no good! Level-2 BLAS operations are matrix-vector; take say an M×N matrix times N-element vector multiplication (GEMV, GEneral Matrix times Vector). That does 2MN flops and roughly MN + N + M memops (load all matrix elements once, load each input vector element once, store each output element once); closer to 2:1 flops to memops, which is an improvement, but still bad if you have the SIMD units to compute 32 single-precision flops per cycle and core and the main memory bandwidth to load half a float per cycle and core (the size of this gap is similar for CPUs and GPUs, for what it’s worth). It just means that your performance drops off a cliff once you’re working on data larger than your cache size.

Level-3 BLAS operations like GEMM, however, do 2MNP flops against just MN+NP+MP necessary memops (load each input matrix element once, store the result). That means the flops to memops ratio can in principle get arbitrarily large if only the matrices are big enough. Which in turn means that high-performance GEMMs are all about making sure that they do, in fact, reuse matrix elements extensively once they’re in the cache, and making sure that all the different cache levels are happy.
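To put a number on “arbitrarily large”: for square matrices with M = N = P, that ratio works out to

\displaystyle \frac{2MNP}{MN + NP + MP} = \frac{2N^3}{3N^2} = \frac{2N}{3}

so at N = 1024 we’re already looking at roughly 680 flops per necessary memory operation – far more than the compute-to-bandwidth gap quoted above, which is exactly the headroom a GEMM needs to hide its memory traffic behind computation, provided the implementation actually achieves that reuse.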

The way this works is interesting and worth studying, and that’s why that paper was on my list. Whew.

31. Bientinesi, van de Geijn-“Formal Correctness and Stability of Dense Linear Algebra Algorithms” (2005; numerical LA)

According to my headings, all of the above was about the matrix multiplication paper. Well, what can I say? I was lying.

That whole business with deriving our LU decomposition by partitioning our matrix into blocks, writing down equations for the individual block elements, and then winging our way towards a solution? That’s, essentially, this paper. Except the real paper is a lot more rigorous and consequently with a lot less “winging it”.

Partitioning matrices into blocks for fun and profit is a numerical linear algebra mainstay. I saw it done for a few algorithms at university. The work of this group at UT Austin (especially the stuff following from this paper) is what made me realize just how general and systematic it can be when it’s done right.

For a large class of dense LA algorithms, this procedure is well-specified enough to derive a working algorithm automatically from a problem description, complete with correctness and numerical stability analysis, within seconds; no inspiration required. It’s an algorithm-derivation algorithm. For a very limited (and fairly rigidly structured) problem domain, but still!

This is really cool and I like it a lot.

Papers I like (part 1)

This started out on Twitter and I expected to have to write up maybe 50 or so, but then it made the rounds. I don’t think I have 1200+ papers that I’d recommend reading, and even if I did, I don’t think a reading list of that length is actually useful to anyone.

The original list had tweet-length comments. I have no such space limitation on this blog, so I’ll go ahead and write a few paragraphs on each paper. That does mean that we’re now back to lots of text, which means I’ll split it into parts. Such is life. In this post, I’ll do the first ten papers from my list.

One thing I noticed while writing the list is that the kinds of papers I like most are those that combine solid theory with applications to concrete problems. You won’t find a lot of pure theory or pure engineering papers here. This is also emphatically not a list of “papers everyone should read” or any such nonsense. These are papers I happen to like, and if you share some of my interests, I think you’ll find a lot to like in them too. Beyond that, no guarantees.

1. Lamport-“State the Problem Before Describing the Solution” (1978; general)

It’s a one-page memo, and yeah, the gist of it’s already in the title. Read it anyway. This applies to papers and code both.

This is not just a matter of style. If you don’t have a description of the problem independent of your proposed solution, usually the best you can say about your solution (in terms of proofs or invariants) is “it does what I think it does”. In code, the equivalent is a long, complicated process that everybody is afraid of touching and nobody is really sure what it does – just that things tend to break when you touch it.

2. Herlihy-“Wait-free synchronization” (1991; concurrency)

This is probably the most important single paper ever written about shared-memory multiprocessing. When it was written, such systems were nowhere near as common as they are now and each architecture had its own set of atomic synchronization primitives – chosen primarily by whatever the kernel programmers at the respective companies thought might come in handy. Pretty much everyone had a decent way to implement a basic spinlock, but beyond that, it was the wild west.

Herlihy had, in joint work with Wing, formally defined linearizability a few years earlier, in 1987 – and if it feels odd to you that we first got an “industrial-strength” definition of, effectively, atomicity a few decades after the first multi-processor supercomputers appeared, that should tell you everything you need to know about how far theory was lagging behind practice at the time.

That’s the setting for this paper. Mutual exclusion primitives exist and are well-known, but they’re blocking: if a thread (note: this description is anachronistic; threads weren’t yet a mainstream concept) holding a lock is delayed, say by waiting for IO, then until that thread is done, nobody else who wants to hold that lock will make any progress.

Herlihy starts by defining wait-freedom: “A wait-free implementation of a concurrent data object is one that guarantees that any process can complete any operation in a finite number of steps, regardless of the execution speeds of other processors”. (This is stronger than nonblocking aka “lock-free”, which guarantees that some thread will always be making progress, but can have any particular thread stuck in a retry loop for arbitrarily long).

Given that definition, he asks the question “with some atomic operations as primitives, what kinds of concurrent objects (think data structures) can we build out of them?”, and answers it definitively via consensus numbers (short version: virtually all primitives in use at the time can only solve 2-processor consensus wait-free, but compare-and-swap has a consensus number of infinity, i.e. it can be used to achieve consensus between an arbitrary number of processors wait-free).

The paper then goes on to show that, if the consensus number of available primitives is at least as high as the number of processors in a system, then in fact any operation can be done wait-free (universality). Note that this type of universal construction has high overhead and is therefore not particularly efficient, but it’s always possible.

This is a great paper and if you’re interested in concurrency, you should read it.

3. Cook – “How complex systems fail” (1998; complex systems)

This is about all kinds of complex systems, not just computer-related ones. Cook is an MD researching incidents in emergency medicine; this four-page write-up (the fifth page is bibliography) summarizes some of the most salient findings. It’s short, non-technical and relevant to everyone interacting with complex systems in their daily lives (which is, in modern society, everyone).

4. Moffat, Turpin – “On the Implementation of Minimum Redundancy Prefix Codes” (1997; data compression)

My tweet-length summary was “Much has been written about Huffman coding, a lot of it wrong. Almost everything worth knowing is in this (short!) paper”. I have no space limit here, so let me explain what I mean by that: the “folk understanding” of Huffman coding is that you start by building a data structure, the Huffman tree, using Huffman’s algorithm, then you transfer a description of the tree “somehow” (imagine vigorous hand-waving here), and then the encoder produces the bit stream by encoding paths from the root to the leaves (corresponding to individual symbols) in the Huffman tree, and the decoder works by reading a bit at a time and walking down the appropriate branch of the tree, producing the right symbol and going back to the root when it hits a leaf node.

Sounds about right to you? It shouldn’t, because every single part of what I just said is either a bad idea or outright wrong in practice. Let’s start with the trees. You don’t actually want a binary tree data structure for your code – because neither encoder nor decoder should ever be working a bit at a time. In fact, the preferred approach is generally to use canonical Huffman coding which not only never builds a tree, but also doesn’t care about the codes assigned during the original Huffman tree-building process. All it cares about is the code lengths, which it uses to build an equivalent (in terms of compression ratio) encoding that is easier to decode (and also easier to transmit efficiently in the bit stream, because it has fewer redundant degrees of freedom).
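To make “all it cares about is the code lengths” concrete, here’s a minimal sketch of canonical code assignment in C (this is the standard construction, as used by e.g. Deflate; the function and variable names are mine):

  #include <stdint.h>

  #define MAX_CODE_LEN 16

  /* Assign canonical codes from code lengths alone.
     len[i] = code length in bits for symbol i (0 = symbol unused);
     code[i] receives the canonical code in its low len[i] bits. */
  void canonical_codes(const uint8_t *len, uint32_t *code, int num_syms)
  {
      uint32_t len_count[MAX_CODE_LEN + 1] = { 0 };
      uint32_t next_code[MAX_CODE_LEN + 1];

      for (int i = 0; i < num_syms; i++)
          len_count[len[i]]++;
      len_count[0] = 0;                  /* unused symbols get no code */

      /* First code of each length: codes of length l start right after the
         codes of length l-1, shifted left by one bit. */
      uint32_t c = 0;
      for (int l = 1; l <= MAX_CODE_LEN; l++) {
          c = (c + len_count[l - 1]) << 1;
          next_code[l] = c;
      }

      /* Hand out codes in symbol order within each length. */
      for (int i = 0; i < num_syms; i++)
          if (len[i] != 0)
              code[i] = next_code[len[i]]++;
  }

A decoder can work directly off the same per-length counts (for example with a small table per length, or a multi-bit lookup table), without ever touching a tree.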

You might also not want to use Huffman’s algorithm directly; actual Huffman code words for rare symbols can become really long. In practice, you want to limit them to something reasonable (I don’t know any format offhand that allows codes longer than about 20 bits; 16 bits is a typical limit) to simplify bit IO in the encoder and decoder. So to avoid over-long codes, there’s either some post-processing that essentially “rotates” parts of the tree (making some longer codes shorter and some shorter codes longer), or code lengths are constructed using a different algorithm altogether.

This paper gets all these things right, and that’s why it’s the one I would recommend to anyone who wants to learn how to do Huffman coding right. As a final note, let me quote a passage from the conclusion of the paper:

In particular, we observe that explicitly tree-based decoding is an anachronism and usually best avoided, despite the attention such methods have received in textbooks, in the research literature, and in postings to the various network news groups.

It’s been 20 years; this is just as true now as it was then.

5. Dybvig, Hieb, Butler – “Destination-Driven Code Generation” (1990; compilers)

I also linked to Millikin’s “One-pass Code Generation in V8” as a companion piece.

Code generation for a simple stack-based virtual machine is really easy; it’s little more than a post-order traversal of the abstract syntax tree and some label management. If execution speed isn’t much of a concern, at that point you write a bytecode interpreter for said VM and call it a day.

But what if you do care about speed? Well, the next step is to generate actual machine code that still maintains the explicit stack (to get rid of the interpreter loop). Once you do that, it tends to become clear how much redundant stack manipulation and data movement you’re doing, so if you’re still not satisfied with the speed, what are you to do?

The most common answers are either to add some local smarts to the code generator, which gets messy alarmingly quickly (see the Millikin presentation for examples), or to give up and go for a “real” optimizing compiler back end, which is orders of magnitude more code and complexity (and has, not surprisingly, much higher compile times).

DDCG is a great compromise. It’s good at avoiding most of the redundant data movement and unnecessary control flow, while still fitting nicely within a simple one-pass code generator. That makes it a popular technique for JITs that need to generate passable machine code on tight deadlines.

6. Valmari – “Fast brief practical DFA minimization” (2012; theoretical CS/automata theory)

Take an n-state DFA with m transitions and an input alphabet of size k as input, and produce the corresponding minimal (by number of states) DFA – i.e. an equivalent DFA that is smaller (or at worst the same size as the input).

This is a classic automata theory problem; it’s relatively straightforward to design an algorithm that works in O(kn^2) time (usually credited to Moore), and there’s a 1971 algorithm by Hopcroft that solves the problem in O(nk \log n) time. So why am I linking to a 2012 paper?

Hopcroft’s algorithm is complicated. Textbooks tend to omit it entirely because its correctness proof and run-time analysis are considered too hard. Hopcroft’s 1971 paper was followed by a 1973 paper by Gries, “Describing an algorithm by Hopcroft”, that aims to improve the exposition. Nearly 30 years later, Knuutila wrote another follow-up paper, “Re-describing an algorithm by Hopcroft”, that still bemoans the lack of a readable exposition and gives it another go, spending 31 pages on it, and showing along the way that Hopcroft and Gries do not actually describe quite the same algorithm – and that his new algorithm is slightly different still (intentionally this time, to make it somewhat easier to analyze).

This is not what successful replication looks like, and a profoundly unsatisfying state of affairs for such a relatively fundamental (and clean) problem.

And that’s why I’m linking to Valmari’s 2012 paper. This paper, unlike most of the previous ones, works correctly with partial DFAs (i.e. those that don’t have transitions defined for every (state,input) pair), includes working code in the body of the paper (about 130 lines of C++, formatted for 42-char lines to fit in the two-column limit, resulting in one-letter variable names for everything; correspondingly it’s quite dense, but readable enough once you get used to the names), and takes a total of 5 pages for the algorithm description, correctness proof, and run-time analysis – the latter is O(n + m \log m); since m \le nk, this is as good as Hopcroft’s algorithm.

The algorithm itself is rather pretty too, working by alternatingly refining a partition on the states and the transitions of the input DFA in a pleasantly symmetric way. The data structures are simple but clever.

It’s taken an extra 41 years, but I think we can actually consider this problem truly solved now.

7. Sarnak, Tarjan – “Planar Point Location Using Persistent Search Trees” (1986; computational geometry/data structures)

Planar point location (locating which polygon of a polygonal subdivision of the plane a query point falls in) is an interesting problem in its own right, but in retrospect, it’s just a side dish. What this paper really does is introduce space-efficient persistent search trees, using the “fat node” method. It’s a simple and very versatile idea (section 3 of the paper).

With persistent search trees, building the point location structure itself is a straightforward sweep: do a plane sweep from left to right, keeping track of line segments of the polygonal subdivision currently intersecting the sweep line. Their relative ordering only changes at vertices. The algorithm simply keeps track of the currently active line segments in a persistent red-black tree. Once the sweep is done, the persistent tree encodes all “slabs” between line segments (and thus the polygons between them) for arbitrary x coordinates, and can be queried in logarithmic time. It’s a textbook example of good data structures making for good algorithms.

8. Porter, Duff – “Compositing Digital Images” (1984; computer graphics)

The paper that introduced, well, digital image compositing, premultiplied (also known as “associated”) alpha, and what is now known as the “Porter-Duff operators” (“over” being the most important one by far). All these concepts are fairly important in image processing; sadly the paper under-sells them a bit.

In particular, premultiplied alpha is introduced as little more than a hack that saves a fair amount of arithmetic per pixel. While true, this is one of those cases where applications were ahead of the theory.

It turns out that a much stronger argument for premultiplication isn’t that it saves a bit of arithmetic, but that it solves some important other problems: when image data is stored in pre-multiplied form, (linear) filtering and compositing operations commute – that is, filtering and compositing can be exchanged. This is not the case with non-premultiplied alpha. In particular, scaling non-premultiplied images up or down before compositing tends to introduce fringing at the edges, as color values of transparent pixels (which should not matter) bleed in. Many try to mitigate this by preprocessing say sprite data so that the color values for transparent pixels adjacent to opaque or partially translucent pixels are “close” to minimize the error, but premultiplied alpha is cheaper, simpler to implement and completely free of errors. It’s generally the representation of choice for any linear filtering/interpolation tasks. And it also turns out that compositing operations themselves (if you care about the alpha value of the result) are a lot simpler in premultiplied alpha, whereas non-premultiplied ends up with various complications and special cases.
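For illustration, here’s what “over” looks like in premultiplied form – one and the same linear blend for every channel, alpha included (a minimal sketch; the struct and names are mine):

  /* "Over" compositing with premultiplied alpha: every channel, including
     alpha itself, uses the same formula out = src + (1 - src_alpha) * dst. */
  typedef struct { float r, g, b, a; } RGBA;   /* premultiplied color */

  RGBA over(RGBA src, RGBA dst)
  {
      float k = 1.0f - src.a;                  /* how much of dst shows through */
      RGBA out;
      out.r = src.r + k * dst.r;
      out.g = src.g + k * dst.g;
      out.b = src.b + k * dst.b;
      out.a = src.a + k * dst.a;
      return out;
  }

The non-premultiplied version has to treat alpha separately and divide by the result alpha to get meaningful color values (with a special case when that alpha is zero), which is exactly the kind of complication alluded to above.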

A good read on these issues is Jim Blinn’s columns in IEEE Computer Graphics and Applications, specifically those on Image Compositing, which are unfortunately pay-walled so I can’t link to them. They are collected in his book “Dirty Pixels” though, which is a recommended read if you’re interested in the topic.

9. Brandt – “Hard Sync Without Aliasing” (2001; DSP)

Probably the most popular class of synthesizers employs what is called “subtractive synthesis”. The idea is fairly simple: start out with an oscillator generating an “interesting” waveform (one with lots of overtones) at a specified frequency, and then proceed to shape the result using filters, which emphasize some of the overtones and de-emphasize others. It’s called subtractive synthesis because we start out with lots of overtones and then remove some of them; the converse (additive synthesis) builds up the overtone series manually by summing individual sine waves.

Typical good waveforms to use are sawtooth or pulse waves. Generating these seems really simple in code, but there’s a catch: in DSP, we generally represent waveforms by storing their values at regularly spaced discrete sample times; the original continuous signal can be recovered from these samples if and only if it is bandlimited. (This is the famous sampling theorem).

Neither sawtooth waves nor pulse waves are bandlimited. To get a bandlimited (and thus representable) version of them, you have to cut off the overtone series at some point; for sampling, this is the Nyquist frequency (half the sampling rate).
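To see what “cut off the overtone series” means in practice, here’s the brute-force reference way to generate one sample of a bandlimited sawtooth: additive synthesis, summing harmonics up to (but not including) Nyquist. A sketch for illustration only – far too slow for a real synth, and the sign/phase convention is my own choice:

  #include <math.h>

  /* One sample of a bandlimited sawtooth at time t (seconds), fundamental
     frequency freq (Hz), sample rate fs (Hz). Sums all harmonics below Nyquist. */
  double bandlimited_saw(double t, double freq, double fs)
  {
      const double pi = 3.14159265358979323846;
      double nyquist = 0.5 * fs;
      double sum = 0.0;
      for (int k = 1; k * freq < nyquist; k++)
          sum += sin(2.0 * pi * k * freq * t) / k;   /* k-th harmonic, amplitude 1/k */
      return (2.0 / pi) * sum;                       /* roughly -1..1 */
  }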

If you don’t, and just sample the continuous non-bandlimited waveforms, the signal you get is similar to what a “proper” bandlimited sawtooth/pulse wave would look like, but it’s got a lot of other noisy junk in it, corresponding to frequencies above Nyquist that weren’t representable and got reflected back into the representable part of the spectrum – this is “aliasing”.

If you’re doing sound synthesis, you want clean signals without such aliasing noise. The problem is that generating them directly is hard in general. Restricted cases (perfect sawtooth or square waves where you never change the frequency or other parameters) are reasonably easy, but that doesn’t help you if you want to tweak knobs describing the shape of the waveform in real time, doing pulse-width modulation, or various other tricks.

One such other trick is “hard sync”; two oscillators, where the first oscillator resets the phase of the second whenever it wraps around. This is done in analog synthesizers and the resulting sound is harmonically quite interesting, but the resulting waveform is complex enough to defeat straightforward attempts at description using additive synthesis or similar.

It would all be much simpler if you could work with the convenient non-bandlimited saw/pulse waves, where everything is easy, and just subtract the aliasing later. Well, turns out that because sampling and everything else involved in this chain is linear, you can in fact do exactly that.

What this paper does is introduce what’s called “MinBLEPs” – minimum-phase BandLimited stEPs. You simply work with the convenient continuous functions, and whenever a discontinuity is introduced, you insert a corresponding MinBLEP, which cancels out the aliasing. It’s a beautiful solution to a long-standing practical problem.

10. Veach – “Robust Monte Carlo Methods for Light Transport Simulation” (1997; computer graphics)

This one’s a PhD thesis, not a paper. I’ve been active in computer graphics for a long time, but there aren’t many graphics papers linked in here. That’s partly because I have many interests outside of CG, but also due to the nature of computer graphics research, which tends to be rapid-paced and very incremental. That makes it hard to single out individual papers, for the most part.

Veach’s PhD thesis sticks out, though. When it was initially published, Veach’s research was groundbreaking. Now, nearing its 20th anniversary (it was published in December of 1997), it’s probably better summarized as “classic”, with important contributions to both theory and algorithms.

This is not light reading, but it’s definitely a must-read for anyone interested in unbiased rendering.

Memory bandwidth

Absolute memory bandwidth figures tend to look fairly large, especially for GPUs. This is deceptive. It’s much more useful to relate memory bandwidth to say the number of clock cycles or instructions being executed, to get a feel for what you can (and can’t) get away with.

Let’s start with a historical example: the MOS 6502, first released in 1975 – 42 years ago, and one of the key chips in the microcomputer revolution. A 6502 was typically clocked at 1MHz and did a 1-byte memory access essentially every clock cycle, which are nice round figures to use as a baseline. A typical 6502 instruction took 3-5 cycles; some instructions with more complex addressing modes took longer, a few were quicker, and there was some overlapping of the fetch of the next instruction with execution of the current instruction, but no full pipelining like you’d see in later (and more complex) workstation and then microcomputer CPUs, starting around the mid-80s. That gives us a baseline of 1 byte/cycle and let’s say about 4 bytes/instruction memory bandwidth on a 40-year old CPU. A large fraction of that bandwidth went simply into fetching instruction bytes.

Next, let’s look at a recent (as of this writing) and relatively high-end desktop CPU. An Intel Core i7-7700K has about 50GB/s of memory bandwidth and 4 cores, so if all 4 cores are under equal load, they get about 12.5GB/s each. They also clock at about 4.2GHz (it’s safe to assume that with all 4 cores active and hitting memory, none of them is going to be in “turbo boost” mode), so they come in just under 3 bytes per cycle of memory bandwidth. Code that runs OK-ish on that CPU averages around 1 instruction per cycle, well-optimized code around 3 instructions per cycle. So well-optimized code running with all cores busy has about 1 byte/instruction of available memory bandwidth. Note that we’re 40 years of Moore’s law scaling later and the available memory bandwidth per instruction has gone down substantially. And while the 6502 is an 8-bit microprocessor doing 8-bit operations, these modern cores can execute multiple (again, usually up to three) 256-bit SIMD operations in one cycle; if we treat the CPU like a GPU and count each 32-bit vector lane as a separate “thread” (appropriate when running SIMT/SPMD-style code), then we get 24 “instructions” executed per cycle and a memory bandwidth of about 0.125 bytes per cycle per “SIMT thread”, or less unwieldy, one byte every 8 “instructions”.

It gets even worse if we look at GPUs. Now, GPUs generally look like they have insanely high memory bandwidths. But they also have a lot of compute units and (by CPU standards) extremely small amounts of cache per “thread” (invocation, lane, CUDA core, pick your terminology of choice). Let’s take the (again quite recent as of this writing) NVidia GeForce GTX 1080Ti as an example. It has (as per Wikipedia) a memory bandwidth of 484GB/s, with a stock core clock of about 1.48GHz, for an overall memory bandwidth of about 327 bytes/cycle for the whole GPU. However, this GPU has 28 “Shading Multiprocessors” (roughly comparable to CPU cores) and 3584 “CUDA cores” (SIMT lanes). We get about 11.7 bytes/cycle per SM, so about 4x what the i7-7700K core gets; that sounds good, but each SM drives 128 “CUDA cores”, each corresponding to a thread in the SIMT programming model. Per thread, we get about 0.09 bytes of memory bandwidth per cycle – or perhaps less awkward at this scale, one byte every 11 instructions.

That, in short, is why everything keeps getting more and larger caches, and why even desktop GPUs have quietly started using tile-based rendering approaches (or just announced so openly). Absolute memory bandwidths in consumer devices have gone up by several orders of magnitude from the ~1MB/s of early 80s home computers, but available compute resources have grown much faster still, and the only way to stop bumping into bandwidth limits all the time is to make sure your workloads have reasonable locality of reference so that the caches can do their job.

Final disclaimer: bandwidth is only one part of the equation. Not considered here is memory latency (and that’s a topic for a different post). The good news is absolute DRAM latencies have gone down since the 80s – by a factor of about 4-5 or so. The bad news is that clock rates have increased by about a factor of 3000 since then – oops. CPUs generally hit much lower memory latencies than GPUs (and are designed for low-latency operation in general) whereas GPUs are all about throughput. When CPU code is limited by memory, it is more commonly due to latency than bandwidth issues (running out of independent work to run while waiting for a memory access). GPU kernels have tons of runnable warps at the same time, and are built to schedule something else during the wait; running on GPUs, it’s much easier to run into bandwidth issues.

Rounding up to the nearest integer that’s congruent to k mod N

It’s fairly well-known (among programmers anyway) that say rounding up x to the nearest multiple of 8 can be accomplished using the formula (x + 7) & ~7, and that in general rounding up to the nearest multiple of N (where N is a power of 2) can be accomplished using (x + N - 1) & ~(N - 1). But sometimes you need a slightly generalized version: round up to the nearest value that is congruent to some k \pmod N; for example, this crops up in boundary tag-using memory allocators when the user requests aligned memory. Such allocators put a header before allocated blocks (just before the address returned to the caller). For the user-visible pointer to be aligned by say 32, that header needs to fall at an address that’s off alignment by a specified distance (which brings us to our problem).

It’s not immediately obvious how to adapt the original formula to this case (there is a way; I’ll get to it in a second). Now this is not exactly a frequent problem, nor is there any real need for a clever solution, but it turns out there is a very nice, satisfying solution anyway, and I wanted to write a few words about it. The solution is simply x + ((k - x) & (N - 1)) for power-of-2 N. The basic approach works in principle for arbitrary N, but x + ((k - x) % N) will not work properly in environments using truncated division where taking the modulus of a negative argument can return negative results, which sadly is most of them. That said, in the remainder of this short post I’ll write % N instead of & (N - 1) with an “N needs to be a power of 2” disclaimer anyway, since there’s nothing about the method itself that really requires it. Finally, this expression works fine even in overflowing unsigned integer arithmetic when N is a power of 2, but not for non-power-of-2 N.

What I like about this solution is that, once you see it written down, it’s fairly clear that and why it works (unlike many bit manipulation tricks), provided you know the rules of modular arithmetic: x + ((k - x) \bmod N) \equiv x + (k - x) = k \pmod N. We’re adding a non-negative value to x, so it’s clear that the result is ≥ x (provided there is no overflow). And we’re adding the smallest possible value we can to get to a value that’s congruent to k (mod N); I wrote about similar things before in my post “Intervals in modular arithmetic”.

There’s an equivalent expression for rounding down to the nearest value congruent to k (mod N): x - ((x - k) % N) that works (and is easy to prove) the same way.
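In code, for power-of-2 N (the function names are mine), both variants come out to:

  #include <assert.h>
  #include <stdint.h>

  /* Round x up (down) to the nearest value congruent to k mod n, for
     power-of-2 n. Works fine under unsigned wraparound. */
  static uint32_t round_up_congruent(uint32_t x, uint32_t k, uint32_t n)
  {
      assert(n != 0 && (n & (n - 1)) == 0);    /* n must be a power of 2 */
      return x + ((k - x) & (n - 1));
  }

  static uint32_t round_down_congruent(uint32_t x, uint32_t k, uint32_t n)
  {
      assert(n != 0 && (n & (n - 1)) == 0);
      return x - ((x - k) & (n - 1));
  }

For example, round_up_congruent(100, 24, 32) returns 120: values congruent to 24 mod 32 are 88, 120, 152, and so on, and 120 is the smallest one that is ≥ 100. In the allocator scenario above, with (say) a 16-byte header and a 32-byte alignment request, you would round the candidate header address up to the nearest value congruent to 16 mod 32, so the address right after the header ends up 32-aligned.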

It’s interesting to consider the case k=0. The round-down variant, x - (x % N), feels fairly natural and is something I’ve seen in “real-world” code more than once. The round-up variant, x + (-x % N) is something I’ve never seen anywhere. Once you throw the k in there, it all makes sense, but without it the expression looks quite odd.

Finally, here’s the aforementioned way to adapt the “regular” round-up formula to produce a value that’s congruent to k (instead of 0) mod N (and we’re back to requiring power-of-2 N here): ((x - k + N - 1) & ~(N - 1)) + k. This uses a different trick from the intervals in modular arithmetic paper: shift the origin around. In this case, we don’t have a formula for arbitrary k, but we do have a formula to round up to the nearest multiple of N. So we first subtract k; in this new shifted coordinate system, we want to round up to the next-larger multiple of N, which we know how to do. And finally, we add back k. It gets the job done, but it’s not as pretty as the other variant (in my opinion anyway), and it takes some thinking to convince yourself that it works at all.

How many x86 instructions are there?

It’s surprisingly hard to give a good answer (the question was raised in this article). It depends on how you count, and the details are interesting (to me anyway).

To not leave you hanging: Intel has an official x86 encoder/decoder library called XED. According to Intel’s XED, as of this writing, there are 1503 defined x86 instructions (“iclasses” in XED lingo), from AAA to XTEST (this includes AMD-specific extensions too, by the way). Straightforward, right?

Well, it depends on what you wanted to count. For example, as per XED, ADD and LOCK ADD are different “instruction classes”. Many assembly programmers would consider LOCK a prefix and LOCK ADD an addition with said prefix, not a distinct instruction, but XED disagrees. And in fact, for the purposes of execution, so do current x86s. An atomic add does very different things from a regular add. The prefix thing crops up elsewhere: is say MOVSD (copy a single 32-bit word) a different “instruction class” from REP MOVSD (block copy several 32-bit words)? XED says yes. But it doesn’t handle all prefixes this way in all contexts. For example, the operand-size prefix (0x66) turns integer instructions operating on 32-bit registers into the equivalent instruction operating on their lower 16-bit halves, but unlike with the REP or LOCK prefixes, XED does not count these as separate instruction classes. If you disagree about any of these choices, your count will come out different.

Mnemonics

It all depends on how precisely we define an instruction. Is it something with a distinct mnemonic? Let’s first look at what the article I quoted above says is by far the most common x86 instruction, at 33% of the total sample set: MOV. So let’s look up MOV in the Intel Architecture manuals. And… there are 3 different top-level entries? “MOV—Move”, “MOV—Move to/from Control registers”, “MOV—Move to/from Debug Registers”. The latter are sufficiently distinct from “regular” MOV to rate their own documentation pages, they have completely different instruction encodings (not even in the same encoding block as regular MOV), and they’re privileged instructions, meaning lowly user-mode code isn’t even allowed to execute them. Consequently they’re also extremely rare, and are likely to account for approximately 0% of the test sample. And, sure enough, XED counts them as separate instruction classes (MOV_CR and MOV_DR).

So these instructions may be called MOV, but they’re weird, special snowflakes, and from the processor’s point of view they’re entirely different instructions in a different part of the encoding space and with different rules. Calling them MOV is essentially nothing but syntactic sugar in the official Intel assembly language.

And on the subject of syntactic sugar: some mnemonics are just aliases. For example, SAL (shift arithmetic left) is a long-standing alias for SHL (shift left). Both are just bit shifts; there is no distinction between “arithmetic” and “logical” left shifts like there is between arithmetic and logical right shifts, but the Intel manuals list SAL (with an encoding that happens to be the same as SHL) and all x86 assemblers I’ve ever used accept it. Hilariously, in official Intel syntax, we’re simultaneously miscounting in the other direction, since at least two mnemonics got assigned twice: we already saw the “copy” variant of MOVSD (which has no explicit operands), but there’s also MOVSD as in “move scalar double” (which always has two explicit operands) which is an entirely different instruction (XED calls it MOVSD_XMM to disambiguate, and the same problem happens with CMPSD).

There’s also SSE compares like CMPSD (the two-operand one!) and CMPPS. XED counts these as one instruction each, even though they have an 8-bit immediate constant byte that specifies what type of comparison to perform. Disassemblers usually won’t produce the hard-to-read CMPSD xmm0, xmm1, 2, though; they’ll disassemble that instruction as the pseudo-instruction CMPLESD (compare scalar doubles for less-than-or-equal) instead. So is CMPSD one instruction (just the base opcode with an immediate operand), is it 8 (for the 8 different standard compare modes), or something else?

This is getting messy. AT&T syntax to the rescue? Well, it solves some of our problems but also introduces new ones. For example, AT&T adds suffixes to the mnemonics to distinguish different operation widths. What Intel calls just ADD turns into ADDB (8-bit bytes), ADDW (16-bit “words”), ADDL (32-bit “long words”) and ADDQ (64-bit “quadwords”) in x86-64 AT&T syntax. Do we count these as separate? As per Intel syntax, no. As per XED instruction classes, also no. But maybe we consider these distinct enough to count separately after all? Or maybe we decide that if our definition depends on the choice of assembly syntax, of which there are several, then maybe it’s not a very natural one. What does the machine do?

Instruction bytes

Note I haven’t specified what part of the machine yet. This is thorny too. We’ll get there in a bit.

But first, instruction bytes. Let’s look at the aforementioned manual entry for real now: “MOV—Move”. If you check that page out in the current Intel Architecture Software Developer’s Manual, you’ll find it lists no less than thirty-four encodings (not all of them distinct; I’ll get to that). Some of these are more special, privileged operations with special encodings (namely, moves to and from segment registers). This time, XED doesn’t seem to consider segment register loads and stores to be special and lumps them into plain old MOV, but I consider them distinct, and the machine considers them distinct enough to give them a special opcode byte in the encoding that’s not used for anything else, so let’s call those distinct.

That leaves us with 30 “regular” moves. Which are… somewhat irregular: 10 of them are doing their own thing and involve moves between memory and different parts of the RAX (in 64-bit mode) register, all with a special absolute addressing mode (“moffs”) that shows up in these instructions and, to my knowledge, nowhere else. These instructions exist, and again, pretty much nothing uses them. They were useful on occasion in 16-bit mode but not anymore.

This specialness of the accumulator register is a recurring theme in x86. “op (AL/AX/EAX/RAX), something” has its own encoding (usually smaller) and various quirks for a lot of the instructions that go back to the 8086 days. So even though an assembly programmer might consider say TEST ebx, 128 and TEST eax, 128 the same instruction (and the XED instruction class list agrees here!), these have different opcodes and different sizes. So a lot of things that look the same in an assembly listing are actually distinct for this fairly random reason. Keep that in mind. But back to our MOV!

The remaining 20 listed MOV variants fall into four distinct categories, each of which has 5 entries. These four categories are:

  • “Load-ish” – move from memory or another same-sized register to an 8/16/32/64-bit register.
  • “Store-ish” – move from an 8/16/32/64-bit register to either another register of the same size, or memory.
  • “Load-immediate-ish” – load an integer constant into an 8/16/32/64-bit register.
  • “Store-immediate-ish” – store an integer constant to either an 8/16/32/64-bit memory location, or a register.

All processors have some equivalent of the first three (the “store immediate” exists in some CPU architectures, but there are also many that don’t have it). Load/store architectures generally have explicit load and store instructions (hence the name), and everyone has some way to load immediates (large immediate constants often require multiple instructions, but not on x86) and to move the content of one register to another. (Though the latter is not always a dedicated instruction.) So other than the fact that our “load-ish” and “store-ish” instructions also support “storing to” and “loading from” a register (in particular, there are two distinct ways to encode register-register MOVs), this is not that remarkable. It does explain why MOVs are so common in x86 code: “load”, “store” and “load immediate” in particular are all very common instructions, and MOV subsumes all of them, so of course you see plenty of them.

Anyway, we have four operand sizes, and four categories. So why are there five listed encodings per category? Okay, so this is a bit awkward. x86-64 has 16 general-purpose registers. You can access them as 16 full 64-bit registers. For all 16 registers, you can read from (or write to) their low 32-bit halves. Writing to the low 32-bit half zero-extends (i.e. it sets the high half to zero). For all 16 registers, you can read from (or write to) their low 16-bit quarter. Writing to the low 16-bit quarter of a register does not zero-extend; the remaining bits of the register are preserved, because that’s what 32-bit code used to do and AMD decided to preserve that behavior when they specced 64-bit x86 for some reason. And for all 16 registers, you can read from (or write to) their low 8-bit eighth (the lowest byte). Writing the low byte again preserves all the higher bytes, because that’s what 32-bit mode did. With me so far? Great. Because now is when it gets weird. In 16-bit and 32-bit mode, you can also access bits 8 through 15 of the A, B, C and D registers as AH, BH, CH and DH. And x86-64 mode still lets you do that! But due to a quirk of the encoding, that works only if there’s no REX prefix (which is the prefix that is used to extend the addressable register count from 8 to 16) on the instruction.

So x86-64 actually has a total of 20 addressable 8-bit registers, in 3 disjoint sets: AL through DL, which can be used in any encoding. AH through DH, which can only be accessed if no REX prefix is present on the instruction. And the low 8 bits of the remaining 12 registers, which can only be accessed if a REX prefix is present.

This quirk is why Intel lists all 8-bit variants twice: once without REX and once with REX, because they can access slightly different parts of the register space! Alright, but surely, other than that, we must have 4 different opcodes, right? One each for move byte, word, doubleword, quadword?

Nope. Of course not. In fact, in each of these categories, there are two different opcode bytes: one used for 8-bit accesses, and one for “larger than 8-bit”. This dates back to the 8086, which was a 16-bit machine: “8-bit” and “16-bit” was all the distinction needed. Then the 386 came along and needed a way to encode 32-bit destinations, and we got the already mentioned operand size prefix byte. In 32-bit mode (handwaving here, the details are a bit more complicated), the instructions that used to mean 16-bit now default to 32-bit, and getting actual 16-bit instructions requires an operand size prefix. And I already mentioned that 64-bit mode added its own set of prefixes (REX), and this REX prefix is used to upgrade the now default-32-bit “word” instructions to 64-bit width.

So even though Intel lists 5 different encodings of the instructions in each group, all of which have somewhat different semantics, there’s only 2 opcodes each associated to them: “8-bit” or “not 8-bit”. The rest is handled via prefix bytes. And as we (now) know, there’s lots of different types of MOVs that do very different things, all of which fall under the same XED “instruction class”.
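To make that concrete, here’s the same register-register move at all four operand widths. The opcode byte only distinguishes “8-bit” (0x88) from “not 8-bit” (0x89); everything else is prefixes. (These are the byte sequences for the standard MOV r/m, r encoding; the arrays are only there to make this a compilable snippet.)

  /* The same MOV at four widths; only the prefixes differ. */
  const unsigned char mov_al_bl[]   = { 0x88, 0xD8 };        /* mov al,  bl                               */
  const unsigned char mov_ax_bx[]   = { 0x66, 0x89, 0xD8 };  /* mov ax,  bx  (66 = operand-size prefix)   */
  const unsigned char mov_eax_ebx[] = { 0x89, 0xD8 };        /* mov eax, ebx (the default in 64-bit mode)  */
  const unsigned char mov_rax_rbx[] = { 0x48, 0x89, 0xD8 };  /* mov rax, rbx (48 = REX.W prefix)          */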

Maybe instruction classes is the wrong metric to use? XED has another, finer-grained thing called “iforms” that considers the different subtypes of instructions separately. For example, for the just-discussed MOV, we get this list:

  XED_IFORM_MOV_AL_MEMb=804, 
  XED_IFORM_MOV_GPR8_GPR8_88=805, 
  XED_IFORM_MOV_GPR8_GPR8_8A=806, 
  XED_IFORM_MOV_GPR8_IMMb_C6r0=807, 
  XED_IFORM_MOV_GPR8_IMMb_D0=808, 
  XED_IFORM_MOV_GPR8_MEMb=809, 
  XED_IFORM_MOV_GPRv_GPRv_89=810, 
  XED_IFORM_MOV_GPRv_GPRv_8B=811, 
  XED_IFORM_MOV_GPRv_IMMv=812, 
  XED_IFORM_MOV_GPRv_IMMz=813, 
  XED_IFORM_MOV_GPRv_MEMv=814, 
  XED_IFORM_MOV_GPRv_SEG=815, 
  XED_IFORM_MOV_MEMb_AL=816, 
  XED_IFORM_MOV_MEMb_GPR8=817, 
  XED_IFORM_MOV_MEMb_IMMb=818, 
  XED_IFORM_MOV_MEMv_GPRv=819, 
  XED_IFORM_MOV_MEMv_IMMz=820, 
  XED_IFORM_MOV_MEMv_OrAX=821, 
  XED_IFORM_MOV_MEMw_SEG=822, 
  XED_IFORM_MOV_OrAX_MEMv=823, 
  XED_IFORM_MOV_SEG_GPR16=824, 
  XED_IFORM_MOV_SEG_MEMw=825, 

As you can see, that list basically matches the way the instruction encoding works, where 8-bit anything is considered a separate instruction, but size overrides by way of prefixes are not. So that’s basically the rule for XED iforms: if it’s a separate instruction (or a separate encoding), it gets a new iform. But just modifying the size of an existing instruction (for example, widening MMX instructions to SSE, or changing the size of a MOV via prefix bytes) doesn’t.

So how many x86 instructions are there if we count distinct iforms as distinct? Turns out, an even 6000. Is that all of them? No. There are some undocumented instructions that XED doesn’t include (in addition to the several formerly undocumented instructions that Intel at some point just decided to make official). If you look at the Intel manuals, you’ll find the curious “UD2”, the defined “Undefined instruction” which is architecturally guaranteed to produce an “invalid opcode” exception. As the name suggests, it’s not the first of its kind. Its older colleague “UD1” half-exists, but not officially so, since the semantics of UD1 are exactly the same as if it had never been defined to begin with. Does a non-instruction that is non-defined and unofficially guaranteed to non-execute exactly as if it had never been in the instruction set to begin with count as an x86 instruction? For that matter, does UD2 itself, the defined undefined instruction, count as an instruction?

Instruction decoders

But back to those iforms: 6000 instructions, huh? And these must all be handled in the decoder? That must be terrible.

Well, no. Not really. I mean, it’s not pleasant, but it’s not the end of the world.

First off, let’s talk about how x86 is decoded in the first place: all x86 CPUs you’re likely to interact with can decode (and execute) multiple instructions per cycle. Think about what that means: we have an (aggressively!) variable-length encoding, and we’re continually fetching instructions. These chips can decode (given the right code) 4 instructions per clock cycle. How does that work? They’re variable-length! We may know where the first instruction we’re looking at in this cycle starts, but how does the CPU know where to start decoding the second, third, and fourth instructions? That’s straightforward when your instructions are fixed-size, but for x86 they are most certainly not. And we do need to decide this quickly (within a single cycle), because if we take longer, we don’t know where the last instruction in our current “bundle” ends, and we don’t know where to resume decoding in the next cycle!

You do not have enough time in a 4GHz clock cycle (all 0.25ns of it) to fully decode 4 x86 instructions. For that matter, you don’t even have close to enough time to “fully decode” (what exactly that means is fuzzy, and I won’t try to make it precise here) one. Two basic ways to proceed: the first is simply, don’t do that! Try to avoid it at all cost. Keep extra predecoding information (such as marking the locations where instructions start) in your instruction cache, or keep a separate decoded cache altogether, like Intel’s uOp caches. This works, but it doesn’t help you the first time round when you’re running code that isn’t currently cached.

Which brings us to option two: deal with it. And the way to do it is pretty much brute force. Keep a queue of upcoming instruction bytes (this ties in with branch target prediction and other things). As long as there’s enough space in there, you just keep fetching another 16 (or whatever) instruction bytes and throw them into the queue.

Then, for every single byte position in that queue, you pretend that an x86 instruction starts at that byte, and determine how long it is. Just the length. No need to know what the instruction is. No need to know what the operands are, or where the bytes denoting these operands are stored, or whether it’s an invalid encoding, or if it’s a privileged instruction that we’re not allowed to execute. None of that matters at this stage. We just want to know “supposing that this is a valid instruction, what is its length?”. But if we add 16 bytes per cycle to the queue, we need 16 of these predecoders in parallel to make sure that we keep up and get an instruction length for every single possible starting location. We can pipeline these predecoders over multiple cycles if necessary; we just keep fetching ahead.

Once our queue is sufficiently full and we know that size estimate for every single location in it, then we decide where the instruction boundaries are. That’s the stage that keeps track. It grabs 16 queue entries (or whatever) starting at the location for the current instruction, and then it just needs to “switch through”. “First instruction says size starting from there is 5 bytes, okay; that means second instruction is at byte 5, and the queue entry says that one’s 3 bytes; okay, third instruction starts at byte 8, 6 bytes”. No computation in that stage, just “table lookups” in the small size table we just spent a few cycles computing.
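In software terms, the bookkeeping in that last stage amounts to something like the following (a caricature with made-up names, of course – the hardware does this with muxes and priority logic, not a loop):

  /* Given a predecoded length for every byte position in the queue, walk the
     instruction boundaries starting at 'start'. Purely illustrative. */
  static int mark_boundaries(const unsigned char *length_at, int queue_len,
                             int start, int *starts_out, int max_insns)
  {
      int count = 0, pos = start;
      while (count < max_insns && pos < queue_len) {
          starts_out[count++] = pos;       /* an instruction begins here...            */
          pos += length_at[pos];           /* ...and its length says where the next one starts */
      }
      return count;                        /* number of instruction starts found */
  }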

That’s one way to do it. As said, very much brute force, but it works. However, if you need 16 predecoders (as you do to sustain a fetch rate of 16 bytes/cycle), then you really want these to be as dumb and simple as you can possibly get away with. These things most certainly don’t care about 6000 different iforms. They just squint at the instruction just enough to figure out the size, and leave the rest for later.

Luckily, if you look at the actual opcode map, you’ll see that this is not all that bad. There’s large groups of opcodes that all have basically the same size and operands, just with different operations – which we don’t care about at this stage at all.

And this kind of pattern exists pretty much everywhere. For example, look at that conspicuous, regular block of integer ALU instructions near the top of the opcode map. These all look (and work) pretty similar to the CPU. Most of them have essentially the same encodings (except for a few opcode bits that are different) and the same operand patterns. In fact, the decoder really doesn’t care whether it’s an OR, an ADD, a CMP, or an XOR. To an assembly-language programmer, a compiler, or a disassembler, these are very different instructions. To the CPU instruction decoder, these are all pretty much the same instruction: “ALU something-or-other mumble-mumble don’t care”. Which one of these gets performed will only be decided way later (and probably only after that operation makes it to the ALU itself). What the decoder cares about is whether it’s an ALU instruction with an immediate operand, or if it has a memory operand, and what that memory operand looks like. And the instructions are conveniently organized in groups where the answers to these questions are always the same. With plenty of exceptions of course, because this is still x86, but evidently it can be made to work.

Further down the pipe

Instructions really don’t get decoded all at once, in one big “switch statement”, and after that they go to disjoint parts of the chip never to meet again. That’s not how these things are built. There’s plenty of similarity between different instructions, and the “understanding” of what an instruction does is distributed, not centralized.

For example, for the purposes of most of the instruction decoder, the SSE2 instructions ADDPS, SUBPS, MULSD and DIVPD are all pretty much the same thing. They’re FP ALU instructions, they accept the same types of operands, all of which are in the same place.

Some of these instructions are so similar that they’re almost certain to never fully get “decoded”. For example, for IEEE floats, a subtraction is literally just an addition where the sign bit of the second operand is flipped. If you look at the opcode table, the difference between the encoding for ADDPS and SUBPS is precisely one flipped bit: that bit is clear for ADDPS and set for SUBPS. Literally all you need to do to support both instructions is to decode them the same, make sure to grab that one bit from the instruction, and then feed it (along with the original second operand sign bit) into a single XOR gate in front of the FP adder. That’s it. You now support both floating point addition and subtraction.
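Here’s the scalar version of that trick spelled out in C, just to show the bit-level identity; this is obviously nothing like how the hardware implements it:

  #include <stdint.h>
  #include <string.h>

  /* IEEE float subtraction is addition with the second operand's sign flipped:
     a - b behaves like a + (-b). */
  static float flip_sign(float x)
  {
      uint32_t bits;
      memcpy(&bits, &x, sizeof bits);      /* safe type pun */
      bits ^= 0x80000000u;                 /* XOR the sign bit */
      memcpy(&x, &bits, sizeof bits);
      return x;
  }

  float sub_via_add(float a, float b)
  {
      return a + flip_sign(b);             /* == a - b */
  }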

Some of these differences matter more. For example, FP multiplies go to a different functional unit than FP adds, and they have a different latency. So the data needs a different routing, and the latency for say an add and a multiply is different, which the schedulers care about. If there’s a memory load involved, then the load unit needs to know what size of access, and what part of the operand bypass network to send the results to (integer, float/SIMD?). So there’s a bunch of control signals computed eventually that express the differences between all these instructions. But a lot of it happens really late. There’s certainly no big monolithic 6000-case “switch statement” anywhere.

And then there are further differences still. For example, MOV elimination. Many x86s can in many cases avoid real execution of register-register MOVs altogether. They just resolve it as part of their register renaming. Likewise, zeroing a register by XORing it with itself (something the author of the original article I linked to mentions) gets resolved by renaming that register to point to a hard-wired zero register and likewise doesn’t actually take any execution resources (even though it still needs to decode).

Where does that fit in our taxonomy? MOV rax, rbx will most often take 0 cycles, but sometimes take 1 cycle due to various reasons. Does the 0-cycle version, when it happens, count as a special instruction? Is XOR rax, rax (which goes down the magic implicit zeroing path and takes 0 cycles to execute) a different instruction from XOR rax, rcx which is encoded essentially the same way? These two instructions differ by exactly 1 bit in both the assembly-language source file and the assembled object code, yet execute in drastically different ways and with different latencies. Should that make them a separate instruction or not? The most useful answer really depends on what part of the pipeline you’re interested in. If you’re designing a CPU core, they pretty much are separate instructions. If you’re writing a disassembler, they are not.

In conclusion…

So, is there a point to all this? I wrote it because I think it’s fun, but is there something to learn here?

I think so. It makes a wonderful example of a general phenomenon I’ve encountered in a lot of different situations: questions to which a ballpark answer is fairly easy to give, but that keep getting gnarlier the more you try to pin them down. It’s essentially an instance of the “coastline paradox”: the closer you look, the more detail you see, and the more the answer changes.

Suppose I ask you “where am I?”, and I’m okay with getting an answer that’s within about 10 meters or so. If you have a handheld GPS unit, you can just hand it to me, and if I look at the display I’ll get an answer. If I ask “where am I, down to the millimeter?”, things get a lot more complicated. Specifying the position of a person down to a meter or so makes sense, but specifying it down to a millimeter does not. Position of what exactly? My center of gravity? The position of my center of gravity, projected onto the ground? The position of the tip of my nose? The center point of the hangnail on my left pinky? You can’t answer that question precisely when the uncertainty inherent in the question is so much larger than the level of precision you’re aiming at.

And by the way, I used x86 as an example here, but don’t believe for a second the same thing doesn’t apply to, say, the ARM chip in your phone. Modern ARM chips support multiple encodings and also come in at well over 1000 instructions if you count them at the same level of granularity as XED’s “iforms”. In fact it’s pretty easy to get high instruction counts on any architecture as soon as any kind of vector/SIMD instruction set is involved, since most of them basically come in the form of “instantiate these 40 instructions for 10 different data types” (with lots of special magic that is either typeless or only works on certain types, of course). And yeah, x86 has plenty of historical warts in its encoding, but so does ARM – many of them on display in the current generation of chips, where chip makers have the pleasurable task of designing 3 distinct instruction decoders: old-school 32-bit ARM or “A32”, the more compact but variable-size Thumb-2 or “T32”, and the fixed-size-again 64-bit “A64” encoding.