Fast blurs 2

August 1, 2012

This post is part of the series “Debris: Opening the box”.

At the end of the last post, I described the basic “moving average” implementation for box filters and how to build a simple blur out of it. I also noted that the approach given only offers very coarse control of the blur radius, and that we’d like to have something better than box filters. So let’s fix both of these issues.

Subpixel resolution

In the previous article, I used ‘r’ to denote what is effectively the “radius” of a box filter, and our filters always had an odd number of taps 2r+1. So r=0 has 1 tap (which is 1, so this is just the identity – no blur), r=1 has 3 taps, r=2 has 5 taps and so forth. But how do we get intermediate stages, say r=0.1 or r=1.5?

Well, Jim Blinn once said that “All problems in computer graphics can be solved with a matrix inversion”. But here’s the thing: Jim Blinn lied to you. Disregard Jim Blinn. Because at least 50% of all problems in computer graphics can in fact be solved using linear interpolation without any matrices whatsoever. And this is one of them. We know how to handle r=0, and we know how to handle r=1. Want something in the middle? Well, just linearly fade in another 1 and then normalize the whole thing so the weights still sum to 1. To implement this, let’s split our radius into an integer and fractional component first:

r = m + \alpha \quad \textrm{where}\ m \in \mathbb{Z}, 0 \le \alpha < 1

Then our blur kernel looks like this:

\frac{1}{2r + 1} \begin{bmatrix}\alpha & 1 & 1 & \cdots & 1 & 1 & \alpha \end{bmatrix}

with exactly 2m+1 ones in the middle. This is still cheap to implement – for example, we could just treat the whole thing as a box filter with r=m, and then add the two samples at the ends with weight α after the fact (and then apply our usual normalization). This is perfectly reasonable and brings us up to four samples per pixel instead of two, but it's still a constant cost independent of the length of the filter, so we're golden.
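
In code, that first variant might look something like this (a quick sketch with my own names; m, alpha, scale and n are as in the pseudocode further down, and boundary handling is ignored here just like there): keep the ordinary running sum for the r=m box and add the two fractional end taps per output pixel.

  // Sketch of the "integer box plus two weighted end taps" variant.
  // sum_box holds the plain moving-average sum over x[i-m..i+m].
  float sum_box = 0.0f;
  for (int j = -m; j <= m; j++)
    sum_box += x[j];

  for (int i = 0; i < n; i++) {
    // Two extra reads per pixel for the fractional end taps.
    y[i] = (sum_box + alpha * (x[i-m-1] + x[i+m+1])) * scale;
    // Standard moving-average update for the integer-radius box.
    sum_box += x[i+m+1] - x[i-m];
  }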

However there's a slightly different equivalent formulation that fits in a bit better with the spirit of the filter and is very suitable when texture filtering hardware is available: consider what happens when we move one pixel to the right. Let's look at our five-tap case again (without normalization to keep things simple) and take another look at the differences between adjacent output samples:

y(n) = \alpha \, x(n-2) + x(n-1) + x(n) + x(n+1) + \alpha \, x(n+2)
y(n+1) = \alpha \, x(n-1) + x(n) + x(n+1) + x(n+2) + \alpha \, x(n+3)

y(n+1) - y(n) = ((1-\alpha) x(n+2) + \alpha \, x(n+3)) - ((1-\alpha) x(n-1) + \alpha \, x(n-2))

On the right side, we add a new pixel at the right end with weight α, and we need to "upgrade" our old rightmost pixel to weight 1 by adding it in again weighted with (1-α); we do the same on the left side to subtract out the old samples. Interestingly, if you look at what we're actually adding and subtracting, it's just linear interpolation between two adjacent samples – i.e. linear filtering (the 1D version of bilinear filtering), the kind of thing that texture sampling hardware is great at!

Time to update our pseudocode (I'm writing this using floats for convenience; in practice you might want to use fixed point):

  // Convolve x with a box filter of "radius" r, ignoring boundary
  // conditions.
  float scale = 1.0f / (2.0f * r + 1.0f); // or use fixed point
  int m = (int) r; // integer part of radius
  float alpha = r - m; // fractional part

  // Compute sum at first pixel
  float sum = x[0];
  for (int i=1; i <= m; i++)
    sum += x[-i] + x[i];
  sum += alpha * (x[-m-1] + x[m+1]);

  // Generate output pixel, then update running sum for next pixel.
  // lerp(t, a, b) = (1-t)*a + t*b = a + t * (b-a)
  for (int i=0; i < n; i++) {
    y[i] = sum * scale;
    sum += lerp(alpha, x[i+m+1], x[i+m+2]);
    sum -= lerp(alpha, x[i-m], x[i-m-1]);
  }
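
For reference, here is a complete, self-contained version of the same routine (my sketch, not the original code): identical arithmetic, but with clamp-to-edge boundary handling so it can actually be run as-is on a full scan line.

  // Self-contained 1D blur with clamp-to-edge boundaries (an assumption;
  // the pseudocode above deliberately ignores boundary conditions).
  static inline float tap(const float *x, int i, int n) {
    if (i < 0) i = 0;
    if (i > n - 1) i = n - 1;
    return x[i];
  }

  void blur_scanline_1d(float *y, const float *x, int n, float r) {
    float scale = 1.0f / (2.0f * r + 1.0f);
    int m = (int) r;      // integer part of radius
    float alpha = r - m;  // fractional part

    // Sum for the first output pixel: weight 1 on x[-m..m], alpha at +-(m+1).
    float sum = tap(x, 0, n);
    for (int i = 1; i <= m; i++)
      sum += tap(x, -i, n) + tap(x, i, n);
    sum += alpha * (tap(x, -m - 1, n) + tap(x, m + 1, n));

    for (int i = 0; i < n; i++) {
      y[i] = sum * scale;
      // lerp(t, a, b) = (1-t)*a + t*b, written out explicitly
      sum += (1.0f - alpha) * tap(x, i + m + 1, n) + alpha * tap(x, i + m + 2, n);
      sum -= (1.0f - alpha) * tap(x, i - m, n)     + alpha * tap(x, i - m - 1, n);
    }
  }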

I think this is really pretty slick – especially when implemented on the GPU, where you can combine the linear interpolation and sampling steps into a single bilinear sample, for an inner loop with two texture samples and a tiny bit of arithmetic. However nice this may be, though, we’re still stuck with box filters – let’s fix that one too.

Unboxing

Okay, we know we can do box filters fast, but they’re crappy, and we also know how to do arbitrary filters, but that’s much slower. However, we can build better filters by using our box filters as a building block. In particular, we can apply our box filters multiple times. Now, to give you an understanding of what’s going on, I’m going to switch to the continuous domain for a bit, because that gets rid of a few discretization artifacts like the in-between box filters introduced above (which make analysis trickier), and it’s also nicer for drawing plots. As in the introduction, I’m not going to give you a proper definition or theory of convolution (neither discrete nor continuous); as usual, you can stop by Wikipedia if you’re interested in details. What we need to know here are just two things: first, there is a continuous version of the discrete convolution we’ve been using so far that behaves essentially the same way (except now, we perform integration instead of the finite summation we had before), and second, convolution is associative, i.e. (f * g) * h = f * (g * h). In our case, we’re gonna convolve our image I with a box filter b multiple times, and for this case we get something like b * b * I = (b * b) * I. Point being, applying a box filter p times is equivalent to filtering the image once with a filter that’s a box convolved p-1 times with itself.

In other words, using just our single lowly box filter, we can build fancier filters just by applying it multiple times. So what filters do we get? Let’s plot the first few iterations, starting from a single box filter with radius 1. Note that in the continuous realm, there really is only one box filter – because we’re not on a discrete grid, we can generate other widths just by scaling the “time” value we pass into our filter function. This is one of the ways that looking at continuous filters makes our life easier here. Anyway, here’s the pretty pictures:

Box filter (p=1)


Triangle filter (p=2)


Piecewise quadratic filter (p=3)


Piecewise cubic filter (p=4)

These all show the filter in question in red, plotted over a Gaussian with matching variance (\sigma^2 = p/12) and scale in blue (ahem, not to suggest any conclusions or anything!). Triangle filters also have a name (which is just as inspired as “box filter”, natch); the other ones I’ve labeled simply by what type of function they correspond to. Anyway, the point to take away here is that convolving a box with itself quickly leads to pretty smooth functions (the Cardinal B-splines, in fact) that happen to get quite similar to a Gaussian very fast. In fact, as we keep increasing p (the order), they will converge towards a “real” Gaussian – proof here for the curious. (You can also stay with the discrete version and prove this using the Central Limit Theorem, but I like this version more.) The piecewise cubic function you get for p=4 above is already within 3% (absolute) error of the corresponding Gaussian. In practice, an exact match with a Gaussian isn’t that important anyway as long as you’re just blurring images, and just using p=3 is normally fine.
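
To make the discrete version concrete (a worked example of my own, not from the original post): convolving the 3-tap box [1 1 1] with itself gives the triangle [1 2 3 2 1], and one more pass gives [1 3 6 7 6 3 1], already a pretty smooth bump. The matching variance bookkeeping for plain integer-radius boxes (variances add under convolution; the paper referenced in the next paragraph also covers the fractional-width case) is

\sigma^2 = p \cdot \frac{L^2 - 1}{12} = p \cdot \frac{r^2 + r}{3} \quad \textrm{where}\ L = 2r + 1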

So, to get better filters, just blur multiple times. Easy! So easy in fact that I’m not gonna bore you with another piece of pseudocode – I trust you can imagine a “do this p times” outer loop on your own. Now, to be fair, the convolutions I showed you were done in the continuous domain, and none of them actually prove anything about the modified box filters we’re using. In their defense, we can at least argue that our modified box filter definition is reasonable in that it’s the discretization of the continuous box filter with radius r to a pixel grid, where the pixels themselves have a rectangular shape (this is a fairly good model of current displays). But if we actually wanted to prove consistency and convergence using our filters, we’d need to prove that our discrete version is a good approximation of the continuous version. I’ll skip that here since it gets technical and doesn’t really add anything if you just want fast blurs, but if you want to see proof (and also get an exact expression for the variance of our iterated modified box filters), this paper has the answers you seek.

Implementation notes

And that’s really all you need to know to get nice, fast quasi-Gaussian blurs, no matter how wide the kernel. For a GPU implementation, you should be able to more or less directly take my pseudo-code above and turn it into a Compute Shader: for the horizontal blur passes, have each thread in a group work on a different scan line (and for vertical passes, have each thread work on a different column). To get higher orders, you just keep iterating the passes, ping-ponging between destination render targets (or UAVs if you want to stick with DX11 terminology). Important caveat: I haven’t actually tried this (yet) and I don’t know how well it performs; you might need to whack the code a bit to actually get good performance. As usual with GPUs, once you have the basic algorithm you’re at best half-done; the rest is figuring out the right way to package and partition the data.

I can, however, tell you how to implement this nicely on CPUs. Computation-wise, we’ll stick exactly with the algorithm above, but we can eke out big wins by doing things in the right order to maximize cache hit ratios.

Let’s look at the horizontal passes first. These nicely walk the image left to right and access memory sequentially. Very nice locality of reference, very good for memory prefetchers. There’s not much you can do wrong here – however, one important optimization is to perform the multiple filter passes per scanline (or per group of scanlines) and not filter the whole image multiple times. That is, you want your code to do this:

for (int y=0; y < height; y++) {
  for (int pass=0; pass < num_passes; pass++) {
    blur_scanline(y, radius);
  }
}

and not

// Do *NOT* do it like this!
for (int pass=0; pass < num_passes; pass++) {
  for (int y=0; y < height; y++) {
    blur_scanline(y, radius);
  }
}

This is because a single scan line might fit inside your L1 data cache, and very likely fits fully inside your L2 cache. Unless your image is small, the whole image won’t. So in the second version, by the point you get back to y=0 for the second pass, all of the image data has likely dropped out of the L1 and L2 caches and needs to be fetched again from memory (or the L3 cache if you’re lucky). That’s a completely unnecessary waste of time and memory bandwidth – so avoid this.

Second, vertical passes. If you implement them naively, they will likely be significantly slower (sometimes by an order of magnitude!) than the horizontal passes. The problem is that stepping through images vertically is not making good use of the cache: a cache line these days is usually 64 bytes (even 128 on some platforms), while a single pixel usually takes somewhere between 4 and 16 bytes (depending on the pixel format). So we keep fetching cache lines from memory (or lower cache levels) and only looking at between 1/16th and 1/4th of the data we get back! And since the cache operates at cache line granularity, that means we get an effective cache size of somewhere between a quarter and a sixteenth of the actual cache size (this goes both for L1 and L2). Ouch! And same as above with the multiple passes, if we do this, then it becomes a lot more likely that by the time we come back to any given cache line (once we’re finished with our current column of the image and proceeded to the next one), a lot of the image has dropped out of the closest cache levels.

There’s multiple ways to solve this problem. You can write your vertical blur pass so that it always works on multiple columns at a time – enough to always be reading (and writing) full cache lines. This works, but it means that the horizontal and vertical blur code are really two completely different loops, and on most architectures you’re likely to run out of registers when doing this, which also makes things slower.

A better approach (in my opinion anyway) is to have only one blur loop – let’s say the horizontal one. Now, to do a vertical blur pass, we first read a narrow stripe of N columns and write it out, transposed (i.e. rows and columns interchanged), into a scratch buffer. N is chosen so that we’re consuming full cache lines at a time. Then we run our multiple horizontal blur passes on the N scan lines in our scratch buffer, and finally transpose again as we’re writing out the stripe to the image. The upshot is that we only have one blur loop that gets used for both directions (and only has to be optimized once); the complexity gets shifted into the read and write loops, which are glorified copy loops (okay, with a transpose inside them) that are much easier to get right. And the scratch buffer is small enough to stay in the cache (maybe not L1 for large images, but probably L2) all the time.

However, we still have two separate copy loops, and now there’s this weird asymmetry between horizontal and vertical blurs. Can’t we do better? Let’s look at the sequence of steps that we effectively perform:

  1. Horizontal blur pass.
  2. Transpose.
  3. Horizontal blur pass.
  4. Transpose.

Currently, our horizontal blur does step 1, and our vertical blur does steps 2-4. But as should be obvious when you write it this way, it’s actually much more natural to write a function that does steps 1 and 2 (which are the same as steps 3 and 4, just potentially using a different blur radius). So instead of writing one copy loop that transposes from the input to a scratch buffer, and one copy loop that transposes from a scratch buffer to the output, just have the horizontal blur always write to the scratch buffer, and only ever use the second type of copy loop (scratch buffer to output with transpose).
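
Here's a minimal sketch of what such a combined pass might look like (my code and names, not the Werkkzeug3 implementation: single-channel float pixels, one scan line at a time instead of a full cache-line-sized stripe, and blur_scanline_1d standing in for the 1D moving-average routine sketched earlier). Note, as comes up in the comments below, that this variant needs a separate destination buffer; it can't work in place.

#include <stdlib.h>
#include <string.h>

// Assumed 1D moving-average blur: reads n pixels from src, writes n to dst.
void blur_scanline_1d(float *dst, const float *src, int n, float r);

// One combined pass: blur every scan line of a w x h image num_passes times,
// then write the result transposed into dst (which is h wide and w tall).
void blur_and_transpose(float *dst, const float *src, int w, int h,
                        float r, int num_passes)
{
    float *a = (float *) malloc(sizeof(float) * w);
    float *b = (float *) malloc(sizeof(float) * w);

    for (int y = 0; y < h; y++) {
        memcpy(a, src + y * w, sizeof(float) * w);

        // Run all blur passes on this scan line while it's hot in the cache.
        for (int pass = 0; pass < num_passes; pass++) {
            blur_scanline_1d(b, a, w, r);
            float *t = a; a = b; b = t; // blurred result ends up back in a
        }

        // Copy loop with transpose: row y of the source becomes column y
        // of the destination.
        for (int x = 0; x < w; x++)
            dst[x * h + y] = a[x];
    }

    free(a);
    free(b);
}

Calling this twice, first on the w x h source and then on the h x w intermediate, blurs both directions and leaves the image in its original orientation.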

And that’s it! For reference, a version of the Werkkzeug3 Blur written using SSE2 intrinsics (and adapted from an earlier MMX-only version) is here – the function is GenBitmap::Blur. This one uses fixed-point arithmetic throughout, which gets a bit awkward in places, because we have to work on 32-bit numbers using the MMX set of arithmetic operations which doesn’t include any 32-bit multiplies. It also performs blurring and transpose in two separate steps, making a pass over the full image each time, for no good reason whatsoever. :) If there’s interest, I can write a more detailed explanation of exactly how that code works in a separate post; for now, I think I’ve covered enough material. Take care, and remember to blur responsibly!


29 Comments
  1. Aren’t you totally changing the transposition behavior when you go to combined blur-transpose? It seems like you suddenly now need a second buffer.

    That is, in the original, step 2 is: transpose a narrow, tall column band to a wide, short row band. Step 4 is: transpose a wide, short row band to a narrow, tall column band. Under this scheme, you only need the scratch buffer, because you’re doing a loop over scratch-buffer-sized objects from step 2-4.

    In the sequel, it appears you’re proposing to always loop over scratch-buffer-sized chunks for 1-2 and 3-4, and 2 and 4 are both ‘transpose a wide, short row band to a narrow, tall column’. But that means that the first step 2 needs to write a narrow tall column band of the image, despite having only processed a wide, short row band, which means it can’t write it back to the original buffer.

    • Yeah, I wrote the last part a bit sloppily, sorry for that. Bundling horizontal blur and transpose-in-output does indeed cost you the ability to do the processing in place, and if you’re memory-constrained, you might want to opt for one of the other variants.

  2. Also, I’ve always wondered whether for the non-subpixel multipass blurs it would be worth doing it all in a single pass.

    That is, a single step of a single blur reads two samples (one at each end) to update its accumulator and compute the blurred value at a single pixel.

    If we consider the second step of a two-pass blur, those two samples it’s reading are themselves the result of a single blur pass. But it’s reading through them in a consistent streaming order. So instead of computing that as a separate blur pass, we could compute those values on the fly. Each of the two samples would be computed by reading two samples from the underlying image, so it would require 4 memory reads and three blur calculations (actually two of the memory operations read the same address). The main question is how it behaves cachewise. (It also maybe complicates the startup and shutdown phases if you’re not reading from a clamped texture so you’re trying to optimize the clamping out of the loop.)

    Three passes naively require reading 8 samples, but 4 in practice. But it does require seven blur calculations (versus three if multipass).

    Things explode exponentially fast with subpixel precision, since each pass needs to read four samples.

    • I kept the articles almost completely DSP-free since everything in there has a nice, obvious interpretation anyway, and I didn’t want to complicate the exposition unnecessarily. But if you want to explore “fused multi-pass” variants, you’re better off just using the Z-transform (Generating Functions) and looking at the ways you can factor the transfer function.

      Our basic box filter is H(z) = z^(-n) + z^(-n+1) + ... + z^(n-1) + z^n (plus normalization, but that’s all linear so ignore this for now). Using the standard geometric summation formula, this is equal to H(z) = (z^(n+1) - z^(-n)) / (z - 1). Presto, that’s the moving average derivation in a single line. :) Multiple passes are just powers of H(z), and multiplying it out gives you a “fused” way to compute multiple passes – using a very rare beast, an n-pole FIR filter.

      H^2(z) = (z^(2n+2) - 2z + z^(-2n)) / (z - 1)^2, so we only get 3 reads (as you mention) but a 2-pole feedback stage (need to keep track of 2 previous output samples). That’s still totally reasonable.
      H^3(z) = (z^(3n+3) - 3z^(n+2) + 3z^(1-n) - z^(-3n)) / (z - 1)^3. Reads 4 samples, has 3 poles. And so forth – you get the idea. The point being that all of this is still completely reasonable if you don’t mind the extra arithmetic on the output end. Haven’t looked at the subpixel case though; yes, things get messy there.

  3. I like the formulation of the incremental non-integer-radius blur!

    So one issue is the precision of intermediates. If your pixels are U8, if you do one direction of blur and stuff it back into U8s, and then do another direction of blur, you’re losing precision in the intermediate. (no big deal in a demo context, but not nice as a general way to do filters)

    It’s almost as fast to keep a few rows of higher precision, whatever the width of the filter kernel is, of U16 pixels. Then you grab U8 horizontal, blur and shift left a few steps to get some fractional precision, write the output to the U16 row buffers. Each time you step a scan line you apply the vertical kernel to the row buffers and write out a full row. The row buffer is circular. As long as the cache is big enough to fit the entire row buffers, this is very fast. Actually this can be much faster when memory bandwidth is your limitation, but it makes the horizontal and vertical code totally different.

    The other thing is, I think it’s faster to read-vertical and write horizontal, rather than read horizontal and write vertical. The reason is if you write horizontal you are always writing full cache lines, so you don’t have to fetch a line and just change one byte and flush it out again. The difference is pretty small I guess because the vertical cache misses will always dominate.

    • As mentioned in the first part, our texture generator keeps things at 15 bit/channel internally despite only ever outputting ARGB8888 images. The “in a demo context” bit here is misleading – with fully generated textures it’s not uncommon to have paths with 50+ image manipulation/filtering and several color correction operations in sequence, which will absolutely kill your precision at 8 bits per channel no matter how careful you are about rounding. Within a filter (e.g. the blur) you can keep extra precision, for the temporaries and round (or even dither) on output, but if you do a dozen or more dependent steps in sequence that won’t save you. With the 15 bits/channel, you do have headroom for this kind of thing, and you can afford to not have extra precision in the middle stages, which makes things simpler.

      One thing I did not mention in the article is that for fixed-point linear interpolation on the input side, you want to not down-scale before you add into the filter sum; instead keep all the fractional bits. Again, don’t throw away precision unless you have to! There’s a balancing act between max size of blur kernels and bits in your texture; with the 15 bits/channel and 6 bits “subtexel” accuracy the filter uses, you get a max kernel width of 1024 pixels before you start overflowing into the sign bit in a 32-bit accumulator. That’s plenty to be sure, but still, it’s a noticeable limit.

      “The other thing is, I think it’s faster to read-vertical and write horizontal, rather than read horizontal and write vertical. The reason is if you write horizontal you are always writing full cache lines”
      I did mention explicitly that you should write stripes of rows, not single scan lines – and specifically, size the stripe so you always write full cache lines at a time. That accomplished pretty much the same thing.

      Another thing to be careful about is that with power-of-2 texture sizes, it’s really easy to screw yourself by hitting cache set aliasing issues. It can make a dramatic difference to just allocate 64 extra bytes per line just to make sure you hit all sets evenly.

      • cbloom

        Indeed! Quite hypocritical, since one of my greatest pet peeves is boners who leave “but what about this!” comments when I’ve already addressed that topic.

        Presumably the ideal way to do multiple effects on an image is not to run multiple passes over the image but to run multiple passes over the scratch buffer (as long as the effects are H-V separable like blur).

      • Separability is not a very useful criterion here; few things besides straight convolution filters actually care.

        We did have a version for mobile devices years ago (2004/2005, long before they had GPUs worth the name) that distinguished between three types of filters:
        – 1:1 pixel transforms (this covers basically all color correction and compositing)
        – Wants full image input but can produce output tile by tile (most operations that perform any kind of geometric transform, most generators)
        – Wants full image, writes full image (this is rare; the first pass of the separable moving-average blur is an example)

        The textures would be stored and, generally, processed tile by tile (16×16 pixels at up to 8 bytes/pixel if my memory serves me right), and most of the 1:1 transforms would live directly off the output of their predecessor without any of it ever being written to memory. The goal was for most processing to happen purely in the cache, to save on memory bandwidth (this was aiming for 16-32k of L1 data cache with no L2, hence the small tiles).

        It was a cute idea and worked reasonably well, but we never seriously pursued this on PC which had faster memory so we were more compute limited. More importantly, all our PC tex gen work was shooting for small size, and while the tile stuff wasn’t bad, it did need a fair bit more infrastructure than the dead simple “just call functions on images” model we used otherwise.

  4. I didn’t get how you link between the STD of the Gaussian Filter and the parameters of the Box Blur.

    Let’s say I want to replicate GB with STD 5.

    Which length of box filter should I choose?
    How many iterations?

    • As I say in the post, “if you want to see proof (and also get an exact expression for the variance of our iterated modified box filters), this paper has the answers you seek.”

      Equation (14) is the exact variance. However, the exact variance interpolates the much simpler expression (7) at integer points: for a d-times iterated box filter with exact integer width L, we have sigma^2 = d*(L^2 - 1)/12. You can invert that to get a decent starting value (d you would just fix in advance). If you want a more exact solution, you can use the algorithm in Fig. 2 of the paper; in practice, I’ve not used this approximation in any case where more accuracy is necessary – inverting the expression for integer box filters yields a smooth control that’s easy to manipulate and good enough for UI, and in cases where you really care about whether your filter more closely matches a Gaussian with variance 3.1 or 3.15, you might not want to use an approximation in the first place.

      • Hi,
        Actually I’m after replicating Photoshop’s Gaussian Blur which uses Box Blur for the approximation.

    They use a combination of a few lengths to achieve it and I thought your method would be accurate to replicate their method.

    It seems this link is limited to a single-length filter.
    Though it probably can be broken up as a sum of a few (just like the variance of a sum of independent R.V.s).

      • I don’t know what Photoshop does. The method I’m describing uses iterated box filters of “fractional” widths (what the paper calls “extended box filters”). This is a more accurate approximation to a real Gaussian than mixing several integer box widths.

      • Royi Avital

        Hi,
        Let me try to explain it with an example.
        Let’s say I want to approximate a Gaussian Blur with STD of 15.4.
        I want to do it in 6 iterations.
        I’m willing to use Extended Box Blurs to gain extra accuracy.

        Now,
        Which length of the Box Filter should I choose?
        Is it as simple as floor(VAR / 6) -> l.
        And the rest of the needed variance to do as an Extended Box Blur?

      • Why is there a floor in there? The whole point of the extended box filter is that you get to use fractional widths!

        sigma^2 = d(L^2 - 1) / 12 is a continuous expression that we know is exact at integer values of L. We can just solve for L to get a continuous mapping from sigma to L (which we know is exact when L works out to be integer, but might be an approximation in between):

        L = sqrt(12*sigma^2 / d + 1)

        That is the overall width of the filter, with L=1 meaning “no change”. The code in the example is parameterized in terms of the blur radius r; r = (L-1)/2.
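
        For instance, plugging in the numbers from the example above (sigma = 15.4, d = 6; worked out here purely as an illustration): L = sqrt(12 * 15.4^2 / 6 + 1) = sqrt(475.32) ≈ 21.8, so r = (L - 1) / 2 ≈ 10.4.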

        If you need a more exact expression than that, use the algorithm from the paper!

  5. Let’s talk Classic Box Blur where you can add one scalar and subtract one scalar to have an O(m) operation.

    When you blur along a row (Scan line) how do you take advantage of SSE?
    Because you load 4-8 Pixels yet you can not jump by 4-8 pixels and use this O(m) formula.

    There must be tricks I’m not familiar with.

    Thank You.

    • For a 2D image, you can do vertical blurs by working on horizontal groups of pixels and horizontal blurs by working on vertical groups of pixels. If you’re gonna do a 2D blur, the easiest thing to do is write a single vertical-blur-then-transpose pass and run it twice. If you only want to blur horizontally, you still need to do two transposes though (one on the input and one on the output side), so this technique is less attractive for 1D horizontal than in other cases.

      • Royi Avital

        I think I explained myself poorly.

        I totally agree with your “Tip” about working along rows (Using Transpose between iterations).

        What I’m missing is that when one works on a row the loop is simple:

        1. Add new pixel value (Location + Radius).
        2. Remove old pixel value (Location – Radius – 1).
        3. Move location to new pixel and return to 1.

        When doing SSE you load 4 pixels.
        The trick is no longer valid.

        You can do some math to fix it; I was wondering what’s the right way to do it in SSE (keeping it O(n)).

        Thank You.

      • Read my reply again.

        The solution is to have your SIMD vectors be orthogonal to your blur direction. So when you’re loading say a horizontal group of 4 pixels, that’s in an iteration where you’re blurring the *columns*. Where the scalar approach works on one column at a time, the SIMD solution does (say) 4 at once.

        Likewise, for a horizontal loop (e.g. in increasing x direction), the samples you have to load are a *vertical* group: (x,y), (x,y+1), (x,y+2) and (x,y+3).
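
        To make that concrete, here is a hedged sketch (my code, not something from the post: float pixels, plain integer radius, boundary rows and the fractional end taps omitted) of a column-direction box blur where each SSE register holds 4 horizontally adjacent pixels, so the scalar running-sum trick runs on 4 columns at once:

        #include <xmmintrin.h>

        // Blur columns x0..x0+3 of a w x h row-major float image vertically
        // with integer "radius" m (caller ensures x0+3 < w and h > 2*m).
        // Only interior rows are written; boundary handling is omitted.
        void box_blur_columns_sse(float *dst, const float *src,
                                  int w, int h, int m, int x0)
        {
            __m128 scale = _mm_set1_ps(1.0f / (2 * m + 1));

            // Running sum over rows 0..2m, i.e. the window for output row m,
            // held for 4 adjacent columns at once.
            __m128 sum = _mm_setzero_ps();
            for (int y = 0; y <= 2 * m; y++)
                sum = _mm_add_ps(sum, _mm_loadu_ps(src + y * w + x0));

            for (int y = m; y < h - m; y++) {
                _mm_storeu_ps(dst + y * w + x0, _mm_mul_ps(sum, scale));
                if (y + 1 < h - m) {
                    // Slide the window down one row: add the entering row,
                    // subtract the leaving row (4 columns in parallel).
                    sum = _mm_add_ps(sum, _mm_loadu_ps(src + (y + m + 1) * w + x0));
                    sum = _mm_sub_ps(sum, _mm_loadu_ps(src + (y - m) * w + x0));
                }
            }
        }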

      • Royi Avital

        Hi,

        This is one of the solutions I thought about.
        But it raised 2 issues:

        1. Wouldn’t this memory access pattern (pixels aren’t contiguous in memory) kill performance?
        2. The load operations of SSE require the pixels to be contiguous in memory. One could do it with “_mm_set…”, yet again, wouldn’t that be so inefficient it would kill performance?

        I thought maybe there is a trick for the case of working on one row with SSE.

        I appreciate you share your vast knowledge.

        Thank You.

  6. Again: this is why I said you should be blurring *vertically* so you can load 4 adjacent pixels in memory, then transpose the image (you can fold that into the output of the previous pass), then do another vertical blur+transpose pass.

    • Royi Avital

      Hi,

      You can be sure I understood what you suggested.
      I thought about doing it as well.

      The issue is that each iteration jumps by the width of the image, which means a bad memory pattern for the prefetchers.

      Am I missing something?

      Thank You.

  7. Strided linear access is still linear. You probably want to process enough columns at the same time to fill up a cache line, which makes the whole thing a bit wider, but that’s it.

  8. Royi Avital

    I see.
    I’m not experienced with those kinds of things.

    I thought the Prefetchers wouldn’t like this kind of memory access (Constant Stride yet different than 1 / 4).

    I guess now the question is how to efficiently unpack an SSE variable into an array for the transpose.
    Currently what I do is store into a 4-element array and then unpack this with a loop.

    Any thought on this?

    Thank You.

      • Royi Avital

        Hi,
        I tried both methods.

        It seems that filtering along the row, even with the price of using unaligned memory loads for the SSE register, is up to 4 times faster.

        I have an idea how to both load data in an aligned manner and work on rows.

        Anyhow, the pattern to load data aligned and filter along the columns is for some reason much slower.

        If someone is interested, I will be happy to share the code.

  9. james

    Thank you for this very helpful article. I have written a compute shader based on this, and came up with one optimisation. In the moving average loop, if you keep the decimal sum separate, you can simply discard it each iteration and add it again (2 texture samples). This avoids having to sample just to subtract the decimal weight, which happens twice per iteration. So overall I was able to save 2 samples per iteration. When writing the resulting pixel, there is slightly more calculation as you need to multiply the decimal sum by the decimal weight, and then add it to the actual weight.

  10. Jerry

    For those that land here due to Google searches: here is the first widely published paper on this, from Paul Heckbert. He leveraged Perlin’s ideas shared in a SIGGRAPH course. It also points out the bit about interpolating two taps to provide fractional extents. https://www.researchgate.net/publication/220721661_Filtering_by_repeated_integration

  11. Jerry

    Also check out Paul’s earlier paper, Fun with Gaussians… https://www.researchgate.net/publication/2313072_Fun_With_Gaussians
