Sierpiński triangle? In my bitwise AND?
Exploring a peculiar bit-twiddling hack at the intersection of 1980s geek sensibilities.
I’m an enthusiastic promoter of the C language. One of the outmoded cultural phenomena associated with C is the bit-twiddling hack: a genre of brain-teasers that implement improbably complex algorithms using bitwise arithmetic. They are often developed under the guise of code optimization, but they mostly serve to impress your friends and confuse your enemies.
I have also previously written about fractals; they’re pieces of mathematical curiosa that enjoyed a near-mythical status in the 1980s, but are no longer talked about in polite company. One of the better-known fractals is the Sierpiński triangle. It is constructed by taking an ordinary (filled) triangle and then successively removing the middle one-fourth of every triangle that’s left:
The fractal has some interesting properties; most notably, the surface area decreases by 25% in each iteration while the perimeter increases by 50%, so the respective limits are 0 and ∞. This is despite the figure retaining the same overall footprint as the starting triangle.
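In symbols: if each iteration keeps three-fourths of the remaining area and multiplies the total boundary length by three-halves, then after $n$ iterations,

$$A_n = A_0 \left(\tfrac{3}{4}\right)^n \to 0, \qquad P_n = P_0 \left(\tfrac{3}{2}\right)^n \to \infty \quad \text{as } n \to \infty.$$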
Anyway — there is this astonishingly simple bit-twiddling hack that somehow produces the Sierpiński triangle (demo):
#include <stdint.h>
#include <stdio.h>

#define LEN (1 << 6)

int main() {
  for (uint32_t y = 0; y < LEN; y++) {
    for (uint32_t x = 0; x < LEN; x++)
      printf((x & y) ? "  " : "MM");
    putchar('\n');
  }
}
In essence, we iterate over a pair of integer coordinates, x and y, and then color each cell depending on whether bitwise x & y is zero or non-zero. That’s it! For a pair of counters running from 0 to 63, the result is this:
Increasing the range of the counters adds more detail, producing a finer-grained fractal with more and more nested triangles. But… why?
A hand-wavy explanation is that the bit-twiddling part is mostly a red herring. There’s nothing clever about bitwise AND; the magic is in the positional numeral system! If we visualize the process of counting from 0 to 63 in binary, we get the following bit pattern:
The value of the least significant bit is toggling with every tick, the next bit is flipping back and forth at half the frequency, and so on. This can be thought of as a fractal pattern in itself: as we increase the size of the counter, the visualization acquires more and more self-similar detail, with no hard limit in sight. In fact, if you squint your eyes, the pattern does look a bit like a succession of somewhat squished (log-scale) triangles.
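The figure is easy to recreate on your own. Here’s a quick sketch of mine (not part of the original demo) that renders the values 0 through 63 as rows of bits, most significant bit first, so that each column’s toggling rate is plainly visible:

#include <stdio.h>

int main() {
  for (int v = 0; v < 64; v++) {
    /* Print the 6-bit value MSB-first; the rightmost column
       toggles every tick, the next one at half that rate, etc. */
    for (int bit = 5; bit >= 0; bit--)
      putchar(((v >> bit) & 1) ? '#' : '.');
    putchar('\n');
  }
}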
For a more precise tie-in to the Sierpiński shape, we do need to peek under the hood of the x & y operation. We’re calculating a bitwise AND of two counters, but what is the result of that?
Well, it’s sufficient for any bit of x & y to be set to satisfy the condition in the program, so let’s start by looking at the MSB. If you’re used to binary numbers, it should be obvious that in our 6-bit (64-value) case, the most significant bit of the x coordinate will be zero for all values less than 32, and one for all values equal to or above that threshold:
This divides the x axis into two halves; the same is true for the y coordinate, so we end up with four quadrants in the two-dimensional plot. Only in the bottom right quadrant (32×32 cells) are both MSBs set:
In other words, plotting the MSB of x & y nets us the following:
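A quick way to see this on screen is a small variant of the original program that masks out everything except a single bit. This is a sketch of my own; the bit index b is kept in a variable so the same loop can be reused for the lower bits discussed below:

#include <stdint.h>
#include <stdio.h>

#define LEN (1 << 6)

int main() {
  uint32_t b = 5;  /* bit to plot; #5 is the MSB of a 6-bit counter */
  for (uint32_t y = 0; y < LEN; y++) {
    for (uint32_t x = 0; x < LEN; x++)
      /* Mark the cells where bit b of x & y is set. */
      printf((x & y & (1u << b)) ? "MM" : "  ");
    putchar('\n');
  }
}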
Next, let’s have a look at the second most significant bit (#4). Along the x axis, its value cycles at twice the frequency of the MSB, wrapping around at the midpoint; the same happens along the y axis. In effect, we get the same pattern as before, but tiled twice in each axis:
In other words, the operation sets the bottom right sub-quadrants of every original quadrant.
The following visualization shows the regions where bit #4 of x & y is set. These regions are superimposed on top of the previously-calculated layout for the MSB:
There are no surprises with bit #3. The pattern wraps around four times in each axis, lighting up the following regular grid of blocks:
Again, we can express it as a divide-and-set operation running at a finer scale and toggling the bottom right sub-sub-quadrants:
The same goes for bit #2, wrapping around eight times and adding a regular pattern of 64 blocks:
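(Each of these finer-grained layers can be reproduced with the single-bit sketch shown earlier by setting b to 4, 3, or 2, respectively.)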
After two more steps, we end up with the following result; the dark blue cells are coordinates that were never set in any of the passes, netting the familiar triangle:
In effect, the algorithm is an iterative block-removal approach in disguise; it just doesn’t look that way because the passes are “parallelized” by the arithmetic logic unit of a CPU — at least up to the maximum hardware-supported bit width of an integer.
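To make the “block removal in disguise” claim concrete, here’s a sketch that redraws the image with the passes spelled out: it walks over the bits one at a time, ORing together the per-bit “bottom right quadrant” masks, and keeps a cell only if no pass ever removed it. The output is identical to the original program, because removed-by-any-bit is exactly what the x & y test computes:

#include <stdint.h>
#include <stdio.h>

#define BITS 6
#define LEN (1 << BITS)

int main() {
  for (uint32_t y = 0; y < LEN; y++) {
    for (uint32_t x = 0; x < LEN; x++) {
      uint32_t removed = 0;
      /* One "pass" per bit, from the MSB down: remove the cell if it
         falls in the bottom right sub-quadrant at this scale. */
      for (int b = BITS - 1; b >= 0; b--)
        removed |= x & y & (1u << b);
      printf(removed ? "  " : "MM");  /* same as (x & y) ? "  " : "MM" */
    }
    putchar('\n');
  }
}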
This also ties in to the bit-twiddling construction method of the Hadamard matrix, which is important to the construction of the (very unimportant) square-wave frequency domains:
https://lcamtuf.substack.com/p/is-the-frequency-domain-a-real-place
To generate the Hadamard matrix, we count the number of bits set in x & y and color a cell depending on whether the count is even or odd.
This might sound puzzling, but the conceptual model is straightforward: for the Sierpiński triangle, we’re ORing the per-bit “bottom right corner” patterns; for the Hadamard matrix, we’re XORing them.
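Here’s a sketch of mine contrasting the two, with the per-bit loop made explicit. Flipping a single operator (^= versus |=) switches between a 64×64 Hadamard matrix and the Sierpiński triangle; the XOR accumulation is equivalent to testing the parity of the popcount of x & y:

#include <stdint.h>
#include <stdio.h>

#define BITS 6
#define LEN (1 << BITS)

int main() {
  for (uint32_t y = 0; y < LEN; y++) {
    for (uint32_t x = 0; x < LEN; x++) {
      uint32_t acc = 0;
      for (int b = 0; b < BITS; b++)
        acc ^= ((x & y) >> b) & 1;  /* XOR of per-bit layers = popcount parity */
      /* Swap ^= for |= above and this prints the Sierpinski triangle instead. */
      printf(acc ? "  " : "MM");
    }
    putchar('\n');
  }
}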