# Square waves, or non-elephant biology

### How to analyze non-sine waveforms if all the electronic theory deals in sines?

In some of the earlier articles on this blog, I touched on the subject of digital square wave signals and their interactions with the analog realm. In an effort to keep the texts accessible to hobbyists, I did my best to stay clear of modeling the behavior of the circuits in the frequency domain; instead, I simply offered some rules of thumb for different signal speeds. Alas, in doing so, I managed to draw the ire of one or two electrical engineers on Hacker News.

The reason I shied away from this analytical tool is that all the intuitive, hobbyist-friendly rules for analyzing the frequency response of electronic circuits date back to the early days of radio and work only for sine waves. A sine-shaped signal that travels through a capacitor or an inductor emerges on the other end as sine, with its amplitude and phase offset according to a simple formula. The case of a square wave is a lot more complex and notionally calls for advanced calculus. In effect, we have divided the field into elephant and non-elephant biology — and heck, the elephant stuff is quite easy to grasp!

Still, there’s one invaluable and accessible trick for dealing with non-elephant square waves: as it turns out, they can be approximated as a sum of a sine wave at the fundamental frequency, plus sine waves at that frequency’s odd multiples. Specifically, the time-domain formula is:

sq(t) = 4/π · Σ [n = 1 … ∞] sin((2n − 1) · 2πft) / (2n − 1)

Or, in a more straightforward notation:

sq(t) = 4/π · [ sin(2πft) + sin(3 · 2πft)/3 + sin(5 · 2πft)/5 + sin(7 · 2πft)/7 + … ]

Although the sum is notionally infinite, the cool part is that the sequence quickly converges to the expected waveform. The following plots show the first few approximations, for sum lengths of 1, 2, 6, and 20:

If you have gnuplot installed, you can experiment with this convergence using the following set of commands:

```
set samples 2000
odd_h(x, n) = sin(x * (2*n - 1)) / (2*n - 1)
plot sum [n=1:20] 4/pi * odd_h(x, n)
```

Replace “20” in the last line with the number of odd frequency multiples you wish to sum.
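If you’d rather experiment in Python, here’s a rough equivalent of the gnuplot snippet (a sketch assuming numpy is installed; the function name is my own, and you can feed `x` and the partial sums to matplotlib if you want the actual plots):

```python
import numpy as np

def square_approx(x, n_terms):
    """Partial Fourier sum of a square wave: 4/pi * sum of odd harmonics."""
    n = np.arange(1, n_terms + 1)
    k = 2 * n - 1  # odd multiples: 1, 3, 5, ...
    # np.outer lets x be an array; sum the harmonics along the last axis.
    return (4 / np.pi) * np.sum(np.sin(np.outer(x, k)) / k, axis=-1)

x = np.linspace(0, 2 * np.pi, 2000)
for n_terms in (1, 2, 6, 20):
    y = square_approx(x, n_terms)
    print(f"{n_terms:2d} terms: peak value {y.max():.3f}")
```

As with the gnuplot version, change the term count to see how quickly the sum settles toward the ±1 plateaus of an ideal square wave.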

As should be evident from the illustration, by the time you get to n = 6 (fundamental frequency times 11), the waveform looks close enough to the real deal for almost all intents and purposes. The inclusion of further multiples (“harmonics”) offers diminishing returns as a consequence of the increasing divisor in the underlying expression — although it slightly increases the slope of rising and falling edges. It also temporally compresses the stubborn ringing artifacts near the discontinuities, a pattern known as the Gibbs phenomenon.

You can actually observe this sine-square wave duality in the real world. The fast Fourier transform (FFT) is an important mathematical tool that deconstructs complex signals into a spectral view of their constituent sine-wave frequencies. Using an oscilloscope to apply FFT to a 1 MHz square wave yields the following plot:

The vertical scale of the spectrum plot is logarithmic; the gradations are 6 dB apart, which is a needlessly complicated way for electrical engineers to say that each line corresponds to a 2x change in signal amplitude. You can clearly see that the fundamental frequency is followed by a series of peaks at odd harmonics (3x, 5x, 7x, etc). The second peak has an amplitude of about one-third that of the fundamental (a difference of about 9.5 dB); the third one is about one-fifth; and so forth. The sixth spike, at 11 MHz, has an amplitude of just about 9% of the first one (-20.8 dB). This is precisely as foretold by the formula.
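The decibel arithmetic is easy to double-check; for amplitudes, "dB" just means 20·log10 of the ratio:

```python
import math

def amplitude_db(ratio):
    """Convert an amplitude ratio to decibels (20 * log10)."""
    return 20 * math.log10(ratio)

print(amplitude_db(2))    # one gradation, 2x amplitude: ~6.02 dB
print(amplitude_db(3))    # fundamental vs. 3rd harmonic: ~9.54 dB
print(amplitude_db(11))   # fundamental vs. 11th harmonic: ~20.83 dB
print(1 / 11)             # ~9% relative amplitude of the sixth peak
```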

The ability to deconstruct square waves into a finite (and short!) sum of sine harmonics is quite useful. For example, the approach allows you to intuitively explore the behavior of lowpass, highpass, or bandpass filters that attenuate and phase-shift some of these harmonics depending on their frequency. All you have to do is compute a new sum of a handful of appropriately-adjusted constituent waveforms.
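To make the filter idea concrete, here’s one way to sketch a first-order RC lowpass acting on a square wave: scale each harmonic by the filter’s gain and shift it by the filter’s phase at that harmonic’s frequency, then re-sum. (A sketch assuming numpy; the first-order gain and phase formulas are standard, but the cutoff frequency and term count here are arbitrary choices of mine.)

```python
import numpy as np

def filtered_square(t, f0, fc, n_terms=20):
    """Square wave at f0 after a 1st-order lowpass with cutoff fc."""
    y = np.zeros_like(t, dtype=float)
    for n in range(1, n_terms + 1):
        k = 2 * n - 1                            # odd harmonic number
        f = k * f0                               # harmonic frequency
        gain = 1 / np.sqrt(1 + (f / fc) ** 2)    # 1st-order magnitude response
        phase = -np.arctan(f / fc)               # 1st-order phase lag
        y += (4 / np.pi) * gain * np.sin(2 * np.pi * f * t + phase) / k
    return y

t = np.linspace(0, 2e-6, 1000)          # two periods of a 1 MHz wave
y = filtered_square(t, f0=1e6, fc=3e6)  # cutoff near the 3rd harmonic
print(f"peak after filtering: {y.max():.3f}")
```

With the cutoff placed near the third harmonic, the upper harmonics are strongly attenuated and the output begins to round off toward a sine-like shape — exactly the intuition the harmonic decomposition buys you.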

With this trick up our sleeve, we can also revisit the topic of decoupling capacitors from the earlier article. We now know that a 10 MHz square wave with brisk (sub-10%) rise and fall times can be roughly modeled as a series of significant sine harmonics extending to about 110 MHz. This explains the difficulty of fully suppressing digital switching noise with a decoupling capacitor: as discussed earlier, in the vicinity of 100 MHz, parasitic inductive characteristics of low-cost MLCCs become quite pronounced and significantly limit the current such a capacitor can source. By the time you get to 200 MHz, the components do very little decoupling at all.
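The inductive misbehavior is easy to see in a simplified impedance model of a capacitor with parasitic series inductance (the 100 nF and 1 nH figures below are made-up but plausible values of mine, with equivalent series resistance ignored):

```python
import math

C = 100e-9   # 100 nF MLCC (hypothetical value)
L = 1e-9     # 1 nH parasitic series inductance (hypothetical value)

def impedance(f):
    """Impedance magnitude of a series L-C model, ESR neglected."""
    w = 2 * math.pi * f
    return abs(w * L - 1 / (w * C))

f_res = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"self-resonance: {f_res / 1e6:.1f} MHz")
for f in (1e6, 10e6, 100e6, 200e6):
    print(f"{f / 1e6:6.0f} MHz: {impedance(f) * 1000:8.1f} mOhm")
```

For these values, self-resonance lands around 16 MHz; above that point, impedance climbs roughly linearly with frequency — the part behaves like an inductor, which is why it sources less and less current as the harmonics stretch toward 100–200 MHz.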

At this point, we can also address the Hacker News quip that started it all: megahertz speeds don’t tell the whole story for digital buses. On one hand, some of the fastest binary signals might no longer look like square waves: signal rise and fall times become so substantial in proportion to pulse length that the oscilloscope trace looks more like a sine wave. In such situations, fewer (if any) harmonics might need to be considered to faithfully model a particular stream of zeroes and ones. But the opposite is also true: a high-performance CPU that pumps out signals with picosecond-range rise and fall times can cause substantial noise that extends past the 11th harmonic of the interface’s nominal data rate.

In other words, while there are some general rules of thumb to follow, it is sometimes more instructive to fine-tune your circuit design based on the expected or measured edge slope of digital signals, not just their clock frequency.

*For more articles about analog and digital electronics, please visit this index page.*