Square waves, or non-elephant biology
How to think about non-sine waveforms when all the accessible electronic theory deals in sines?
For newcomers to the field of electronics, it can be dumbfounding that virtually all the intuitive, hobbyist-friendly rules for analyzing the response of electronic circuits work only for sine waves. Although sine waves often crop up in nature, the associated math doesn’t seem to be directly applicable to, say, the square waves that are ubiquitous in modern digital circuitry.
Sine waveforms have some nice mathematical properties: for example, when they travel through capacitors or inductors, they emerge on the other end looking essentially the same, with their amplitude and phase adjusted according to simple formulas. The case of other waveforms is more complex and notionally calls for serious calculus.
Still, the reliance on sine-wave math feels like a cop-out. In effect, we have divided the field into elephant and non-elephant biology — and we’re really proud that the elephant stuff is quite easy to grasp.
Luckily, there’s one invaluable and accessible trick for dealing with non-elephant square waves: as it turns out, they can be approximated as a sum of a sine wave at the fundamental frequency, plus sine waves at that frequency’s odd multiples. Specifically, the time-domain formula is:

    square(t) = 4/π · [ sin(2πft) + 1/3 · sin(2π · 3ft) + 1/5 · sin(2π · 5ft) + … ]

Alternatively, a more straightforward notation is:

    square(t) = 4/π · Σ sin(2π · nft) / n,  summed over odd n = 1, 3, 5, …
Although the sum is notionally infinite, the cool part is that the series quickly converges toward the expected waveform. The following plots show the result of summing the first few terms:
The plot also sheds some light on why the approach works: every next element of the sum crosses through the same zero points as the preceding one, but then sums destructively with the peaks of the earlier signal — essentially increasing the zero-crossing slope but flattening the humps.
By the time you get to a six-element sum (ending with the fundamental frequency times 11), the waveform looks close enough to the real deal for almost all intents and purposes; at that point, the average approximation error is less than 10%, and the rising and falling edges are within 3° of vertical. The inclusion of further multiples offers diminishing returns as a consequence of the increasing divisor in the underlying expression.
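The convergence is easy to verify numerically. Here’s a minimal Python/NumPy sketch (my own illustration, with an arbitrary choice of frequency and sample count) that sums the first few odd harmonics and measures how far the result strays from an ideal ±1 square wave:

```python
import numpy as np

def square_partial_sum(t, f, n_terms):
    """Sum the first n_terms odd harmonics of a unit square wave at frequency f."""
    result = np.zeros_like(t)
    for k in range(n_terms):
        n = 2 * k + 1  # odd harmonics: 1, 3, 5, ...
        result += np.sin(2 * np.pi * n * f * t) / n
    return 4 / np.pi * result

f = 1.0                                       # 1 Hz fundamental; shape is frequency-independent
t = np.linspace(0, 1, 10_000, endpoint=False)
ideal = np.sign(np.sin(2 * np.pi * f * t))    # ideal +/-1 square wave

for n_terms in (1, 2, 6):
    approx = square_partial_sum(t, f, n_terms)
    err = np.mean(np.abs(approx - ideal))
    print(f"{n_terms} terms (up to harmonic {2 * n_terms - 1}): mean |error| = {err:.3f}")
```

Each extra term shrinks the mean deviation, and the improvement tapers off quickly, consistent with the 1/n weighting of the harmonics.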
You can also observe this the other way round. The fast Fourier transform (FFT) is an important mathematical tool that deconstructs complex signals into a spectral view of sine wave frequencies. Using an oscilloscope to apply FFT to a real-world 1 MHz square wave yields the following plot:
The vertical scale of the spectrum plot is logarithmic; the gradations are 6 dB apart, which is a needlessly complicated way for electrical engineers to say that each line corresponds to a 2× change in signal amplitude. You can clearly see that the fundamental frequency is followed by a series of peaks at odd harmonics (3×, 5×, 7×, etc). The second peak has an amplitude of about 1/3rd of the fundamental (a difference of about 9.5 dB); the third one is about 1/5th; and so forth. The sixth spike at 11 MHz has an amplitude of just about 9% of the first one (-20.8 dB).
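You don’t need an oscilloscope to reproduce this experiment: the same spectrum falls out of a few lines of NumPy. The sketch below (again my own, with an arbitrary record length) samples an ideal square wave and compares the harmonic amplitudes against the predicted 1/n ratios:

```python
import numpy as np

n = 4096                                 # samples per record
cycles = 16                              # whole square-wave periods in the record
t = (np.arange(n) + 0.5) / n             # half-sample offset avoids sampling exact zero crossings
square = np.sign(np.sin(2 * np.pi * cycles * t))

# Scale FFT magnitudes so a unit-amplitude sine reads as 1.0:
spectrum = np.abs(np.fft.rfft(square)) / (n / 2)

f0 = cycles                              # FFT bin of the fundamental
for harmonic in (1, 3, 5, 11):
    ratio = spectrum[harmonic * f0] / spectrum[f0]
    print(f"{harmonic}x: {ratio:.3f} of the fundamental "
          f"({20 * np.log10(ratio):+.1f} dB)")
```

The odd harmonics land almost exactly at 1/3, 1/5, and so on of the fundamental, while the even bins stay empty, just as the series predicts.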
The ability to deconstruct square waves into a finite (and short!) sum of sine harmonics is surprisingly useful. For example, the approach allows you to intuitively explore the behavior of lowpass, highpass, or bandpass filters that attenuate and phase-shift some of these harmonics depending on their frequency. All you have to do is compute a new sum of a handful of appropriately-adjusted constituent waveforms.
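As a sketch of that idea, the snippet below pushes the six-term approximation through a hypothetical first-order RC lowpass: each harmonic is scaled by the filter’s magnitude response and shifted by its phase response before the terms are re-summed. The cutoff frequency is an arbitrary illustrative choice, not a value from any particular circuit:

```python
import numpy as np

def filtered_square(t, f0, fc, n_terms=6):
    """Square wave passed through a first-order RC lowpass, harmonic by harmonic."""
    out = np.zeros_like(t)
    for k in range(n_terms):
        n = 2 * k + 1                              # odd harmonic number
        f = n * f0
        gain = 1 / np.sqrt(1 + (f / fc) ** 2)      # first-order lowpass magnitude...
        phase = -np.arctan(f / fc)                 # ...and phase lag at frequency f
        out += 4 / (np.pi * n) * gain * np.sin(2 * np.pi * f * t + phase)
    return out

t = np.linspace(0, 2, 2000, endpoint=False)
soft = filtered_square(t, f0=1.0, fc=2.0)          # cutoff just above the fundamental

for n in (1, 3, 11):
    print(f"harmonic {n}: gain {1 / np.sqrt(1 + (n * 1.0 / 2.0) ** 2):.2f}")
```

With the cutoff sitting just above the fundamental, the higher harmonics are attenuated much more heavily than the first one, so the result is a rounded, lagging version of the original square wave.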
To illustrate the qualitative worth of this approach, we can also revisit the topic of decoupling capacitors from an earlier article. We now know that a 10 MHz square wave with brisk (sub-10%) rise and fall times can be roughly modeled as a series of significant sine harmonics extending to about 110 MHz. This explains the difficulty of fully suppressing digital switching noise with a single capacitor: as discussed in that article, in the vicinity of 100 MHz, parasitic inductive characteristics of low-cost MLCCs become quite pronounced and significantly limit the current such a capacitor can source. By the time you get to 200 MHz, the components do very little decoupling at all.
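To put rough numbers on that, here’s a simple series R-L-C model of a decoupling capacitor. The component values (100 nF of capacitance, 1 nH of parasitic inductance, 20 mΩ of ESR) are ballpark illustrative figures of my choosing, not data for any specific part:

```python
import numpy as np

def mlcc_impedance(f, c=100e-9, esl=1e-9, esr=0.02):
    """|Z| of a capacitor modeled as a series R-L-C: ESR + j*(w*ESL - 1/(w*C))."""
    w = 2 * np.pi * f
    return np.abs(esr + 1j * (w * esl - 1 / (w * c)))

for f in (1e6, 16e6, 100e6, 200e6):
    print(f"{f / 1e6:>5.0f} MHz: |Z| = {mlcc_impedance(f) * 1000:.0f} mOhm")
```

With these values, the model is capacitive below its self-resonance near 16 MHz and increasingly inductive above it, so the impedance at 100-200 MHz climbs well past the minimum it offers near resonance.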
👉 For the sine-wave representation of more complex signals, check out the article on DFT. For an explanation of why sine waves are common in physics, click here. For more on analog and digital electronics, please visit this index page.
I write well-researched, original articles about geek culture, electronic circuit design, and more. If you like the content, please subscribe. It’s increasingly difficult to stay in touch with readers via social media; my typical post on X is shown to less than 5% of my followers and gets a ~0.2% clickthrough rate.