# The 101 of analog signal filtering

### Some intuition about this topic can be developed without summoning the ghost of Pierre-Simon Laplace.

Signal filters are ubiquitous in electronics; on this blog alone, they cropped up in articles on digital-to-analog converters, radio receivers, audio amplifiers, and probably more.

In principle, the relative simplicity of these circuits should make the underlying theory easy to self-study; in practice, most introductory texts rapidly devolve into a foul mix of EE jargon and calculus. On Wikipedia, unsuspecting visitors are usually greeted by some variant of this:

“We wish to determine the transfer function H(s) where s = σ + jω (from the Laplace transform). Because |H(s)|² = H(s)H̄(s) and, as a general property of Laplace transforms at s = jω, H(−jω) = H̄(jω), if we select H(s) such that: \(H(s)H(-s) = \frac{G_0^2}{1+\left(\frac{-s^2}{\omega_c^2}\right)^n}\)

The n poles of this expression occur on a circle of radius ωc at equally-spaced points, and symmetric around the negative real axis. For stability, the transfer function, H(s), is therefore chosen such that it contains only the poles in the negative real half-plane of s. The k-th pole is specified by …”

There are some aspects of signal processing that require solving novel integrals or navigating the complex S-plane — but in today’s article, let’s try a gentler approach. The following write-up still assumes familiarity with electronic concepts such as capacitance and reactance, so if you need a refresher, start here and here.

### Prelude: charging and discharging a capacitor

Have a look at the following circuit:

Let’s assume that the capacitor is discharged and that the circuit’s input terminal gets hooked up to a standard benchtop voltage supply. In this setup, the current through the resistor (blue) will initially shoot up to I = V/R, but then gradually decay to zero. Meanwhile, the voltage across the capacitor’s terminals (yellow) will increase from 0 all the way to what’s present on the input leg — in my example, 48 volts:

Intuitively, the behavior makes sense: a discharged capacitor is eager to sink any current you can spare; a fully-charged one will have none of your shenanigans. But in between these extremes, what explains the exact shape of the voltage and current curves?

Well, it’s not that the capacitor itself is a nonlinear device! The immediate culprit is actually the resistor: from Ohm’s law, the current flowing through it is always proportional to the voltage across its terminals. One leg of the resistor is connected to a fixed-voltage supply, so the potential on that side does not change; but the other leg “sees” the capacitor’s charge state, with its voltage rising over time. In effect, this negative feedback loop causes the resistor to progressively choke off the charging current, producing a pattern of exponential decay.

There is a simple way to confirm this theory. If we connect the circuit to a benchtop supply operating in the non-default constant-current (CC) mode, the voltage across the capacitor will ramp up in a straight line — at least until the supply or the capacitor gives up:

In the constant-current scenario, the voltage across the capacitor’s terminals is simply equal to the charging current multiplied by the elapsed time, and then divided by the capacitance:
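In symbols (this follows directly from the basic capacitor relation Q = C·V, with the delivered charge being Q = I·t):

```latex
V_{cap} = \frac{I \cdot t}{C}
```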

The solution for the constant-voltage case is more complicated. The current is continuously changing in response to what happened before, so to derive an exact formula, we need to solve an integral. This boils down to splitting the charging process into infinitesimal time slices and then summing the result. I’ll spare you the calculus; the result is:
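For reference, the standard result of that exercise, with V_in denoting the supply voltage, is:

```latex
V_{cap}(t) = V_{in} \cdot \left( 1 - e^{-\frac{t}{RC}} \right)
```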

In effect, the capacitor’s charge state depends only on the ratio of *t* to *RC* (*e* is a mathematical constant). At *t = 0.7 RC*, the capacitor is ~50% charged; at *t = 3 RC*, the process is about 95% done; and at *t = 5 RC*, the charge state is 99+%.
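These milestones are easy to sanity-check numerically; here’s a quick sketch (the component values mirror the example that follows, but any R and C will do):

```python
import math

def charge_fraction(t, R, C):
    """Fraction of full charge on an RC-charged capacitor after time t."""
    return 1 - math.exp(-t / (R * C))

R, C = 10e3, 1e-6  # 10 kOhm, 1 uF -> RC = 10 ms
for mult in (0.7, 3, 5):
    print(f"t = {mult} RC -> {charge_fraction(mult * R * C, R, C):.1%}")
```

Plugging in the numbers yields ~50.3%, ~95.0%, and ~99.3%, matching the rule-of-thumb figures above.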

To test this in practice, let’s assume R = 10 kΩ and C = 1 µF; as per the formula, 50% charge should be reached in ~7 ms. And indeed, we can easily observe this by supplying a 5 Hz square wave on the input leg:

At that frequency, the output still resembles the input, although the edges of the square wave are markedly rounded. At higher frequencies, the capacitor never charges to an appreciable extent, so the output amplitude is significantly attenuated — and the waveform consists only of the initial, nearly-constant-current portions of the charging slope:

### The case of the sine waveform

The above R-C circuit is a lowpass filter: it lets low frequencies through while attenuating higher-frequency components. Some readers are probably expecting that I’m about to reveal the *real* lowpass filter: a better design that doesn’t distort the shape of the square waveform. But nope — for the most part, what we have here is the real deal.

As it turns out, most analog filters are relatively distortion-free only in one special case — a perfectly steady sine waveform:

The special status of this waveform has a neat mathematical basis: integration (and differentiation) of a sine function yields a sine — or more precisely, its phase-shifted sibling, a cosine. Other common waveforms in electronic circuits don’t share this property — and get mangled in various ways.

The mangling is not always an issue. In radio applications, the transmission is usually a sine signal; it is modulated, but the rate of modulation is much slower than the carrier frequency. In other words, in the local view, we’re dealing with a near-perfect sine, and the filter-caused distortion is negligible.

In audio and video, the signals are seldom pure sines; for example, many instruments produce sounds closer to sawtooth or square waves. That said, the distortion produced by analog filters tends to mimic natural phenomena. As a practical example, audio equalizers are useful for compensating for the acoustics of the listening environment even if they don’t do anything especially logical to the notes played on a trumpet or a viola.

In other cases, the distortion can’t be ignored, but it can be quantified and accounted for. Even if one is not proficient with calculus, it’s usually sufficient to recast more complex waveforms as a sum of sine harmonics. This is trivial for square waves — and fairly straightforward in the more general case, too.
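To illustrate the harmonic-sum idea, here’s a minimal sketch that approximates a square wave by adding up odd sine harmonics — the classic Fourier series for a ±1 square wave (`square_approx` is just a made-up helper name):

```python
import math

def square_approx(t, f, n_harmonics):
    """Approximate a +/-1 square wave of frequency f by summing odd sine harmonics."""
    total = 0.0
    for k in range(n_harmonics):
        n = 2 * k + 1  # a square wave contains only odd harmonics
        total += math.sin(2 * math.pi * n * f * t) / n
    return 4 / math.pi * total

# With enough harmonics, the sum flattens out near +/-1:
print(round(square_approx(t=0.25, f=1.0, n_harmonics=500), 3))
```

Each harmonic can then be attenuated and phase-shifted individually by the filter formulas, and the results re-summed.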

### But back to the lowpass filter…

Right! The sine wave scenario is neither constant voltage nor constant current, so the capacitor charge formulas discussed at the beginning of this article no longer hold.

Thankfully, the solution for sine waves is actually simpler — no calculus. Let’s start by imagining what would happen if we replaced the capacitor with another resistor, R2:

It should be fairly clear that this is a simple voltage divider; the output is a tap placed somewhere between V_in and GND, producing a voltage swing proportional to the ratio of resistances:
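The familiar voltage divider formula applies:

```latex
V_{out} = V_{in} \cdot \frac{R_2}{R + R_2}
```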

Now the fun part: recall from the discussion of impedance that a capacitor in the presence of a sine wave signal exhibits behavior that resembles frequency-specific resistance. It’s known as *capacitive reactance* and is described by the following formula:
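With f denoting the sine frequency:

```latex
X_C = \frac{1}{2 \pi f C}
```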

Perhaps, for a given sine frequency *f*, we could treat the filter as a voltage divider formed by R and C?

Well, not exactly: we can’t just take the earlier voltage divider formula and replace R2 with X_C because the capacitive and resistive effects are not aligned in phase; they’re exactly 90° apart. That said, per the second article on impedance, the “sum” of resistance and reactance can be mapped out in the complex plane and calculated using the Pythagorean theorem. In the end, the formula only needs a small tweak in the denominator:
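With the reactance substituted for R2 and the Pythagorean addition in place, the divider becomes:

```latex
\frac{V_{out}}{V_{in}} = \frac{X_C}{\sqrt{R^2 + X_C^2}} = \frac{1}{\sqrt{1 + \left( 2 \pi f R C \right)^2}}
```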

Let’s plot this equation across a range of sine wave frequencies for R = 10 kΩ and C = 1 µF:

The curve may seem a bit weird, but that’s just a graphing quirk. The attenuation behavior doesn’t taper off at high frequencies; it just looks this way because on a linear scale, the visual distance between 100% and 50% is much larger than between 10% and 5%. To avoid confusion, it’s more common to use log scale:

We can see that this particular filter lets through frequencies up to about 8 Hz with little attenuation. Then, there’s a fairly pronounced knee down to around 25 Hz, followed by a fairly straight slope. On that slope, with every doubling of the signal frequency, the amplitude of the output signal is halved.
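A quick numeric check of the halving-per-octave behavior, using the divider formula from earlier (R and C values as above):

```python
import math

def lowpass_gain(f, R, C):
    """Amplitude ratio of an RC lowpass filter at sine frequency f."""
    return 1 / math.sqrt(1 + (2 * math.pi * f * R * C) ** 2)

R, C = 10e3, 1e-6
# Well above the cutoff, doubling the frequency halves the output:
for f in (100, 200, 400, 800):
    print(f"{f} Hz -> {lowpass_gain(f, R, C):.4f}")
```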

(Electrical engineers like to obscure this simple relationship with weird mixed units such as “-20 dB per decade” or “-6 dB per octave”. It’s the same thing.)

In the filter, R is constant; meanwhile, X_C approaches infinity at 0 Hz and then decreases with signal frequency. It follows that at some specific sine frequency *f_c*, the two values briefly become equal. We can find this crossover point pretty easily:
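Setting R = X_C and solving for frequency:

```latex
R = \frac{1}{2 \pi f_c C}
\quad \Rightarrow \quad
f_c = \frac{1}{2 \pi R C} \approx 15.9\ \text{Hz for } R = 10\ \text{k}\Omega,\ C = 1\ \mu\text{F}
```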

This spot is (somewhat arbitrarily) called the *cutoff frequency*. From the earlier formula for voltages, we can see that at this exact location, the input signal is attenuated to √½ (~70%) of the original amplitude:

This corresponds to the halving of the power delivered to the load (say, a speaker) — and also coincides with the midpoint of our simple RC filter’s phase shift (45°).

Although the ~70% cutoff point has some nice mathematical properties, we can also solve the formulas for 50% attenuation, or any other point of our choice. For 50%, the calculation is:

And the associated 50% amplitude reduction frequency is:
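Working it out from the divider formula:

```latex
\frac{1}{\sqrt{1 + (2 \pi f R C)^2}} = \frac{1}{2}
\;\Rightarrow\; 2 \pi f R C = \sqrt{3}
\;\Rightarrow\; f_{50\%} = \frac{\sqrt{3}}{2 \pi R C} \approx 27.6\ \text{Hz}
```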

### Taking the high road

A highpass filter that attenuates low frequencies can be constructed in a very similar way — all we have to do is switch the capacitor and the resistor around:

In this layout, the capacitor admits only rapidly-changing currents, but blocks DC. The sine wave behavior is essentially identical (but inverse) to that of the lowpass circuit; this includes the same formula for *f_c* at R = X_C.
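For completeness, the highpass version of the divider formula simply has R and X_C swapping roles:

```latex
\frac{V_{out}}{V_{in}} = \frac{R}{\sqrt{R^2 + X_C^2}} = \frac{2 \pi f R C}{\sqrt{1 + \left( 2 \pi f R C \right)^2}}
```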

The behavior with non-sine waves is another matter. A high-frequency square wave signal will pass through with its edges intact, but with some voltage droop in between:

At low frequencies, the signal is more severely mutilated — only the edges remain, appearing as short voltage spikes of alternating polarity:

Knowledge of this pattern is useful for spotting accidental highpass filters in a circuit. These can arise when a series “capacitor” is formed by a broken wire or a cold solder joint on a digital data line.

### Higher-order filters

The filters discussed so far exhibit fairly gentle frequency rolloff behavior. This is in contrast to an idealized “brick wall” filter, which should pass through everything on one side of a chosen frequency, and then completely block everything on the other side.

The steepness of the attenuation slope can be increased by stacking RC filters in series. That said, this inevitably increases attenuation in the filter’s “pass” region, adds phase shift and signal delay — and still leaves a relatively pronounced, rounded knee around the cutoff frequency.
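Here’s a minimal sketch of that trade-off, assuming idealized stages separated by buffers so they don’t load one another (real unbuffered RC cascades interact and fare somewhat worse):

```python
import math

def rc_gain(f, R, C):
    """Amplitude ratio of a single RC lowpass stage at sine frequency f."""
    return 1 / math.sqrt(1 + (2 * math.pi * f * R * C) ** 2)

def cascade_gain(f, R, C, stages):
    """Gain of n identical RC stages, assuming buffers between them."""
    return rc_gain(f, R, C) ** stages

R, C = 10e3, 1e-6
fc = 1 / (2 * math.pi * R * C)  # cutoff of a single stage
for stages in (1, 2, 3):
    print(f"{stages} stage(s): "
          f"gain at fc/2 = {cascade_gain(fc / 2, R, C, stages):.3f}, "
          f"gain at 4*fc = {cascade_gain(fc * 4, R, C, stages):.3f}")
```

Each added stage steepens the rolloff above the cutoff, but also eats further into the passband below it.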

The common solution to this problem is to make the filter somewhat resonant in the vicinity of the cutoff frequency — be it with an extra inductor or an op-amp feedback loop. In the presence of a steady sine signal, the approach adds a region of apparent amplification due to the constructive interference between the current signal and the echo of the previous cycle. This makes the knee more “pointy”, although at the expense of the filter behaving even more chaotically if presented with something that isn’t a well-behaved sine.

The design of such filters is where the math gets heavy. Insufficient damping or poor phase characteristics can yield wildly inconsistent frequency response curves — or even sustained oscillations. That said, most of the time, filter design is done by following “cookbook” formulas based on already-proven theorems and pre-solved calculus. Of special note are the Butterworth filter, which offers the flattest possible passband if you can’t tolerate frequency response ripple; and the Chebyshev filter, which trades a chosen amount of passband ripple for a steeper rolloff.

But op-amp filters are probably a story for another time…

*If you liked this article, please subscribe! Unlike most other social media, Substack is not a walled garden and not an addictive doomscrolling experience. It’s just a way to stay in touch with the writers you like.*

*For a thematic catalog of posts on this site, click here.*
