Deep dive: the instability of op-amps
A closer look at op-amp feedback loops and the stability criteria for circuits that use them.
This article assumes some familiarity with signal amplification and the knowledge of basic op-amp circuits. If you need a refresher, start here.
In a recent write-up on photodiodes, I mentioned that adding capacitance to an op-amp’s feedback loop can spell trouble: it might cause ringing, sustained oscillations, or excessive gain peaking.
It’s easy to offer a hand-wavy explanation of this phenomenon, but the mechanics of amplifier feedback loops are worth a closer look: control feedback is ubiquitous in analog signal processing and can be tricky to get right. Alas, the underlying theory is rather inaccessible to hobbyists; you’re either getting sucked into the discussion of zeroes and poles of complex-number transfer functions, or into obtuse jargon along the lines of this snippet from Analog Devices:
"The amplifier’s stability in a circuit is determined by the noise gain, not the signal gain."
What does noise have to do with stability? Not much! If you’re curious, the term has to do with the habit of modeling real-world op-amps as an ideal amplifier with a voltage noise source attached to one of its legs. For some reason, Texas Instruments insists it has to be the non-inverting leg:
“Input voltage noise is always represented by a voltage source in series with the noninverting input.”
So… is “noise gain” just a trade term for the amplification on that leg? Do op-amps have different gains on each input? How do you set this gain, or where do you find it in the spec?…
All these abstractions make sense in the context of mathematical models, but they don’t build intuition about the real world. In today’s article, let’s talk about feedback loops in a more accessible way. We’ll arrive at the same results, just hopefully with less confusion.
Back to the basics: the open-loop amplifier
Before we proceed, let’s revisit the following rudimentary op-amp circuit:
If you’re here, you should already know the basic behavior of an op-amp: it takes the difference between the two input voltages, amplifies that by some very large number (open-loop gain, AOL), and then outputs the amplified difference as another voltage. Strictly speaking, the output is typically referenced to the midpoint between the power supply rails — let’s call it Vmid — so the complete formula is:
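Vout = Vmid + AOL × (Vin+ − Vin-)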
Because AOL is normally huge — 100,000 or more — there’s just a microvolt-range linear region around Vin+ ≈ Vin-. Any input difference larger than that is bound to send the output all the way toward one of the supply rails.
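To put that in perspective: with a 0–5 V supply, the output can only span about 5 V, so at AOL = 100,000, the linear input region is on the order of 5 V / 100,000 = 50 µV wide. Any larger imbalance slams the output into one of the rails.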
Equipped with this knowledge, we can trivially simulate the behavior of our circuit, keeping in mind that Vin- is permanently tied to 2.5 V:
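To make this concrete, here is a minimal Python sketch of that open-loop behavior; the 0–5 V supply range and the specific AOL figure are illustrative assumptions rather than the parameters of any particular part:

```python
import numpy as np

# Open-loop op-amp model: Vout = Vmid + AOL * (Vin+ - Vin-), clipped to the rails.
# The supply range and gain below are illustrative, not taken from any datasheet.
V_RAIL = 5.0         # single 0-5 V supply
V_MID = V_RAIL / 2   # 2.5 V midpoint
A_OL = 100_000       # open-loop gain

def opamp_out(v_plus, v_minus):
    vout = V_MID + A_OL * (v_plus - v_minus)
    return float(np.clip(vout, 0.0, V_RAIL))

# Sweep Vin+ around 2.5 V while Vin- stays tied to 2.5 V:
for dv in np.linspace(-200e-6, 200e-6, 9):    # +/- 200 uV around the midpoint
    print(f"Vin+ = {2.5 + dv:.6f} V  ->  Vout = {opamp_out(2.5 + dv, 2.5):.3f} V")
```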
That said, there is a kink in this model: the capacitances, inductances, and resistances inside any real-world op-amp will limit the chip’s response speed. One prominent effect is that past a certain point, the apparent internal amplification of the device starts halving with every doubling of the frequency of an input sine wave. It eventually reaches 1x (unity gain) — and past that point, you’re stuck with an attenuator instead of an amplifier:
(Although the detail is unimportant, some readers might recognize that this is the curve of a first-order RC lowpass filter — as discussed in one of the earlier articles.)
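If you want to attach a formula to that curve: a first-order rolloff with a corner frequency fc behaves as |A(f)| = AOL / √(1 + (f / fc)²). The response is nearly flat below fc, then halves with every doubling of f, crossing 1x at roughly f ≈ AOL × fc (the number usually quoted in datasheets as the gain-bandwidth product).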
The voltage follower
Because constantly swinging between the supply rails is not a particularly useful feat, op-amps are typically employed with some form of a negative feedback loop. The simplest example — a voltage follower with an apparent 1x (unity) gain — is shown below:
At a glance, the circuit seems simple. The output voltage is looped back onto the inverting input, so if Vin+ > Vout, the differential signal is positive and the output starts creeping up; conversely, if Vin+ < Vout, the voltage starts moving down. The process stops once the circuit reaches an equilibrium between all three voltages — essentially forcing Vout to march in lockstep with the input signal:
We often take the mental shortcut of assuming that the circuit’s equilibrium is reached at Vin+ = Vin- = Vout — but that’s not actually possible. Remember that at its core, the op-amp is doing the following math:
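Vout = Vmid + AOL × (Vin+ − Vin-)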
If the differential voltage were exactly zero, then Vout would be pinned at Vmid, no matter the value of AOL. But if Vout (and by extension, Vin-) is stuck in place, then not only is the circuit not doing anything useful, but the differential voltage can’t actually be zero… at least not for most of the possible Vin+ voltages. Oops!
What happens in reality is that the balance is reached with Vin- and Vout at a slight offset from the input signal. This offset acts to nudge the amplifier just enough to reproduce the input signal when the “error” between Vin+ and Vin- is multiplied by AOL. Crucially, if the error signal is too large or too small, the negative feedback loop rectifies the problem in the blink of an eye, pulling Vout in the opposite direction until the balance is restored.
We can figure out the exact magnitude of these differential-voltage nudges if we take the op-amp equation and just substitute Vin- with Vout. This substitution is fair game because in the follower circuit, the two pins are tied together:
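Vout = Vmid + AOL × (Vin+ − Vout)
Vout × (1 + AOL) = Vmid + AOL × Vin+
Vout = Vmid × 1 / (1 + AOL) + Vin+ × AOL / (1 + AOL)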
The analysis partly vindicates the mental shortcut we have talked about before: when AOL is very high, it follows that 1 / (1 + AOL) ≈ 0 and AOL / (1 + AOL) ≈ 1, so the last expression is kinda-sorta equivalent to:
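Vout ≈ Vin+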
That said, although the scale of the “error” nudges is often negligible, ignoring the phenomenon makes the behavior of op-amps difficult to grasp. Just as importantly, as the input signal frequency increases, the op-amp’s internal gain drops, so Vout must drift farther away from the target for a sufficient differential voltage to appear across the input pins. In other words, at high frequencies, the output error of the closed-loop circuit can get quite large.
Understanding loop gain
One important observation we ought to make is that the addition of a feedback loop did nothing to change the op-amp’s gain. We just make Vin- track Vin+ very closely to muzzle the IC’s wild instincts. Even in the closed-loop configuration, the chip’s internal amplification remains AOL — and for the sake of signal accuracy, we want it to stay that way!
As we established earlier, the control signal — the error piggybacking on Vout — is always as small as the op-amp’s internal gain permits. In effect, the feedback loop can be thought of as having its own gain, separate from any signal amplification configured by the designer of the circuit. And although what’s going on in the feedback loop is typically hidden from view, any noise picked up there will appear on the output pin too.
Conceptually, the magnitude of the loop gain tracks the vertical distance between the op-amp’s intrinsic amplification curve and the circuit’s configured signal gain; the wider that gap, the higher the loop gain (in our example, the signal gain is 1x):
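For the non-inverting circuits discussed in this article, this relationship can be written down as loop gain ≈ AOL(f) / signal gain. On the usual logarithmic (dB) gain plot, division turns into subtraction, so the loop gain is simply the gap between the open-loop curve and the signal-gain line.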
It might not be immediately obvious why loop gain is reduced when signal gain goes up, but it’s informative to look at the usual way of adding gain to a voltage follower:
The operating principle of this circuit is simple: the resistors attenuate the output signal fed back at Vin- by a factor of two, so Vout has to swing twice as much to achieve the same result on Vin-. We get twice the output amplitude — but the gotcha is that the divider also halves the “error” signal that piggybacks on Vout and actually makes the circuit work: only half of it survives the trip back to Vin-. The op-amp has no way to compensate for that loss, so the magnitude of the error nudges in the feedback loop will need to increase.
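To put numbers on it: with the input sitting 1 V above Vmid and AOL = 100,000, the plain follower settles with a differential error of roughly 1 V / 100,000 = 10 µV. In the 2x circuit, only half of the output makes it back to Vin-, so sustaining the same input requires about 20 µV of error across the input pins.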
At this point, we also have a preliminary explanation of one of the more counterintuitive properties of op-amps: that instability and some types of noise problems can get worse at lower signal amplification rates (because of higher loop gain).
The tape-delay op-amp circuit
Next, let’s consider the following unorthodox design:
It’s a voltage follower — except for a contraption that records output voltages and plays them back some time later on the Vin- side, delaying the feedback signal by a preset amount.
To explore this circuit, I built a toy discrete-time model with a limited bandwidth and a feedback loop that lags by five “ticks”. I then supplied a single input pulse on Vin+ and let the simulation run:
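For readers who would like to tinker with this, below is a minimal Python sketch of such a model. The specific gain, bandwidth, and delay values are my own guesses rather than the exact parameters behind the plots, but the qualitative behavior is the same:

```python
import numpy as np

# A toy discrete-time "tape delay" follower, loosely mirroring the one described
# above. The gain, bandwidth, and delay values are illustrative guesses, not the
# exact parameters used for the plots in the article.
RAIL, VMID = 5.0, 2.5
DELAY = 5          # feedback lag, in simulation ticks
LOOP_GAIN = 0.9    # a tad below one; try 1.3 to make the ringing grow instead
ALPHA = 0.6        # crude one-pole "bandwidth limit" on the output stage
TICKS = 80

vplus = np.full(TICKS, VMID)
vplus[:DELAY] = 5.0                 # a single five-tick input pulse
vout = np.full(TICKS, VMID)

for t in range(1, TICKS):
    vminus = vout[t - DELAY] if t >= DELAY else VMID         # delayed feedback
    target = np.clip(VMID + LOOP_GAIN * (vplus[t] - vminus), 0.0, RAIL)
    vout[t] = vout[t - 1] + ALPHA * (target - vout[t - 1])   # limited bandwidth

for t in range(40):                 # print (or plot) the first 40 ticks
    print(f"t={t:2d}  Vin+={vplus[t]:.2f}  Vout={vout[t]:.2f}")
```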
For the first five ticks, Vin+ (blue) is seeing a positive pulse while Vin- (red) is staying put due to the delay. As a consequence, Vout shoots up — although because of the baked-in bandwidth limit, it does not immediately jump all the way to +5 V.
After five ticks, Vin+ goes back to the mid-point and stays there for the rest of the simulation — but all of a sudden, the delayed echo of the last pulse arrives on Vin-. It pulls the differential voltage negative, thus producing an inverted echo on the output. This back-and-forth process continues for a while, with gradually decaying amplitude.
The most important observation is that past the initial stimulus, the entire action is confined to the feedback loop. There is no movement on the signal input (Vin+), so the configured signal gain plays no role. The only reason the oscillations decay over time is that the gain of the feedback loop itself happened to be a tad below one.
If we increase loop gain a tiny bit, we get this result:
In the end, there are two conditions that must be met for sustained oscillation:
There must be a feedback delay of about one-half of a wavelength (180°) at some sine-wave frequency.
The feedback loop gain at that frequency must be at least one.
The first requirement means that higher frequencies are of more concern, because it’s easier to accidentally introduce comparatively large delays when the wavelength you’re working with is short. The second requirement means that we generally only have to worry about the frequencies below the IC’s internal unity-gain point.
But what kind of a weird contraption is that?!
Right. Tape-delay mechanisms are not commonly employed in op-amp circuits. Non-negligible transmission delays can sometimes crop up in electronic designs, but feedback loops are usually pretty compact on any sensibly-designed PCB. Unless you’re working with ultra-high frequencies or very long wires, signal propagation times are a relatively remote concern.
Yet, there is another, more ubiquitous form of a “delay” mechanism: the capacitor. To be clear, a capacitor is not really a time delay device: it can’t store and replay past waveforms. Instead, it’s an integrator — that is, its terminal voltage is a function of the charging currents that flowed through it before. It might sound math-y, but it’s akin to how a bucket in your backyard “integrates” rainfall over time, “outputting” it as a water level you can observe.
Now, integration distorts most waveforms — but in the special case of a sine, it just produces a phase shift:
The phase shift is between the charging current and the output voltage. Current measurements are seldom used for signaling, but if the capacitor is charged and discharged through a resistor, the current flowing through the resistor is proportional to the voltage across its terminals — so we get some degree of voltage-to-voltage shift. A detailed discussion of this phenomenon can be found in an earlier article on RC filters, so I won’t repeat it here — but in essence, the circuit adds anywhere from ~0° to ~90° of phase shift, depending on signal frequency.
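For reference, the math for a simple RC lowpass works out to a lag of arctan(2π × f × R × C): next to nothing at low frequencies, exactly 45° at the corner frequency f = 1 / (2π × R × C), and approaching 90° well above it.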
This phenomenon can be observed when shunt capacitances are added to op-amp feedback loops, even without any discrete resistors. That’s because the op-amp’s output stage can only supply a limited current: it behaves like an ideal voltage source in series with a built-in resistance, and that resistance together with the shunt capacitor forms exactly the kind of RC circuit described above.
On the surface of it, even the worst-case RC filter shift (~90°) is nowhere near the ~180° offset needed for sustained oscillation — but op-amps have some internal phase shifts too. The devices are usually designed so that their bandwidth rolls off just ahead of the danger zone, leaving just a bit of phase margin for external components. With some ICs, even as little as an extra -45° can be enough to put you in the red.
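To attach a number to that: an external RC contributes arctan(2π × f × R × C) of extra lag, which reaches 45° right at f = 1 / (2π × R × C). If the loop gain is still at or above one at that frequency and the op-amp only budgeted about 45° of margin there, the circuit ends up teetering on the edge of oscillation.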
When that happens, short of reducing the capacitance, some of the hacks may involve reducing the bandwidth of the feedback loop; bumping up signal gain; or switching to an amplifier that has more of a phase margin at the frequencies of concern.
If you liked this article, please subscribe! Unlike most other social media, Substack is not a walled garden and not an addictive doomscrolling experience. It’s just a way to stay in touch with the writers you like.
For a thematic catalog of articles on electronics and other topics, click here.
Looks like this story made it to HN, and one of the commenters asserted the following:
"Phase diagrams + OpAmp phase shift specs / phase margin are what you need to predict instability."
This isn't the point of this article. What the commenter is alluding to is a simple explanation that doesn't actually explain much. It either gives you the intuitive (but wrong!) impression that the stability criterion just involves open-loop gain + phase shift; or it does bring up the delta between open-loop and closed-loop gain, but doesn't say why it works that way.
My goal was to peek under that rock and shine some light on what's actually going on in op-amp feedback loops, because the intuitive models we come across are usually wrong - and most of the more Biblically-correct online explanations either pull you into Laplace transforms and complex-number algebra, or assume you're really familiar with some of the wacky terminology in control theory.
Some of the earlier articles do feature phase plots, e.g.: https://lcamtuf.substack.com/p/the-101-of-analog-signal-filtering