DACs and ADCs, or there and back again
A look at how digital-to-analog and analog-to-digital converters work - from resistor ladders to delta-sigma modulation.
In one of the earlier articles, I remarked that microcontrollers are eating the world. Even for rudimentary tasks, such as blinking an LED, a microcontroller is now cheaper and simpler to use than an oscillator circuit built with discrete components or implemented using the once-ubiquitous 555 timer chip.
That said, zeroes and ones don’t always cut it. Image sensors record light intensity as a range of analog values; a speaker playing back music must move its diaphragm to positions other than “fully in” and “fully out”. In the end, almost every non-trivial digital circuit needs some sort of a digital-to-analog converter (DAC) or analog-to-digital converter (ADC) to interface with the physical world.
Nowadays, DACs and ADCs are often baked onto the dies of popular microcontrollers. That said, it still pays to understand how they work; if nothing else, the knowledge will help you grasp limitations and shop for alternatives when the default option won’t do.
Binary-weighted DAC
The conversion of digital signals to analog usually boils down to taking a binary number of a certain bit length and then mapping subsequent integer values to a range of output voltages. To illustrate, for a 4-bit DAC, there are sixteen possible binary values (0000, 0001, 0010, …, 1111), and the circuit could map them to sixteen equally spaced voltage levels: 0/15 · Vsupply, 1/15 · Vsupply, 2/15 · Vsupply, …, 15/15 · Vsupply.
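This mapping is easy to sanity-check in a few lines of Python (the 5 V supply is an assumption for illustration; the article keeps Vsupply symbolic):

```python
# Ideal transfer function of a 4-bit DAC: code k maps to k / (2^4 - 1) * Vsupply.
BITS = 4
VSUPPLY = 5.0  # assumed rail voltage, purely for illustration
levels = [code / (2**BITS - 1) * VSUPPLY for code in range(2**BITS)]
# levels[0] is 0 V, levels[15] is the full rail, with equal steps in between.
```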
About the simplest practical way to implement such a conversion is the following resistor arrangement:
It should be evident that if the binary input is 0000, the analog output is 0 V because all the lines are connected to the ground; conversely, if the input is 1111, the output must be equal to Vsupply, because all the resistors are connected to the positive supply rail.
For inputs in between, it seems reasonable to assume that we’d get a resistor-weighted average, with each bit having half the influence of its more significant predecessor. That said, instead of making hand-wavy assertions, we can prove this symbolically.
To get going, let’s label the junction voltage as Vjct and then write the current equations for each resistor. From Ohm’s law, the current through a resistor is just the voltage differential across the terminals of the component, divided by its resistance. If we use n to represent the lowest resistance in the series, we get:
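The equation image is missing from this copy; a reconstruction consistent with the rest of the derivation, with VA as the least significant bit and resistor values of 8n, 4n, 2n, and n:

```latex
I_A = \frac{V_A - V_{jct}}{8n} \qquad
I_B = \frac{V_B - V_{jct}}{4n} \qquad
I_C = \frac{V_C - V_{jct}}{2n} \qquad
I_D = \frac{V_D - V_{jct}}{n}
```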
Next, we can apply Kirchhoff’s current law — a fancy term for “what comes in must come out”, a rule that tells us that the sum of currents flowing into and out of the junction on the right must balance out because electrons can’t vanish or materialize out of thin air. From Kirchhoff’s law, assuming no substantial output loading, IA + IB + IC + ID must be equal to 0 A.
Combining all five equations, we get:
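The combined equation (reconstructed to agree with the current equations above and the simplification steps that follow):

```latex
\frac{V_A - V_{jct}}{8n} + \frac{V_B - V_{jct}}{4n} + \frac{V_C - V_{jct}}{2n} + \frac{V_D - V_{jct}}{n} = 0
```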
To simplify the formula, we can move all fractions to a common denominator:
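With 8n as the common denominator, a plausible reconstruction of the missing equation is:

```latex
\frac{(V_A - V_{jct}) + 2\,(V_B - V_{jct}) + 4\,(V_C - V_{jct}) + 8\,(V_D - V_{jct})}{8n} = 0
```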
The next step is to multiply both sides by 8n to get rid of the long fraction, then gather all the common Vjct terms:
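After multiplying by 8n and collecting the fifteen Vjct terms, the reconstructed result is:

```latex
V_A + 2\,V_B + 4\,V_C + 8\,V_D - 15\,V_{jct} = 0
```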
With this done, it’s trivial to solve for Vjct:
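The solved form, reconstructed from the derivation above:

```latex
V_{jct} = \frac{V_A + 2\,V_B + 4\,V_C + 8\,V_D}{15}
```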
The voltages VA, VB, VC, and VD correspond to the digits of the binary number we’re attempting to convert, while the junction voltage is equal to Vout.
This equation tells us that each input bit influences the output voltage exactly as successive digits count in positional binary notation: the least significant bit counts the least, the next bit counts twice as much, etcetera. And indeed, if the binary input value is 0001, the formula predicts an output voltage equal to 1/15th of the supply; if the input is 1110, we get 14/15th of the supply.
The most significant issue with this DAC architecture is that the required resistor values quickly get impractical. To avoid high idle currents, the resistance attached to the most significant bit (MSB) can’t be too low; 1 kΩ is a sensible starting point. But then, for a basic 16-bit DAC, this puts the LSB resistor at 1 kΩ · 2¹⁵ ≈ 32.8 MΩ; for 24-bit resolution, we’d need tens of gigaohms. Precise resistances of this magnitude are difficult to manufacture on the die of an integrated circuit — doubly so if they need to have the same temperature coefficients.
R-2R DAC
A clever workaround for this issue is the R-2R DAC architecture:
This circuit is less intuitive than its predecessor, but it works in a similar way. To decipher the design, let’s start with the section at the bottom — the two horizontally-placed resistors for bit #0.
By the application of basic intuition (or a quick analysis with Kirchhoff’s current law), we can treat these two resistors as functionally equivalent to a single 1 kΩ resistor that’s connected to some synthetic, combined input voltage. This voltage is equal to 0 V if LSB = 0, and to Vsupply/2 if LSB = 1. In other words, we have a synthetic input signal equal to 50% of the value for bit #0.
If we make this substitution, we end up with the circuit shown below on the left. Further, because the bottom part now features two 1 kΩ resistors in series (red), they can be replaced with a single 2 kΩ resistor, giving us the simplified variant shown on the right:
At this point, notice that the situation for bit #1 on the new schematic is analogous to the original analysis we performed for bit #0. The bottom now features a 2 kΩ resistor connected to the corresponding binary input, plus a 2 kΩ resistor feeding from the previously-derived synthetic voltage. In effect, we get a 50% mix of both signals; no matter what’s going on above, we can substitute this portion with a single 1 kΩ resistor hooked up to yet another synthetic input that combines one half of bit #1 with one quarter of bit #0:
This process can continue; it should be clear that after the final iteration, we’re bound to end up with an output voltage that’s 50% (1/2) of bit #3, 25% (1/4) of bit #2, 12.5% (1/8) of bit #1, and 6.25% (1/16) of bit #0. Once again, this is a binary-weighted output scheme, this time with no need for high resistor values.
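The substitution argument lends itself to a quick numerical check. The sketch below folds the ladder from the bottom up, halving the running “synthetic” voltage at every rung (LSB first; the 5 V supply is assumed for illustration):

```python
# Walk the R-2R ladder bottom-up: each rung mixes the new bit 50/50 with the
# synthetic voltage accumulated so far, exactly as in the substitution argument.
def r2r_output(bits, vsupply=5.0):
    v = 0.0  # the terminating pull-down at the bottom of the ladder
    for bit in bits:  # least significant bit first
        v = (v + (vsupply if bit else 0.0)) / 2
    return v
```

For the input 1111 this returns 4.6875 V, i.e. 15/16 of the assumed 5 V rail, while a lone LSB contributes 1/16 — matching the weights derived above.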
For the R-2R DAC, a small departure from the ideal model is that if the input is 1111, the output is 15/16 Vsupply; this is because some of the upper range is lost to the initial pull-down resistor at the bottom of the ladder.
Although the designs of both DACs are fairly straightforward, the circuits have issues with linearity at higher resolutions, especially past 10-12 bits. This is because resistors can be made with accuracy as good as 0.1%, but in a 16-bit DAC, the influence of the LSB is supposed to be just 0.003% of MSB. If the MSB resistor deviates by 0.1% from the intended value, this is more than enough to throw the whole scheme badly out of whack.
Oversampling averaging DACs
The aforementioned linearity problem led to the development of the so-called oversampling averaging DACs. Such devices output lower-resolution pulses at a rate much higher than the intended signal frequency. Then, a lowpass filter on the output averages the pulses to produce a greater range of slower-changing intermediate voltages.
As a rudimentary illustration, we can consider a 1-bit DAC that produces a rapid train of zeroes and ones. In the first half of the experiment, the duty cycle of the signal is 20% (i.e., 1-0-0-0-0); in the second half, it shifts to 80% (1-1-1-1-0):
The performance of this circuit can be improved if the modulation scheme is revised to take into account the anticipated “error” in the capacitor charge state. For example, around the 200 ms mark, we’re trying to move from 1 V to 4 V; we know that the capacitor voltage will be initially badly off, so it’d be better to briefly switch to 100% duty cycle instead of blindly sticking to 80%. This error-compensating scheme is known as delta-sigma modulation and is shown below:
Note that this circuit converges on the target voltage much more quickly than before.
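The error-compensating behavior is easy to model in code. Below is a minimal first-order delta-sigma modulator sketch in the error-feedback form; the target level is expressed as a fraction of the supply (0..1), and the accumulator carries the quantization error forward:

```python
# First-order delta-sigma modulator (error-feedback form): output a 1 whenever
# the accumulated input would otherwise run ahead of the bits emitted so far.
def delta_sigma_bits(target, n):
    acc = 0.0
    bits = []
    for _ in range(n):
        acc += target            # "sigma": accumulate the requested level
        bit = 1 if acc >= 1.0 else 0
        acc -= bit               # "delta": subtract what was actually emitted
        bits.append(bit)
    return bits
```

The average of the stream tracks the target: a 0.75 target yields exactly three ones per four samples, with the accumulator preventing the error from ever building up.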
Although oversampling averaging DACs can be constructed with more than two high-frequency output levels, the benefit of the 1-bit architecture is its excellent linearity; the performance doesn’t depend on the relative values of output resistors and is influenced only by the accuracy of the timing signal. As it happens, accurate timing is much easier to achieve than ultra-precise resistors.
Another interesting property of the 1-bit design is that although rapid binary switching produces more noise, such high-frequency noise well above the DAC’s signal band is easier to filter out than the remnants of the slower-changing staircase pattern produced by conventional DACs.
The main trade-off of the oversampling DAC is that it’s inherently slower than other architectures; this is because the device needs multiple clock cycles to produce the correct output voltage, whereas a binary-weighted or R-2R DAC can do it in a single step.
Flash ADC
We’re now ready to cover analog-to-digital converters. Compared to going the other way round, converting analog voltages to binary numbers is a fairly involved affair. About the only practical way to make an instantaneous digital measurement of an input signal is to use one voltage comparator (an open-loop op-amp) for every quantization level desired, for example:
These reference voltages can be produced by a long, multi-resistor voltage divider that straddles the positive supply rail and the ground. The output from the ADC needs to be converted to a binary number representing the highest triggered comparator, but this is a pretty simple affair, and it’s usually handled by a digital circuit known as a priority encoder that we won’t be dwelling on here.
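In code, a flash conversion amounts to comparing the input against every reference tap at once and counting the tripped comparators. A 2-bit sketch with an assumed 5 V supply:

```python
# 2-bit flash ADC sketch: three comparators against evenly spaced reference
# taps; the priority encoding reduces to counting tripped comparators, since
# an ideal thermometer code is always contiguous from the bottom up.
def flash_adc(vin, vsupply=5.0, bits=2):
    n_refs = 2**bits - 1
    refs = [(i + 1) * vsupply / (n_refs + 1) for i in range(n_refs)]
    thermometer = [vin > ref for ref in refs]  # raw comparator outputs
    return sum(thermometer)  # encoded level: how many comparators tripped
```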
Such “flash” ADCs are sometimes used in specialized applications where speed is paramount, but the size of the circuit grows exponentially with the number of bits we want to capture; with it grow the chip’s power consumption, input capacitance, and so forth. For these reasons, such ADCs are usually not made for resolutions higher than 4-8 bits.
Pipelined subranging ADC
This brings us to what’s called a pipelined subranging ADC. Such an ADC uses a low-resolution (e.g., 3-bit) flash converter to resolve a couple of bits, and then an internal DAC to produce the corresponding quantized voltage that’s subtracted from the input signal. The result of this subtraction is called a “residue” and represents just the error between the initial coarse quantization and the true input value — i.e., the remainder that couldn’t be faithfully represented by the binary output from the first stage:
For a discussion of how to construct a subtractor (aka a difference amplifier), refer to this companion article.
In the above schematic, the residue represents the component of the signal that, in a hypothetical 6-bit ADC, would have ended up in the three less-significant bits of the digital reading. We only have three bits at this point; the first of the missing bits would represent one-half of the current quantization step (i.e., 1/16th of the supply voltage), the next bit would be one-fourth (1/32nd of the supply), and the last one would be one-eighth (1/64th).
In our pipelined design, this fractional information can still be recovered by amplifying the residue 8× and then feeding it to an identical ADC stage that quantizes this signal to resolve three additional bits. The process can be repeated a couple of times before the analog errors become too great; this type of ADC can usually achieve resolutions of 12-16 bits.
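A numerical sketch of a two-stage, 6-bit pipeline built from two idealized 3-bit stages (the function names and the 5 V reference are illustrative assumptions):

```python
# Idealized 3-bit quantizer: truncating, clamped to the 0..7 code range.
def coarse_3bit(v, vref):
    return max(0, min(7, int(v / vref * 8)))

# Two-stage pipelined subranging ADC: quantize coarsely, reconstruct with an
# ideal DAC, subtract to get the residue, amplify 8x, quantize again.
def pipelined_6bit(vin, vref=5.0):
    c1 = coarse_3bit(vin, vref)          # coarse bits
    residue = vin - c1 * vref / 8        # what the first stage couldn't express
    c2 = coarse_3bit(residue * 8, vref)  # fine bits, after 8x gain
    return (c1 << 3) | c2                # assemble the 6-bit result
```

With ideal components, the two cascaded 3-bit stages reproduce what a single 6-bit converter would read.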
Slope-based (“integrating”) ADC
The problem with flash and pipelined ADCs is that they depend on having a series of very accurate voltage references. A different approach is to use a single comparator combined with a reference signal that ramps up over time in a predictable way. A rudimentary example may be a capacitor that is being charged through a resistor; the time elapsed from the beginning of the charging process to the comparator triggering can be used to infer the unknown input voltage.
In practice, because the charging curve of a basic resistor-capacitor circuit is nonlinear, the reference signal is more commonly provided by an integrator. An integrator is a fairly interesting op-amp circuit that sums input voltages over time; a detailed analysis of the architecture can be found in this companion article.
In the most basic design, the time elapsed between the beginning of the charging process and the triggering of the comparator depends not only on the input voltage, but also on the exact slope of the reference signal, which in turn depends on internal resistances and capacitances. Because getting these components perfectly right is tough, we usually rely on a refined design called a dual-slope ADC. The idea is to expose the integrator to the input voltage for a preset time t1, and then measure the time t2 it takes to return back to the baseline once connected to a known reference signal.
Because both the charging time and the discharging time depend on the values of R and C in a perfectly symmetric way, the influence of these parameters cancels out and the unknown input voltage can be calculated just from the ratio of t2 to t1.
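The cancellation is easy to see in a short model. Here the integrator ramps up at slope vin/RC and back down at slope vref/RC; whatever RC value we plug in drops out of the recovered result (all component values below are made up for illustration):

```python
# Dual-slope model: charge at slope vin/RC for a fixed t1, then time the
# discharge at slope vref/RC. The RC product cancels in the final ratio.
def dual_slope_measure(vin, vref, t1, rc):
    peak = vin * t1 / rc   # integrator output at the end of the run-up phase
    t2 = peak * rc / vref  # time to ramp back down to the baseline
    return vref * t2 / t1  # recovered input: vref * (t2 / t1)
```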
Successive approximation register ADC
Slope-based ADCs are very accurate and exhibit low noise, but they tend to be slow. One way to improve their performance without inventing a completely new architecture is to employ a bit of digital trickery, known under the trade term of a successive approximation register (SAR).
A SAR ADC uses an internal DAC instead of an integrator to generate reference voltages. Instead of producing a linear ramp, it starts at the midpoint (Vsupply/2) to instantly determine if the input signal is above or below that threshold. If the comparator indicates that the input is higher, the ADC control logic can immediately rule out the entire lower half of the range and repeat the test at the midpoint of what remains (3/4 · Vsupply). The successive halvings of the search space allow a precise measurement to be achieved in just a couple of steps; the approach is functionally equivalent to the binary search algorithm that should be familiar to all computer science buffs.
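The search is compact enough to express directly; a sketch with an idealized internal DAC (10 bits and a 5 V reference are arbitrary choices):

```python
# SAR conversion: test one bit per step, MSB first, keeping each trial bit
# only if the DAC output for the trial code doesn't overshoot the input.
def sar_adc(vin, bits=10, vref=5.0):
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)
        if vin >= trial * vref / (1 << bits):  # comparator vs. internal DAC
            code = trial                       # keep the bit
    return code
```

Ten comparisons pin down one of 1,024 levels; e.g. a 2.5 V input lands on code 512, exactly mid-scale.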
The price to pay is some loss of precision due to DAC linearity errors, along with a bit of digital switching noise. Nevertheless, SAR converters often offer a sensible middle ground between fast but less accurate pipelined ADCs and very slow integrating ADCs, and they’re a common choice for the built-in converter on an MCU die.
Delta-sigma ADC
We’re not quite done yet: the most interesting trick in the ADC playbook is delta-sigma modulation. We first encountered the term in the context of DACs, but in the ADC world, it’s done in a considerably more wacky way.
In its most basic variant, a 1-bit delta-sigma ADC rapidly outputs a train of logic “0”s or “1”s. The momentary output of the ADC — zero or one — is then used as a part of an unusual feedback loop that computes the difference between this binary signal and the input voltage:
In most circumstances, the analog input is not equal to either of the two digital extremes, so the difference amplifier at the front section of the ADC (left) outputs a large momentary positive or negative error value.
These momentary, computed errors are then fed into an integrator. This component, as discussed earlier on, essentially sums the errors over time. If the input signal is more positive than the current binary output, the integrator’s output voltage gradually creeps up; if it’s more negative, the voltage sags.
Note that if the input voltage is ⅓ of Vsupply, the momentary error caused by a logical one is -⅔ · Vsupply, while the error caused by a logical zero is just +⅓ · Vsupply. It follows that to keep the error from growing, we’d need to output two zeroes for every one, i.e., a duty cycle of about 33%.
To close the feedback loop, the summed error is fed into the positive leg of the comparator that produces the actual output bit stream. In essence, if the summed error over time is positive — i.e., the ADC was previously outputting zeroes too often — the comparator is compelled to start producing “1”s. Conversely, if the summed error is negative (too many ones), the output stage starts producing “0”s:
Again, the circuit seeks to minimize the accumulated error, so the duty cycle of the pulse train changes in proportion to the input voltage.
Although delta-sigma might seem like a fairly unhinged approach to making precise signal measurements, the 1-bit stream produced by the comparator can be digitally processed to reconstruct the underlying analog input value. The simplest approach is just a standard moving average, although weighted averaging (most commonly, the sinc filter) will produce better results.
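Putting the loop and the decoder together in a little simulation (ideal components and a 5 V supply are assumed for illustration):

```python
# Idealized 1-bit delta-sigma ADC: the integrator sums the error between the
# input and the fed-back output bit; averaging the bit stream then recovers
# the input voltage.
def ds_adc_stream(vin, vsupply=5.0, n=8192):
    integ, fb, bits = 0.0, 0.0, []
    for _ in range(n):
        integ += vin - fb             # the "delta", then the "sigma"
        bit = 1 if integ > 0 else 0   # comparator closes the loop
        fb = bit * vsupply
        bits.append(bit)
    return bits

stream = ds_adc_stream(1.7)
estimate = sum(stream[-4096:]) / 4096 * 5.0  # plain moving-average decode
```

With a 1.7 V input the duty cycle settles near 34%, so the averaged estimate closely tracks the input.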
Best of all, because there are comparatively few potential sources of analog errors, the linearity is superb. On the flip side, to achieve reasonable precision, the clock used to operate the ADC must be much faster than the desired analog sample rate, so these ADCs are typically not as fast as SAR.
👉 For more articles on electronics, click here.
I write well-researched, original articles about geek culture, electronic circuit design, and more. If you like the content, please subscribe. It’s increasingly difficult to stay in touch with readers via social media; my typical post on X is shown to less than 5% of my followers and gets a ~0.2% clickthrough rate.