Clocks in digital circuits
How do electronics keep track of time - from RC oscillators, to quartz crystals, to phase-locked loops.
Clock signals are the backbone of modern computing. Their rising or falling edges synchronize CPU state transitions, assist in shifting bits in and out on data buses, and set the tempo for countless other digital housekeeping tasks.
Operating complex systems off a single clock is usually impractical: a computer mouse doesn’t need to run at the same speed as the CPU. A typical PC relies on dozens of timing signals, ranging from kilohertz to gigahertz, some of which aren’t synchronized to a common reference clock. The clocks originate in different parts of the system and are dynamically scaled for a variety of reasons: to conserve energy, to maintain safe chipset temperatures, or to support different data transmission rates.
In this article, we’ll have a look at how digital clock signals are generated, how they are divided and multiplied, and where they ultimately end up.
Clock sources: RC oscillators
The most common digital clock source is an RC oscillator: a simple device constructed out of a capacitor, a resistor that sets the capacitor’s charge and discharge rate, and a negative feedback loop that flips the charging voltage back and forth. Because all these components can be easily constructed on the die of an integrated circuit, RC oscillators are commonly found inside microcontrollers and CPUs.
A naïve implementation of a square-wave RC oscillator might rely on a single NOT gate as a feedback mechanism:
The idea is simple. Let’s assume that the capacitor is initially discharged, so the gate’s input is at a logic zero. This sends the output voltage to a logic one — and in turn kicks off the process of slowly charging the capacitor via R. Eventually, the capacitor voltage reaches the threshold for logic “1”, at which point the output should flip to zero, starting the inverse process of discharging the cap. Charge, discharge, rinse, repeat?
Well, not exactly: in practice, the circuit will not function correctly with a standard NOT gate. Most of the time, the underlying analog nature of digital circuitry will rear its ugly head, and the circuit will reach a very non-digital equilibrium at Vcap = Vout = Vdd/2. As it turns out, a NOT gate is just a trashy high-gain inverting amplifier in disguise!
With some nudging and Vdd tweaks, the setup might eventually start to oscillate, but because only infinitesimally small nudges will be needed to move back and forth between logic “0” and “1”, the oscillation will be a chaotic mess:
So, how do we fix this? A particularly elegant and intuitive solution is the following rail-to-rail op-amp circuit:
First, let’s have a look at the non-inverting input: with three identical R1 resistors, the voltage on this leg is a simple three-way average of 0 V, the supply voltage (Vdd), and whatever the output of the op-amp happens to be at a given time. In other words, Vin+ can only range from 1/3 Vdd (if Vout = 0 V) to 2/3 Vdd (if Vout = Vdd).
Now, let’s examine the inverting leg. Assume the capacitor is initially discharged, so Vin- = 0 V. Because Vin+ >> Vin-, the output of the op-amp jumps to its maximum output voltage, and the capacitor begins to charge. The situation continues until Vin- reaches Vin+ (which sits at 2/3 Vdd).
At that point, the output voltage of the op-amp decreases — perhaps not all the way down, but it drops a notch. This instantly pulls the Vin+ leg lower, and thus makes Vin- >> Vin+. This negative feedback loop causes Vout to plunge all the way down. The capacitor now begins to discharge — and will continue to do so for a while, because Vin+ is now sitting much lower, at 1/3 Vdd.
In effect, we have a binary circuit with hysteresis: the transition from “0” to “1” takes place at a much higher voltage than from “1” to “0”. With no stable equilibrium and two distant transition points, the arrangement functions as a good oscillator. The following oscilloscope plot shows the oscillator’s output voltage (yellow), along with the op-amp’s inputs: Vin- (blue) and Vin+ (pink).
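The two thresholds also pin down the oscillation frequency: each half-cycle is the time the RC network takes to traverse from 1/3 Vdd to 2/3 Vdd (or back), which works out to RC·ln(2). A quick sketch, assuming an ideal rail-to-rail op-amp and made-up component values:

```python
# Period of the hysteretic (1/3 Vdd .. 2/3 Vdd) relaxation oscillator.
# R and C values are made up for illustration.
import math

R = 10e3     # ohms
C = 100e-9   # farads

# Charging from Vdd/3 toward Vdd, stopping at 2*Vdd/3:
#   t = R*C * ln((Vdd - Vdd/3) / (Vdd - 2*Vdd/3)) = R*C * ln(2)
# The discharge half-cycle is symmetric, so:
period = 2 * R * C * math.log(2)
print(1 / period)  # ≈ 721 Hz with these values
```

Note that Vdd cancels out of the math entirely — a handy property, since the output frequency won’t wander with supply-voltage fluctuations.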
A similar kind of memory / hysteresis can be built into CMOS digital logic. Logic gates with this property are described as having “Schmitt trigger” inputs. One example is the 74HC14 Schmitt trigger inverter, which can be used to build a working version of the ill-fated NOT gate oscillator discussed earlier on:
Clock sources: piezoelectric materials
Although RC oscillators are easy to make, they are not particularly accurate: frequency fluctuations of about 5% should be expected in casual use. This is often good enough, but it gets in the way of precise timekeeping, and may interfere with some clock-sensitive protocols, such as USB.
The solution is a peculiar electromechanical device: a laser-trimmed piece of piezoelectric material — often quartz crystal — that changes its shape in response to a voltage applied to its terminals, then returns elastically to its original dimensions once the voltage is removed. Much like a kids’ swing, if you push it at its natural resonant frequency, it’s easy to get it going and reach a considerable amplitude. But if you time your pushes wrong, the swing won’t go very far, and you might get your teeth knocked out.
The following video illustrates the moment of hitting the resonant frequency of a quartz crystal. At that point, the crystal’s AC impedance is at its lowest, and there is no phase difference between the driving signal (yellow) and the current flowing through the device (blue):
Curiously, the behavior of the crystal around its resonant frequency is asymmetric. Delivering pulses a bit too slow has no dramatic effect, but delivering them slightly too fast hits what’s known as anti-resonance: a point where the impedance skyrockets and there’s very little current flowing through the device. The swing analogy is apt: if you grab the swing a tiny bit too late and hold it for a bit too long, all you’re doing is wasting some energy and artificially prolonging each period of oscillation. On the flip side, grabbing it too early ends in pain.
In between the two extremes, there’s also a region of phase reversal where the current through the crystal (swing speed) lags behind the applied voltage (motive force). The crystal briefly resembles an inductor, although it most certainly isn’t one:
The most logical way to build a crystal-driven oscillator would be to zero in on the region of minimum impedance. Yet, the most common architecture in digital circuits — the Pierce oscillator — settles on a slightly higher frequency and saves on components by exploiting some of the crystal’s inductor-like current lag. The practical implication is that if you purchase a crystal specified at its true (“series”) resonant frequency, it will run a tiny bit fast when connected to an MCU. The difference usually hovers around 100-300 ppm, which is nothing to sneeze at if you consider that the crystal’s usual accuracy is around 20 ppm.
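To put those ppm figures in perspective, parts-per-million errors translate directly into timekeeping drift. A back-of-the-envelope sketch:

```python
# Seconds of drift per day for a given frequency error in parts per million.
def drift_per_day(ppm):
    return ppm * 1e-6 * 86_400  # 86,400 seconds in a day

print(drift_per_day(20))   # a typical 20 ppm crystal: ~1.7 s/day
print(drift_per_day(300))  # a 300 ppm series-vs-Pierce mismatch: ~26 s/day
```

In other words, the mismatch alone can cost you nearly half a minute per day — an order of magnitude worse than the crystal’s own tolerance.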
Crystal oscillators can’t be easily manufactured on a silicon die, so they almost always come as discrete components, set apart by their shiny metal cans. It’s most common to encounter crystals between 1 and 40 MHz, although a peculiar value of 32.768 kHz is also a common sight (more about that soon).
Clock division
Modern computers operate at gigahertz frequencies, but such clock speeds are reserved only for a handful of system components, and only during peak demand. Gigahertz signals pose significant design challenges for larger circuits; plus, there’s no conceivable need for an audio chipset or a fan controller to run nearly that fast.
For this reason, it’s common for clock signals in digital circuits to be divided (“prescaled”) for specific subsystems. The simplest and oldest way of doing this is to cobble together a handful of logic gates and build a binary counter. This allows the signal to be divided by any power of two:
This brings us back to the curious case of the 32.768 kHz oscillator: it originated in the era of digital watches, where its signal could be trivially divided to get one-second ticks. Today, thanks to higher transistor densities and better power efficiency, arbitrary divisions can be accomplished with countdown circuits that start from a preprogrammed value and then reload the register after reaching zero. That said, the 32.768 kHz clock still crops up all over the place, a darling of circuit designers and software engineers alike.
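Both schemes are easy to sketch in a few lines of Python. The binary counter halves the frequency at every flip-flop stage — and 32,768 is exactly 2^15 — while the countdown approach (modeled here as a hypothetical divide() helper) handles arbitrary ratios:

```python
# Dividing a 32.768 kHz clock down to 1 Hz with a 15-stage binary counter:
F_IN = 32_768
STAGES = 15
print(F_IN / 2**STAGES)  # each flip-flop halves the rate -> 1.0 Hz

# A countdown ("reload") divider handles non-power-of-two ratios:
def divide(ticks, reload):
    """Count the output pulses produced over `ticks` input clock cycles."""
    count = reload
    pulses = 0
    for _ in range(ticks):
        count -= 1
        if count == 0:       # reached zero: emit a pulse, reload the register
            pulses += 1
            count = reload
    return pulses

print(divide(32_768, 512))  # one second's worth of input -> 64 pulses
```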
Clock multiplication
Clock division is not a cure-all; for one, it’s pretty hard to make traditional crystal oscillators with resonant frequencies above around 60 MHz. Although some shenanigans with harmonics can be pulled off, it follows that if a modern CPU needs to run off a stable clock while attaining gigahertz speeds, we might need to employ some clever trickery.
This trickery is accomplished with a circuit known as the phase-locked loop (PLL). A basic practical PLL design might look as follows:
The first stage is a phase detector, usually implemented as a couple of logic gates that test whether the feedback signal is leading or trailing the edges of the reference clock. Depending on the outcome of this comparison, the second stage — a charge pump — either slightly boosts or slightly reduces the charge stored on its capacitor. The third component is a voltage-controlled oscillator (VCO), which produces an output frequency in direct relation to the voltage presented on its input leg.
Despite their name, the bulk of what PLLs do is matching frequencies; phase matching is just a secondary perk. Consider that if the signals start in-phase but the VCO is running too slow, its edges will start trailing the reference clock, and the error detector will boost the voltage of the capacitor, speeding things up. In the opposite situation, the VCO will be leading the clock, and the detector will politely instruct the oscillator to calm down. If there’s a huge difference in phases or frequencies, the PLL might need a bit of time to stabilize, but it eventually does so.
Of course, a PLL constructed this way is not useful for clock multiplication; for that, the circuit requires a rather ingenious tweak:
The modification may seem counterintuitive, but it makes perfect sense. Let’s say the divider slashes the frequency of the feedback signal by a factor of two. Now, the phase error detector needs to push the VCO to twice the input clock frequency to obtain phase lock. Neat, huh?
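The divide-by-N arithmetic can be demonstrated with a toy simulation. The model below is a behavioral caricature — a PI controller stands in for the phase detector and charge pump, and every constant is invented — but it captures the essential effect: with a ÷N divider in the feedback path, the loop settles with the VCO running at N times the reference:

```python
# Toy, discrete-time model of a divide-by-N PLL. All constants are made up;
# a PI loop approximates the phase detector + charge pump behavior.
REF = 1.0e6           # reference clock, Hz
N = 8                 # feedback divider -> the loop should settle at 8 MHz
KVCO = 4.0e6          # VCO gain, Hz per volt
KP, KI = 0.04, 200.0  # loop gains, chosen for a well-damped response
DT = 1e-7             # simulation time step, seconds

v = 0.5                        # VCO control voltage (starts way off target)
ref_phase = fb_phase = 0.0     # accumulated phases, in cycles
err_integral = 0.0

for _ in range(20_000):            # simulate 2 ms
    err = ref_phase - fb_phase     # is the feedback leading or trailing?
    err_integral += err * DT
    v = 0.5 + KP * err + KI * err_integral
    f_vco = KVCO * v
    ref_phase += REF * DT
    fb_phase += (f_vco / N) * DT   # the divider slows the feedback edges

print(f_vco)  # settles near N * REF = 8 MHz
```

Real PLLs differ in the details — the charge pump is a bang-bang affair rather than a tidy PI controller — but the steady-state bookkeeping is the same: the loop is only satisfied when f_vco / N matches the reference.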
System-wide clock architectures
In the classical PC architecture, the processor’s job is to handle computation; most other tasks — from storing data, to drawing graphics on the screen, to handling I/O ports — are delegated to other chips. Although there is a trend toward greater integration, in such environments, you can usually find a mid-speed clock, perhaps in the vicinity of 100 MHz, distributed to various portions of the motherboard. This clock is then divided and multiplied to suit specific needs; for example, the CPU has its own multiplier, a matter of great interest to overclocking enthusiasts. Of course, there is also a fair number of embedded controllers and external peripherals that operate with their own free-running clocks, talking to the main system via slower, explicitly-clocked serial I/O.
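The multiplier arithmetic itself is trivial; the numbers below are hypothetical but representative of a modern desktop part:

```python
# Hypothetical base-clock-times-multiplier arithmetic for a desktop CPU.
BASE_CLOCK_MHZ = 100   # the mid-speed clock distributed on the motherboard
CPU_MULTIPLIER = 45    # per-core ratio; the knob overclockers like to turn
print(BASE_CLOCK_MHZ * CPU_MULTIPLIER)  # -> 4500 MHz core clock
```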
In microcontroller and system-on-a-chip environments, the level of integration is typically much greater. A typical 32-bit MCU will have an internal RC clock source, along with a Pierce oscillator that can be hooked up to an external crystal if desired. A number of internal PLLs and clock dividers will be available to generate higher frequencies for the CPU core, and lower frequencies for flash memory or for USB:
In embedded systems, the MCU clock is seldom distributed to external components; host-generated clock signals might be provided as a part of buses such as SPI or I2C, but they are supplied intermittently, so devices that need to do anything in between transmissions need a clock of their own. As a practical example, an SPI DRAM chip such as APS1604M-3SQR will have an internal RC oscillator to handle periodic memory refresh. The same goes for most simple display modules.
Of course, the marketplace of clock sources is a bit of a rabbit hole on its own.
There are temperature-compensated and oven-controlled crystal oscillators (TCXO and OCXO), the latter reaching parts-per-billion accuracy (but also costing a lot). There are ceramic resonators, a bit cheaper than crystals and a bit less bad than RC circuits; they're still poor for timekeeping, but OK for applications such as USB or Ethernet. There are integrated oscillator modules that might contain an amplifier and a programmable divider / PLL - maybe a bit less relevant now that the same functionality is commonly available on MCU and CPU dies. And there are micromachined MEMS devices, which generally don't perform any better or worse than quartz, but might be available in slightly smaller packaging.