Clocks in digital circuits
How do electronics keep track of time - from RC oscillators, to quartz crystals, to phase-locked loops.
Clock signals are the backbone of modern computing. Their rising or falling edges synchronize CPU state transitions, assist in shifting bits in and out on data buses, and set the tempo for countless other digital housekeeping tasks.
Operating complex systems off a single clock is usually impractical: a computer mouse doesn’t need to run at the same speed as the CPU. A typical PC relies on dozens of timing signals, ranging from kilohertz to gigahertz, some of which aren’t synchronized to a common reference clock. The clocks originate in different parts of the system and are dynamically scaled for a variety of reasons: to conserve energy, to maintain safe chipset temperatures, or to support different data transmission rates.
In this article, we’ll have a look at how digital clock signals are generated, how they are divided and multiplied, and where they ultimately end up. The write-up assumes some familiarity with concepts such as inductors and op-amps; if you need a quick refresher, start here and here.
Clock sources: RC oscillators
The most common digital clock source is an RC oscillator: a simple device constructed out of a capacitor, a resistor that sets the capacitor’s charge and discharge rate, and a negative feedback loop that flips the charging voltage back and forth. Because all these components can be easily constructed on the die of an integrated circuit, RC oscillators are commonly found inside microcontrollers and CPUs.
A naïve implementation of a square-wave RC oscillator might rely on a single NOT gate as a feedback mechanism:

The idea is simple. Let’s assume that the capacitor is initially discharged, so the gate’s input is at a logic zero. This sends the output voltage to a logic one — and in turn kicks off the process of slowly charging the capacitor via R. Eventually, the capacitor voltage reaches the threshold for logic “1”, at which point the output should flip to zero, starting the inverse process of discharging the cap. Charge, discharge, rinse, repeat?
Well, not exactly: in practice, the circuit will not function correctly with a standard NOT gate. Most of the time, the underlying analog nature of digital circuitry will rear its ugly head, and the circuit will reach a very non-digital equilibrium somewhere around Vcap = Vout = Vdd/2. At that point, any slight increase in Vout puts the input voltage higher, which in turn causes Vout to drop. This is a linear negative feedback loop that prevents oscillation.
Because of the very high gain of the NOT gate, with some nudging (e.g., with inadequate decoupling or a noisy power supply), the setup might eventually start to oscillate, but because only tiny nudges will be needed to move back and forth between logic “0” and “1”, the oscillation will be a chaotic mess:
A linear circuit like this could be fixed by making sure that there is a roughly half-wavelength delay between the output signal and what appears back on the input leg of the inverter. This way, the excursions from Vdd/2 at the desired frequency would start adding constructively, producing larger and more coherent swings over time — a resonance.
In theory, such an oscillator could be constructed by just adding another series resistor and a shunt capacitor in the feedback loop, in effect forming a pair of RC lowpass filters, each of which can contribute a phase shift of up to one-fourth of a wavelength of a sine wave (90°). That said, the 90° shift is achieved as the frequency approaches infinity, so the oscillation frequency would be very high and poorly-specified. To build a more practical phase-shift oscillator, we’d typically string together a higher number of filters in series. For example, we could use three stages providing 60° shifts each; this shift happens at a well-defined frequency that can be computed with relative ease.
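For the idealized case of buffered stages, the numbers are easy to check. (Real unbuffered RC ladders load each other, which shifts the math a bit; the component values below are arbitrary examples.)

```python
import math

# Phase lag of a single first-order RC lowpass at frequency f:
#   phi = atan(2 * pi * f * R * C)
# For three identical, buffered stages, each must contribute 60 degrees,
# which happens where 2*pi*f*R*C = tan(60 deg) = sqrt(3).
R = 10e3       # 10 kOhm (example value)
C = 10e-9      # 10 nF (example value)

f_osc = math.sqrt(3) / (2 * math.pi * R * C)
phi = math.degrees(math.atan(2 * math.pi * f_osc * R * C))
print(f"{f_osc:.0f} Hz at {phi:.0f} deg per stage")  # ~2757 Hz, 60 deg
```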
Straightforward nonlinear solutions are also possible. A particularly elegant circuit is the following rail-to-rail op-amp design:
First, let’s have a look at the non-inverting input: with three identical R1 resistors, the voltage on this leg is a simple three-way average of 0 V, the supply voltage (Vdd), and whatever the output of the op-amp happens to be at a given time. In other words, Vin+ can only range from 1/3 Vdd (if Vout = 0 V) to 2/3 Vdd (if Vout = Vdd).
Now, let’s examine the inverting leg. Assume the capacitor is initially discharged, so Vin- = 0 V. Because Vin+ >> Vin-, the output of the op-amp jumps to its maximum output voltage, and the capacitor begins to charge. The situation continues until Vin- reaches Vin+ (which sits at 2/3 Vdd).
At that point, the output voltage of the op-amp decreases — perhaps not all the way down, but it drops a notch. This instantly pulls the Vin+ leg lower, and thus makes Vin- >> Vin+. This positive feedback loop causes Vout to plunge all the way down. The capacitor now begins to discharge — and will continue to do so for a while, because Vin+ is now sitting much lower, at 1/3 Vdd.
In effect, we have a binary circuit with hysteresis: the transition from “0” to “1” takes place at a much higher voltage than from “1” to “0”. With no stable equilibrium and two distant transition points, the arrangement functions as a good oscillator. The following oscilloscope plot shows the oscillator’s output voltage (yellow), along with the op-amp’s inputs: Vin- (blue) and Vin+ (pink).
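Assuming an ideal rail-to-rail output and the 1/3 Vdd and 2/3 Vdd thresholds set by the three R1 resistors, the oscillation period is easy to derive: each half-cycle is an exponential charge or discharge between the two thresholds, taking RC·ln(2). A quick sketch with made-up component values:

```python
import math

# Period of the op-amp relaxation oscillator, assuming an ideal
# rail-to-rail output and thresholds of Vdd/3 and 2*Vdd/3.
R = 100e3      # feedback resistor to the capacitor (example value)
C = 100e-9    # timing capacitor (example value)

# Charging from Vdd/3 toward Vdd, stopping at 2*Vdd/3:
#   t = R*C * ln((Vdd - Vdd/3) / (Vdd - 2*Vdd/3)) = R*C * ln(2)
# The discharge half-cycle takes the same time by symmetry, so:
period = 2 * R * C * math.log(2)
print(f"f = {1 / period:.1f} Hz")  # ~72 Hz for these values
```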
A similar kind of memory / hysteresis can be built into CMOS digital logic. Logic gates with this property are described as having “Schmitt trigger” inputs. One example is the 74HC14 Schmitt trigger inverter, which can be used to build a working version of the ill-fated NOT gate oscillator discussed earlier on:
Clock sources: piezoelectric materials
Although RC oscillators are easy to make, they are not particularly accurate: most capacitors are manufactured with a tolerance of 10-20% to begin with, and then drift further with temperature and aging. With some care, accuracy down to 5% can be achieved; this is good enough for some purposes, but it gets in the way of precise timekeeping, and may interfere with some clock-sensitive protocols, such as USB.
The solution is a peculiar electromechanical device: a laser-trimmed piece of piezoelectric material — often quartz crystal — that produces a coherent electrostatic field on its surface when squeezed. Piezoelectric materials are used to generate sparks in some push-button igniters, but more importantly, the effect is symmetric: the application of an electrostatic field can cause the material to contract and then elastically return to the original dimensions once the voltage is removed. This leads to a couple of other niche applications, including ultra-miniature but wimpy motors used in camera lenses, or high-frequency buzzers used in smoke alarms and ultrasonic range sensors.
Piezoelectric crystals also make good resonators. Much like a kids’ swing, if you send pulses that contract the crystal at its natural resonant frequency, it’s easy to get it going and reach a considerable amplitude of motion without applying a whole lot of force. But if you time your pushes wrong, the swing won’t go very far, and you might get your teeth knocked out.
The following video illustrates the moment of hitting the resonant frequency of a quartz crystal. At that point, the crystal’s AC impedance is at its lowest — that is, a given driving voltage (yellow) achieves the maximum current swing (blue):
At that exact point, there’s also little phase difference between the driving voltage and the resulting current measured across the device.
Curiously, the behavior of the crystal around its resonant frequency is asymmetric. Delivering pulses a bit too slow causes a more gradual reduction of amplitude than delivering them slightly too fast. In the latter case, we quickly hit what’s known as anti-resonance: a point where the impedance skyrockets and there’s very little current flowing through the device. The swing analogy helps understand what’s going on: if you push the swing a tiny bit too late (i.e., as it starts moving away from you), you’re not going to transfer energy as efficiently but it’s not a big deal. But if you push it too early — while it’s moving toward you — you’re actually braking it. Do it a couple of times and the swinging will stop.
In addition to changes in impedance, we can also observe shifts in phase between the applied voltage and the resulting current. Most of the time, when the crystal is not oscillating, it’s electrically just a capacitor: two metal plates and some non-conductive layer sandwiched in between. It follows that across most of the frequency range, there’s a capacitor-like -90° phase shift where the current peaks are one-fourth of a cycle ahead of the voltage peaks.
That said, between the resonant and anti-resonant frequency — in the “braking” region — the relationship is briefly reversed. In effect, for a moment, the crystal behaves a bit like an inductor, even though it isn’t one in any conventional sense:
Perhaps the most logical way to build a crystal-driven oscillator would be to zero in on the region of minimum impedance (the dip on the top plot, marked with the dashed line). Yet, the most common architecture used in digital circuits — the Pierce oscillator — works differently. It zeroes in on the region where the crystal exhibits a specific amount of inductor-like current lag. The target phase shift is typically in the vicinity of 90°, somewhere on the upward slope to the right of the dashed line.
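If you want to play with these curves yourself, the standard equivalent-circuit model of a crystal — a series RLC “motional” arm in parallel with the package capacitance — reproduces both the dip and the peak. The component values below are illustrative, roughly in the right ballpark for an ~8 MHz crystal:

```python
import cmath
import math

# Equivalent circuit: series RLC "motional" arm, in parallel with the
# shunt/package capacitance C0. Values are illustrative, not from any
# specific datasheet.
Rm, Lm, Cm = 8.0, 14e-3, 27e-15
C0 = 5.6e-12

def z_crystal(f):
    w = 2 * math.pi * f
    z_motional = Rm + 1j * w * Lm + 1 / (1j * w * Cm)
    z_shunt = 1 / (1j * w * C0)
    return z_motional * z_shunt / (z_motional + z_shunt)

# Sweep around the expected resonance and find the impedance extremes.
freqs = [8.17e6 + 100 * i for i in range(500)]
mags = [abs(z_crystal(f)) for f in freqs]
f_series = freqs[mags.index(min(mags))]   # impedance dip: resonance
f_anti = freqs[mags.index(max(mags))]     # impedance peak: anti-resonance

# Between the two, the phase of Z flips positive: inductor-like behavior.
f_mid = (f_series + f_anti) / 2
phase = math.degrees(cmath.phase(z_crystal(f_mid)))
print(round(f_series), round(f_anti), round(phase, 1))
```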
The Pierce oscillator isn’t explained clearly on Wikipedia or on any other webpage I know of, but the basic architecture is as follows:
We can start with the inverter: it does just what the name implies. For steady sine waves, inverting the signal is the same as creating a 180° phase shift between the input and output voltages.
For simplicity, the circuit usually isn’t constructed with a well-behaved op-amp. Instead, it uses a basic, high-gain NOT gate, akin to the initial experiment outlined in this article. This explains the resistor on top (Rlin): it’s simply a kludge to achieve semi-reasonable linearity when using the wrong component for the job. The resistor provides a negative feedback mechanism, adding the inverted signal to the input and thus reducing the NOT gate’s gain.
The next portion of the circuit is the series Rser resistor connected to a shunt capacitor (C1); together, they form an RC lowpass filter. The components are chosen so that the circuit operates well above the lowpass cutoff frequency, thus adding close to a 90° voltage phase shift between the output of the inverter and the right terminal of the crystal.
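To get a feel for how far above the cutoff the circuit needs to sit, here’s the phase lag of an ideal first-order RC lowpass at a few multiples of its cutoff frequency fc = 1/(2π·Rser·C1):

```python
import math

# Phase lag of an ideal first-order RC lowpass at f = ratio * fc.
# The lag only approaches 90 degrees well above the cutoff, which is
# why the components are chosen so the oscillation sits far past fc.
for ratio in (1, 3, 10, 30):
    lag = math.degrees(math.atan(ratio))
    print(f"f = {ratio:>2} x fc: {lag:.1f} deg")
```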
Finally, the crystal acting as an inductor and charging a shunt capacitance C2 is just another (LC) lowpass filter capable of adding a signal shift of about 90°. The key point is that the second filter can work only if the frequency of the signal falls within the narrow inductor-like region of the crystal.
As discussed earlier, a linear inverter with its output looped back onto its input will not oscillate; the loop provides negative feedback and forces the amplifier to settle at some midpoint. To exhibit stable oscillations, another 180° needs to be added by external components to make this a positive feedback loop. This is what happens with the two series filters. Further, because the crystal’s phase shift gets dramatically out of whack even with a very slight change in frequency, oscillations at any other frequencies are pretty effectively suppressed.
I should note that this explanation isn’t quite right: the RC filter and the LC filter aren’t isolated from each other, so there are some parasitic interactions between these stages; for an analysis of a similar scenario, you can review this article. That said, as long as the impedance of the second stage is comparable to or higher than that of the first, the simplified model is pretty accurate.
The main reason we use this architecture is that it requires the minimum number of external components: the inverter and the resistors can be placed on the MCU die, so in most cases, you only need two small, picofarad-range capacitors in addition to the crystal itself.
Because Pierce oscillators operate slightly above the crystal’s true (“series”) resonant frequency, some attention must be paid when shopping for components. Most crystals are specified for use with Pierce oscillators, but some may be characterized at their series frequency — and will run a tiny bit fast when connected to an MCU. The difference usually hovers around 100-300 ppm, but this is nothing to sneeze at if you consider that the crystal’s usual accuracy is around 20 ppm.
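The magnitude of this offset can be estimated with the standard frequency-pulling formula, which relates the loaded frequency to the series-resonant one via the crystal’s motional and shunt capacitances. The values below are made up but plausible datasheet figures:

```python
# Standard "pulling" formula for a crystal run against a load
# capacitance CL (as in a Pierce oscillator), relative to its
# series-resonant frequency:
#   f_load = f_series * (1 + Cm / (2 * (C0 + CL)))
# Cm and C0 normally come from the crystal's datasheet; these are
# illustrative values only.
Cm = 10e-15    # motional capacitance, 10 fF
C0 = 3e-12     # shunt (package) capacitance, 3 pF
CL = 18e-12    # load capacitance specified for the oscillator
f_series = 8e6

f_load = f_series * (1 + Cm / (2 * (C0 + CL)))
ppm = (f_load - f_series) / f_series * 1e6
print(f"{ppm:.0f} ppm above series resonance")  # ~238 ppm
```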
Crystal oscillators can’t be easily manufactured on a silicon die, so they almost always come as discrete components, set apart by their shiny metal cans. It’s most common to encounter crystals between 1 and 40 MHz, although a peculiar value of 32.768 kHz is also a common sight (more about that soon).
Clock sources: MEMS
In recent years, microelectromechanical systems (MEMS) oscillators have emerged as an alternative to quartz crystals in some applications. Similarly to quartz, they rely on a precision-tuned mechanical resonator, although the vibrations are induced in a different way.
The devices don’t offer any significant advantages or disadvantages compared to quartz, except for making it easier to put active control circuitry on the same die as the resonant structure itself. This saves a tiny bit of PCB space and allows internal adjustments to output frequency (see below), so MEMS clock sources tend to crop up in high-density applications such as smartphones or smart watches.
Clock division
Modern computers operate at gigahertz frequencies, but such clock speeds are reserved only for a handful of system components, and only during peak demand. Gigahertz signals pose significant design challenges for larger circuits; plus, there’s no conceivable need for an audio chipset or a fan controller to run nearly that fast.
For this reason, it’s common for clock signals in digital circuits to be divided (“prescaled”) for specific subsystems. The simplest and oldest way of doing this is to cobble together a handful of logic gates and build a binary counter. This allows the signal to be divided by any power of two:
This brings us back to the curious case of the 32.768 kHz oscillator: it originated in the era of digital watches, where its signal could be divided using a binary counter to get one-second ticks (32,768 / 2¹⁵ = 1). Today, thanks to higher transistor densities and better power efficiency, arbitrary divisions can be accomplished with countdown circuits that start from a preprogrammed value and then reload the register after reaching zero. That said, the 32.768 kHz clock still crops up all over the place, a darling of circuit designers and software engineers alike.
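Both division schemes are easy to sketch in software. The helper functions below are just illustrative behavioral models, not anything hardware-specific:

```python
def binary_divider(ticks, stages):
    """Binary counter: one output edge per 2**stages input edges."""
    return ticks // (2 ** stages)

def countdown_divider(ticks, n):
    """Countdown circuit: reload a register with n, pulse at zero."""
    pulses, counter = 0, n
    for _ in range(ticks):
        counter -= 1
        if counter == 0:
            pulses += 1
            counter = n       # reload the preprogrammed value
    return pulses

print(binary_divider(32768, 15))   # one second's worth of a 32.768 kHz
                                   # clock through 15 stages -> 1 tick
print(countdown_divider(1000, 3))  # arbitrary divide-by-3 -> 333 pulses
```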
Clock multiplication
Clock division is not a cure-all; for one, it’s pretty hard to make traditional crystal oscillators with resonant frequencies above around 60 MHz. Although some shenanigans with harmonics can be pulled off, it follows that if a modern CPU needs to run off a stable clock while attaining gigahertz speeds, we might need to employ some clever trickery.
This trickery is accomplished with a circuit known as the phase-locked loop (PLL). A basic practical PLL design might look as follows:
The first stage is a phase detector, usually implemented as a couple of logic gates that test whether the feedback signal is leading or trailing the edges of the reference clock. Depending on the outcome of this comparison, the detector either slightly boosts or slightly reduces the charge of a capacitor in the charge pump. The third component is a voltage-controlled oscillator (VCO), which produces an output frequency in direct relation to the voltage presented on its input leg.
Despite their name, the bulk of what PLLs do is matching frequencies; phase matching is just a secondary perk. Consider that if the signals start in-phase but the VCO is running too slow, its edges will start trailing the reference clock, and the error detector will boost the voltage of the capacitor, speeding things up. In the opposite situation, the VCO will be leading the clock, and the detector will politely instruct the oscillator to calm down. If there’s a huge difference in phases or frequencies, the PLL might need a bit of time to stabilize, but it eventually does so.
Of course, a PLL constructed this way is not useful for clock multiplication; for that, the circuit requires a rather ingenious tweak:
The modification may seem counterintuitive, but it makes perfect sense. Let’s say the divider slashes the frequency of the feedback signal by a factor of two. Now, the phase error detector needs to push the VCO to twice the input clock frequency to obtain phase lock. Neat, huh?
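Here’s a toy, linearized behavioral model of the arrangement. A real charge-pump PLL operates in discrete pulses; the phase detector and charge pump below are approximated with a proportional-integral control law, and all constants are arbitrary:

```python
# Toy behavioral model of a PLL with a divide-by-N feedback path.
N = 2                  # feedback divider; output should lock to N * f_ref
f_ref = 1.0            # reference frequency, arbitrary units
k_vco = 1.0            # VCO gain: output frequency per control volt
k_p, k_i = 2.0, 1.0    # "loop filter" gains (arbitrary, chosen for damping)
dt = 0.01

ref_phase = vco_phase = v_int = 0.0
for _ in range(20_000):
    fb_phase = vco_phase / N        # the divider scales the feedback phase
    err = ref_phase - fb_phase      # phase detector output
    v_int += k_i * err * dt         # charge pump accumulates the error
    f_vco = k_vco * (v_int + k_p * err)
    ref_phase += f_ref * dt
    vco_phase += f_vco * dt

print(round(f_vco, 3))              # settles at N * f_ref = 2.0
```

Set N = 1 and the same loop simply locks the VCO to the reference; the divider is the only thing that turns it into a multiplier.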
System-wide clock architectures
In the classical PC architecture, the processor’s job is to handle computation; most other tasks — from storing data, to drawing graphics on the screen, to handling I/O ports — are delegated to other chips. Although there is a trend toward greater integration, in such environments, you can usually find a mid-speed clock, perhaps in the vicinity of 100 MHz, distributed to various portions of the motherboard. This clock is then divided and multiplied to suit specific needs; for example, the CPU has its own multiplier, a matter of great interest to overclocking enthusiasts. Of course, there is also a fair number of embedded controllers and external peripherals that operate with their own free-running clocks, talking to the main system via slower, explicitly-clocked serial I/O.
In microcontroller and system-on-a-chip environments, the level of integration is typically much greater. A typical 32-bit MCU will have an internal RC clock source, along with a Pierce oscillator that can be hooked up to an external crystal if desired. A number of internal PLLs and clock dividers will be available to generate higher frequencies for the CPU core, and lower frequencies for flash memory or for USB:

In embedded systems, the MCU clock is seldom distributed to external components; host-generated clock signals might be provided as a part of buses such as SPI or I2C, but they are supplied intermittently, so devices that need to do anything in between the transmissions would need a clock of their own. As a practical example, an SPI DRAM chip such as APS1604M-3SQR will have an internal RC oscillator to handle periodic memory refresh. The same goes for many ADCs, display modules, and so on.
👉 Continue to the next MCU-related project: OLED Sokoban fun. To review the entire series of articles on digital and analog electronics, visit this page.
I write well-researched, original articles about geek culture, electronic circuit design, algorithms, and more. In this day and age, it’s increasingly difficult to reach willing readers via social media and search. If you like the content, please subscribe!
Of course, the marketplace of clock sources is a bit of a rabbit hole on its own.
In addition to standard xtals, there are temperature-compensated and oven-controlled variants (TCXO and OCXO), the latter reaching parts-per-billion accuracy (but also costing a lot). Then, there are ceramic resonators, made from the same material as piezoelectric transducers; these are cheaper than quartz and less bad than RC circuits - good enough for applications such as USB or Ethernet. Finally, there are integrated modules that might contain an amplifier and a programmable divider / PLL.