Primer: core concepts in electronic circuits
Back to the basics: defining key concepts in electronics without breaking out a plumbing wrench.
I didn’t kick off this site with a specific set of topics in mind, but for one reason or another, electronics emerged as an audience favorite. At the same time, because we didn’t follow any specific curriculum, I think I might have left out an important stepping stone for some of the less seasoned hobbyists: a brief discussion of key concepts used to describe what’s going on in a circuit in the first place.
Today, I’d like to close this gap with a couple of crisp definitions that stay clear of flawed hydraulic analogies, but also don’t get bogged down by differential equations or complex number algebra. The article is meant for readers who have a general sense of how circuits work — but don't always know why a resistor or a capacitor is needed in a particular place.
If you need a primer on the nature of electricity, start here first.
Current (I)
Current is the flow of electric charges through a point in the circuit. Its unit — the ampere (A) — is defined as the travel of about 6.24 × 10¹⁸ elementary charges per second through a particular spot. That exact number, known as one coulomb, isn’t worth memorizing; the value is chosen to neatly relate the ampere to other SI units of measure, but it’s not used in any day-to-day circuit analysis tasks.
The “elementary charge” in this definition is typically the electron. Resist the urge to picture the motion of electrons as analogous to the movement of solids or liquids through a pipe. For one, the process doesn't involve particles bumping into each other to transmit kinetic energy; instead, the drift of electrons is mediated at a distance through electromagnetic fields that propagate through free space. The mobile electrons themselves are confined to the conductor, but the fields are not.
Resistance (R)
Resistance is the opposition to the flow of steady current. Although this can happen in a couple of ways, the most prosaic mechanism is that all common materials impede the movement of electrons in a way that resembles friction. The wasted energy excites the medium and is then dissipated as heat.
The unit of resistance is the ohm (Ω). It can be defined as the resistance of a conductor that, when subjected to a current of one ampere, produces one watt of heat.
A resistor is a simple component that exhibits well-defined (and often high) resistance. It is commonly made from carbon powder mixed with binder, or from thin metal film.
Voltage (V)
Voltage is the measure of electromotive force between two points in a circuit. It can be thought of as a pressure difference in the electron gas. In most circumstances, it’s what would cause a current to flow if you bridged the two points with a metal wrench.
The unit of voltage — the volt (V) — corresponds to the electromotive force needed to induce a current of one ampere through a resistance of one ohm. The force can come from electrochemical reactions inside a battery, an electrostatic field in a capacitor, and so on. In all cases, voltage is expressed as a delta between two points; if one of the measurement endpoints is not specified, the reference is usually the circuit’s negative supply rail.
From the definitions outlined so far, it follows that there is a linear relationship between the voltage applied to a section of a circuit, its apparent resistance, and the current flowing through it. In other words, if you know two of the values, you can trivially calculate the third. For example, you can figure out the supply voltage from the observed current (I) through a known resistance (R):
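V = I · R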
Similarly, for a known supply voltage and resistance, the resulting current is given as:
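I = V / R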
Finally, if you can measure the applied voltage and the resulting current, the resistance of an unknown component is:
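R = V / I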
This equation, in any of its three forms, is known as Ohm's law.
Capacitance (C)
Capacitance measures the ability of a component to store electric energy, most commonly in an internal electric field. Capacitors are specialized components that allow a considerable field to build up between two closely-spaced but electrically insulated plates. The components don’t conduct steady currents, but a brief charging current can flow with the application of an external electromotive force.
A capacitor that allows a charge of one coulomb to accumulate on its plates when it’s subjected to one volt is said to have a capacitance of one farad (F). The farad is a big unit; commonly-encountered capacitors range from picofarads (pF) to microfarads (µF).
Earlier on, we said that the flow of one coulomb per second is defined as one ampere. So, here’s another way to look at it: a 1 F capacitor, when supplied with a constant 1 A current for one second, will be charged to 1 V. The component will reach a higher voltage if the current is higher or the charging period is longer; on the flip side, the voltage will rise more slowly if the capacitance is higher. In an ideal capacitor, there's a simple linear relationship that can be written down as:
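V = I · t / C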
Equivalently, we can rearrange the equation to determine the charging current if we know the elapsed charging time and the resulting voltage across the terminals:
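I = C · V / t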
What’s true for the overall charging process is also true for shorter time slices; if the voltage applied to the terminals of a capacitor changed by Δv in some time period Δt, the same math can be used to determine the resulting momentary charging current (i) that must’ve flowed during that time slice:
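i = C · Δv / Δt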
To understand how the device behaves in practice, we can build a toy discrete-time model of a capacitor. Let's assume C = 100 µF and see what happens when we provide a 50 Hz sine wave signal across the capacitor’s terminals.
Now, when we talk about “signals” in electronic circuits, we almost always mean a well-controlled voltage that varies over time. This simulation will be no exception: my code approximates a sine-wave input voltage, calculated with a resolution of Δt = 1 ms. The program takes this input signal, computes Δv in relation to the previous time slice, and then calculates the momentary capacitor charging current (i) as per the formula above:
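The original listing isn’t reproduced here, but a minimal Python sketch of such a discrete-time model — with the peak amplitude of the sine wave assumed to be 5 V, a value consistent with the currents quoted below — might look like this:

import math

C = 100e-6    # capacitance (farads)
F = 50        # input signal frequency (Hz)
V_PEAK = 5.0  # assumed peak amplitude of the input sine wave (volts)
DT = 1e-3     # simulation resolution (seconds)

def v_in(t):
    """Input signal: a sine wave of frequency F and amplitude V_PEAK."""
    return V_PEAK * math.sin(2 * math.pi * F * t)

# Step through one full period in 1 ms increments, computing the momentary
# charging current i = C * Δv / Δt for every time slice.
peak_i = 0.0
steps = round(1 / (F * DT))

for n in range(1, steps + 1):
    dv = v_in(n * DT) - v_in((n - 1) * DT)
    i = C * dv / DT
    peak_i = max(peak_i, abs(i))

print(f"Peak charging current: {peak_i * 1000:.0f} mA")

Changing F to 10 re-creates the second scenario discussed below, with the printed peak dropping to roughly 30 mA.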
The calculated current appears to be a sine wave too, although it’s shifted in phase: the value is the highest when the voltage is changing the fastest, which happens to be around 0 V in the input signal. Conversely, the lowest current corresponds to the peaks of the voltage waveform.
In the above simulation, the current swings about 150 mA. Next, let's re-run the model, lowering the input frequency to 10 Hz:
In this scenario, the current peaks around 30 mA — a five-fold reduction compared to what we observed at 50 Hz. This makes sense: the rate of change for the input signal (Δv/Δt) increases linearly with frequency; from the earlier formula (i = CΔv/Δt), the current ought to follow suit. We can observe the same proportional current change if we keep the frequency constant but increase capacitance (C).
In other words, capacitive elements impede the flow of currents in a frequency-dependent way. As noted earlier, a capacitor placed in series with a DC signal essentially blocks it out; it allows more to pass through as the frequency picks up. The magnitude of this effect — known as capacitive reactance — is described for a given sine wave frequency f by the following formula:
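X_C = 1 / (2πf · C)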
For the first simulation we performed (f = 50 Hz), the value works out to ~30 Ω; in the second one (f = 10 Hz), it jumps to ~160 Ω. In both cases, the theoretical values track with the currents observed in the experiment.
⚠️ What follows is the rough derivation of the X_C formula. It’s not difficult, but you can skip ahead to the next bolded paragraph if you’re in a hurry or if you’re allergic to math.
If you’re curious how to arrive at the formula in a more principled way, let’s start with the 2π part next to f; together, this is called “angular frequency” and sometimes shortened to ω. It has a pretty simple explanation: to model the response of a capacitor to a sine wave signal, we first need to model that sine signal itself. If we have a timing variable t measured in seconds, it would be tempting to simply write the following time-domain formula:
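v(t) = sin(f · t)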
But ignoring units, in standard mathematical notation, the period of this expression is not going to be 1 second; instead, it’ll take around 6.28 seconds for the waveform to repeat.
To explain why, it’s useful to look at the high school definition of sine: it’s the ratio of the opposite (the vertical side) to the hypotenuse (the long side) of a right triangle for a chosen angle.
For simplicity, we can decide to always draw the hypotenuse with a fixed length of 1. If so, the value of the sine function becomes equal to the length of the vertical edge of the triangle (i.e., its height) — division by 1 doesn’t do anything. Further, in this setting, the far end of the hypotenuse will draw a circle with a radius of 1 as the angle progresses from 0 to 360°:
The prevailing convention in college-level math and in computer programming is to express the parameter of a sine function not as degrees, but as the distance traveled by point A along the circumference of this unit circle. That is to say, we write 360° — the function’s full period — as 2π (the formula for circumference is 2πr and r = 1). This unit of angle is known as the radian.
The convention is useful in some contexts but confusing here, so to fix the glitch and get a 1 Hz signal, we need a scaling factor in the time-domain equation; the complete formula to generate a sine wave of a desired frequency f is:
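v(t) = sin(2πf · t)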
In itself, that still doesn’t explain much; how come the 2πf part “survives” in the X_C formula, but the sine function is nowhere to be seen?
Notionally, the answer to that calls for black-box calculus, but we can get to a reasonable place if we make two common-sense observations. First, we can reassert that the slope of the sine function is the highest when its value crosses zero:
This is plain to see, but a slightly more rigorous explanation comes from the aforementioned fact that the value of sin(t) can be modeled as the y coordinate of a point traveling along a unit circle at a constant angular speed. When the point is crossing the horizontal axis, the path along the circle is essentially vertical — and thus the rate of change of the function value (“y-velocity”) is at its peak:
In fact, that rate of change is described by a related trigonometric function, the cosine. The cosine looks the same as sine, except it peaks where the sine crosses zero (and vice versa); it’s what we observed in the discrete-time simulation. This relationship has a pretty nice visual proof involving triangles. Consider the case of a vanishingly small but non-zero change in the parameter of the sin(t) function — i.e., Δt so tiny that the corresponding section of the circle is indistinguishable from a straight line:
In any case, back to the topic at hand! The other observation is that the slope of the y = sin(x) curve at the zero crossings is +/- 45°. This is a consequence of the same property: at these exact moments, the distance traveled along the circumference of the circle is equal to the distance traveled in y. In the same (very) local view as used for the cosine proof, the sine function is briefly behaving just like y = +/- x.
The first assertion — about velocity — means that the zero-crossing points of the voltage waveform necessarily correspond to the peak current through the capacitor. One such crossing happens at t = 0, so to find the maximum current, we can look at the time slice between t = 0 and t = 0 + Δt. Again, Δt is just a stand-in for a “tiny but non-zero time interval”:
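i_peak = C · Δv / Δt = C · (Vpeak · sin(2πf · Δt) − Vpeak · sin(2πf · 0)) / Δt = C · Vpeak · sin(2πf · Δt) / Δt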
In the equation, Vpeak is the maximum swing of the waveform we’re simulating; the value doesn’t matter, as the term will cancel out down the line, but I’m including it for completeness.
Now, let’s go back to the second assertion: that the slope at zero-crossing is exactly +/- 45°. Again, this means that in the immediate vicinity of the crossing point, y = sin(x) behaves the same as y = +/- x, “copying” its input to output. The slope changes as soon as you get any substantial distance away — but if Δt is infinitesimal, we can just pretend the sine isn’t there:
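i_peak ≈ C · Vpeak · (2πf · Δt) / Δt = 2πf · C · Vpeak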
Looks like we’re getting somewhere! The final step is to go back to the earlier formula calculating resistance from the applied voltage and the resulting current flowing through a component. For well-behaved sine (or cosine) waveforms, we can extend the same rule to calculate the resistance-like effect from the calculated peak current (and known peak voltage):
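X_C = Vpeak / i_peak = Vpeak / (2πf · C · Vpeak) = 1 / (2πf · C)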
⚠️ If you were trying to skip the math, resume here.
Like resistance, capacitive reactance is measured in ohms, and has a superficially similar effect on alternating, sine-wave currents and voltages. That said, it is a distinct physical phenomenon that involves the storage of energy, rather than its loss.
Another important distinction is that in an ideal resistor, the current through the device is always in lockstep with the applied voltage. In a capacitor, as noted earlier, the current is at its peak when Δv is the highest; for sine waves, this happens when the input voltage is near 0 V:
The result is that the current appears to be one-fourth of a cycle "ahead" of the applied sine-wave voltage. This phase offset is important when considering the cumulative effect of resistance and reactance in a circuit; although the units are identical, a naive sum will usually not do. Again, we'll cover the rules for commingling these quantities in a second article.
Inductance (L)
Inductance measures the ability of a component to resist changes in the current flowing through it. It’s normally associated with the creation of magnetic fields, especially in ferromagnetic materials. The fields soak up energy when the current is ramping up, and then keep pushing electrons for a while when any external electromotive force disappears.
The unit of inductance — one henry (H) — corresponds to the behavior of a device that, when subjected to a current rising at a rate of 1 A per second, opposes this (rather leisurely!) change by developing 1 V across its terminals. The effect is symmetrical, so if the current plunges, the voltage dips negative as the inductor is trying to sustain the flow.
As with the farad, the henry is a big unit; common inductors range from nanohenrys (nH) to millihenrys (mH).
Similarly to capacitors, inductors impede the flow of alternating currents in a frequency-specific way. A series capacitor blocks DC and attenuates low frequencies; a series inductor does the opposite: it attenuates fast-changing signals while letting steady currents through.
For inductors, the magnitude of this effect is quantified by the following formula:
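X_L = 2πf · L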
In essence, it says that an inductor opposes alternating currents proportionally to their sine frequency and the component’s own rated value. The underlying math is equivalent to what we did for capacitors.
As before, this reactance is measured in ohms, but it’s not exactly the same as resistance. Similarly to a capacitor, an inductor causes the voltage across the device and the current through it to get out of phase — although in this instance, the voltage rises first, and the current catches up 1/4th of a cycle later.
Impedance (Z)
“Impedance” is a common shorthand for the opposition to the flow of current that arises from the combination of resistance, capacitive reactance, and inductive reactance. Once again, because the effects are not necessarily aligned in phase, the overall impedance is not a simple sum of all three. That said, most of the time, the term is used as a stand-in for a scalar value representing the dominant of the three quantities.
Perhaps confusingly, the same term is also sometimes used to loosely categorize signal sources and loads. A low-impedance source is one that can deliver substantial currents. Conversely, a high-impedance one can deliver very little juice before the signal ends up getting distorted in some way. In the same vein, a low-impedance load is power-hungry, and a high-impedance one is not.
The final abuse of the term is the concept of characteristic impedance, as discussed in the earlier article on signal reflections. It is relevant only when dealing with signal lines that are long in proportion to signal wavelength (i.e., not often, unless you're working on an amateur radio rig or dealing with gigahertz-range signal frequencies). For well-behaved conductors, this parameter has the following relation to the conductor’s measured series inductance and parallel capacitance:
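Z₀ = √(L / C)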
Again, I'm including this for completeness, but characteristic impedance is not something to think about day-to-day.
Of course, one could include a host of other formulas and laws in this article. That said, they all build on the same common foundations — and with the right mental model in place, the rest should be relatively easy to grasp.
👉 For a continuation of the discussion of impedance, see this article. For a catalog of other articles on electronics, click here.