Primer: core concepts in electronic circuits
Back to the basics: defining key concepts in electronics without breaking out a plumbing wrench.
I didn’t kick off this site with a specific set of topics in mind, but for one reason or another, electronics emerged as an audience favorite. At the same time, because we didn’t follow any specific curriculum, I left out an important stepping stone for some of the less seasoned hobbyists: a discussion of key concepts used to describe what’s going on in a circuit in the first place.
Today, I’d like to close this gap with a couple of crisp definitions. We'll start with time-invariant characteristics: current, voltage, and resistance. From there, we'll proceed to capacitance, inductance, and sine-wave reactance. We’ll intuitively derive the formulas without leaning on prior knowledge of calculus.
The article is meant for readers who have a general sense of how circuits work — but don't always know why a resistor or a capacitor is needed in a particular place. If you need a more fundamental explanation of what electricity is, start here first.
Current (I)
Current is the measure of the flow of electricity through a point in the circuit. Its unit — the ampere (A) — is defined as the travel of about 6.24 × 10¹⁸ elementary charges per second through a particular spot. This underlying charge count is known as one coulomb (unit symbol: C).
To make it formal, we can write the following formula for current in relation to the amount of charge (Q) in coulombs that passes through a wire in a given time window (t):
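With I denoting the current in amperes:

```latex
I = \frac{Q}{t}
```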
The “elementary charge” that’s moving in electronic circuits is typically the electron. If you’re iffy on the nature of electricity and the physics of conduction, it might be helpful to pause and review this article first.
As for the number of elementary charges that makes up one coulomb, it isn’t worth memorizing; the value is basically a “fudge factor” in the International System of Units that lets us neatly relate the units we use in electronics to other SI measures.
Voltage (V)
Voltage is the measure of the electromotive force. Colloquially speaking, it’s the maximum amount of work that some external source (e.g., a battery) is willing to put into shuffling electrons across its terminals. Alternatively, it’s the “electron pressure difference” that would cause a current to flow if you dropped a metal wrench across two points in the circuit.
In all cases, voltage is always measured between two spots; if one of the measurement endpoints is not specified, the reference is usually the circuit’s negative supply rail.
To define the force more precisely, we can say that the unit of voltage — the volt (V) — measures the amount of kinetic energy that would be imparted per each electron pushed onto its merry way. This “per electron” scaling is important: voltage doesn’t measure the overall amount of energy stored in a device such as a battery. Instead, it describes how much of a push the chemical reaction inside can give to charge carriers as they're sent on their way.
To make the unit a bit more practical, we usually don’t do the math for single electrons; instead, we define the volt in terms of released energy (E) measured in joules, divided by the amount of moved charge in coulombs (Q):
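In symbols:

```latex
V = \frac{E}{Q}
```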
The joule is not an everyday unit, so for reference, there are about 4,200 joules in a “food calorie” (kcal), while a gallon of gas delivers roughly 132 million joules. More properly, one joule (unit symbol: J) is the energy expended every second when generating one watt of power. The watt (unit symbol: W) should be familiar to most readers: it’s the rate at which work is performed or energy is dissipated. An idling laptop has a heat output of around 20-50 W, while a typical electric space heater puts out about 1,500 W.
Most of the time, it’s not particularly important to know how to convert between volts and joules or watts. What we discussed above is the formal definition — but in many cases, the volt can be thought of as a “base” unit of electromotive force, roughly a measure of “how big a spark would this cause”. We’ll discuss two more useful if less formal ways of thinking about volts a bit later in the article.
Footnote: power formula
About the only common use for the volt-to-watt conversion is that we sometimes want to calculate the amount of energy consumed by an electronic circuit in proportion to the applied voltage and the current flowing through. To do this, we simply need to rearrange and then combine the two formulas introduced earlier on in this article:
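From the definition of voltage, E = V · Q; from the definition of current, Q = I · t. Combining the two:

```latex
E = V \cdot Q = V \cdot I \cdot t
```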
This lets us calculate the number of joules dissipated in a given time window (t). Next, recall that the unit of power — one watt — is equal to one joule per second. If we plug the energy formula shown above into the power equation, the t variable cancels out, so power is just proportional to V times I:
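Dividing the energy by time:

```latex
P = \frac{E}{t} = \frac{V \cdot I \cdot t}{t} = V \cdot I
```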
Resistance (R)
Resistance is the time-invariant opposition to the flow of steady current. All common materials impede the movement of electrons in a way that vaguely resembles friction or aerodynamic drag. The wasted energy is then radiated away as heat.
Most of the time, reasonably-sized copper conductors have resistance low enough to be negligible. When we want to impede currents in a well-controlled way, we use resistors. Resistors are simple electronic components with fine-tuned resistance; they are usually made from a mixture of carbon powder and binder or from thin metal film.
The unit of resistance is the ohm (Ω). It can be defined as the behavior of a resistor that, when subjected to an electromotive force of one volt, allows exactly one ampere of current to flow through. The relationship between resistance, voltage, and current is governed by the following formula:
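With V in volts and I in amperes:

```latex
R = \frac{V}{I}
```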
In most materials, resistance stays constant; that said, it is possible to build devices where it varies depending on factors such as temperature, incident light, or the applied voltage.
The equation can be rearranged in a couple of useful ways. For example, you can figure out the supply voltage from the observed current (I) through a known resistance (R):
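Solving for voltage:

```latex
V = I \cdot R
```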
In fact, this is another useful (if somewhat circular) way to think about voltage: it’s the potential difference you can measure when you send one amp through a resistance of 1 Ω.
Similarly, for a known supply voltage and resistance, the resulting current is given as:
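Solving for current:

```latex
I = \frac{V}{R}
```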
These three equations, in any form, are known as Ohm's law.
Footnote: power dissipation in a resistor
The formula for power (P = IV) can be combined with Ohm’s law to calculate the amount of heat dissipated in a resistor that’s subjected to specific voltage:
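Substituting I = V/R into P = IV:

```latex
P = V \cdot I = V \cdot \frac{V}{R} = \frac{V^2}{R}
```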
This is known as Joule’s law, and it gives us a third informal definition of one volt: it's the electromotive force that, if applied to a 1 Ω resistor, would dissipate one watt of heat.
The solution for a specific current is analogous: P = I²R. Note that in both cases, we end up with the voltage or current squared. This is significant: it means that a relatively small increase of V or I can produce a much higher increase in heat dissipation and possibly set some components on fire.
Capacitance (C)
Capacitance measures the ability to store potential energy, most commonly in the capacitor’s internal electrostatic field.
The process of charging a capacitor involves the application of an external electromotive force that shuffles electrons between two conductive plates separated by a thin insulating film. This makes one of the plates positively charged and the other one negatively charged. Due to the proximity of the plates, the resulting electrostatic field is largely confined to the gap and partly cancels out from other vantage points. Because of this, the “pushback” force is diminished and a fairly substantial amount of charge carriers can be moved by a relatively modest voltage.
A capacitor that allows a charge of one coulomb to accumulate on its plates when it’s subjected to one volt is said to have a capacitance of one farad (F):
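In symbols:

```latex
C = \frac{Q}{V}
```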
The farad is a big unit; commonly-encountered capacitors range from picofarads (pF) to microfarads (µF).
Earlier on, we said that the flow of one coulomb per second is defined as one ampere. So, here’s another way to look at capacitors: a 1 F capacitor, when supplied with a constant 1 A current for one second, will develop 1 V across its terminals. There's a simple linear relationship that can be written down as:
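In symbols, for a charging time t:

```latex
V = \frac{I \cdot t}{C}
```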
Equivalently, we can rearrange the equation to determine the charging current if we know the elapsed time and the final voltage:
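Solving for current:

```latex
I = \frac{C \cdot V}{t}
```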
This might seem like a useless equation: it tells us nothing about the peak current or how the current might have ramped up or down over time. It just gives us an average current that would deliver the correct amount of charge over the time t.
But what’s true for the overall charging process is also true for shorter time slices; if the voltage applied to the terminals of a capacitor changed by Δv in some time period Δt, the same math can be used to determine the resulting momentary charging current (i) that must’ve flowed during that time slice:
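In the notation used for momentary values:

```latex
i = C \cdot \frac{\Delta v}{\Delta t}
```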
This gives us all we need to build a toy discrete-time model of a capacitor. Let's assume C = 100 µF and see what happens when we provide a 50 Hz sine wave signal across the capacitor’s terminals.
Now, when we talk about “signals” in electronic circuits, we almost always mean a well-controlled voltage that varies over time. This simulation will be no exception: my code approximates a sine-wave input voltage, calculated with a resolution of Δt = 1 ms. The program takes this input signal, computes Δv in relation to the previous time slice, and then calculates the momentary capacitor charging current (i) as per the formula above:
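A minimal sketch of such a discrete-time model in Python. The 5 V amplitude is an assumption on my part — the text doesn’t state it — but it lands close to the ~150 mA swing quoted further down:

```python
import math

C = 100e-6    # capacitance: 100 uF
DT = 1e-3     # time resolution: 1 ms
V_PEAK = 5.0  # assumed sine amplitude in volts (not specified in the text)

def capacitor_currents(freq, duration=0.1):
    """Discrete-time model of a capacitor: i = C * dv/dt, slice by slice."""
    steps = int(duration / DT)
    prev_v = 0.0
    currents = []
    for n in range(1, steps + 1):
        # Sample the input sine wave at this time slice...
        v = V_PEAK * math.sin(2 * math.pi * freq * n * DT)
        # ...and compute the momentary charging current from delta-v.
        currents.append(C * (v - prev_v) / DT)
        prev_v = v
    return currents

for freq in (50, 10):
    peak = max(abs(i) for i in capacitor_currents(freq))
    print(f"{freq} Hz: peak current = {peak * 1000:.0f} mA")
```

Under these assumptions, the model yields a peak of roughly 150 mA at 50 Hz and roughly 30 mA at 10 Hz, matching the observations discussed next.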
The calculated current appears to be a sine wave too, although it’s shifted in phase: the value is the highest when the voltage is changing the fastest, which happens to be around 0 V in the input signal. Conversely, the lowest current corresponds to the peaks of the voltage waveform.
In the above simulation, the current swings about 150 mA. Next, let's re-run the model, lowering the input frequency to 10 Hz:
In this scenario, the current peaks around 30 mA — a five-fold reduction compared to what we observed at 50 Hz. This makes sense: for a nice and smooth sine wave, the rate of change for the input signal (Δv/Δt) increases linearly with frequency; from the earlier formula (i = CΔv/Δt), the current ought to follow suit. We can observe the same proportional current change if we keep the frequency constant but increase capacitance (C).
In other words, capacitive elements impede the flow of currents in a frequency-dependent way. As noted earlier, a capacitor placed in series with a DC signal essentially blocks it out; it allows more to pass through as the frequency picks up.
Capacitive reactance (XC)
The relationship between the amplitude of the applied sine signal and the resulting capacitor current is known as capacitive reactance. Fundamentally, it is similar to resistance in that it represents the ratio between peak voltage and peak current: XC = Vpeak / Ipeak. That said, unlike resistance, we’ve shown that the peaks of current and voltage are not in phase. Further, per the earlier discussion, the value of XC isn’t an inherent property of the component; it depends on capacitance, but also on the frequency of the applied sine waveform.
In most introductory textbooks, you’re given the following prepackaged formula for capacitive reactance:
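With f as the sine frequency in hertz:

```latex
X_C = \frac{1}{2 \pi f C}
```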
The formula matches our experiments. For the first simulation we performed (f = 50 Hz), the value works out to ~30 Ω; in the second one (f = 10 Hz), it jumps to ~160 Ω; in other words, these theoretical calculations track with the observations. That said, it’s worth pausing for a moment and figuring out where this funky equation comes from.
⚠️ What follows is an accessible derivation of the formula. The genesis of the equation is interesting and helps build intuition; that said, if you’re not interested, you can skip ahead to the next bolded paragraph.
Fundamentally, the XC formula seems plausible, but one of the more inexplicable aspects of it is the 2π part next to f. So, let’s figure out where that comes from.
Step I: the making of the sine wave. To reason about a capacitor subjected to a sine wave, it helps to have a mathematical model of the underlying waveform. If we have a timing variable t measured in seconds, it would be tempting to write the following:
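A natural first attempt (leaving the amplitude aside for the moment) would be:

```latex
v(t) = \sin(t)
```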
That said, in standard mathematical notation, the period of this expression is not going to be 1 second; instead, it’ll take around 6.28 seconds for the waveform to repeat.
To explain why, it’s useful to look at the common high school definition of sine: it’s the ratio of the opposite (the vertical side) to the hypotenuse (the long side) of a right triangle for a chosen angle.
To arrive at a definition with fewer moving parts, we can also draw the hypotenuse with a fixed length of 1. If so, the value of the sine function becomes equal to the length of the vertical edge of the triangle (i.e., its height) — division by 1 doesn’t do anything. Further, in this setting, the far end of the hypotenuse will draw a circle with a radius of 1 as the angle progresses from 0 to 360°:
The prevailing convention in college-level math and in computer programming is to express the parameter of a sine function not in degrees, but in terms of the distance traveled by point A along the circumference of this unit circle (marked in purple in the plot). That is to say, we write 360° — the function’s full period — as 2π (the formula for circumference is 2πr, and r = 1). This representation of angles is known as radians.
The convention is useful in some contexts, but it messes up simple formulas like the one above. To fix the glitch and get a 1 Hz signal out of our time-domain function, we need a scaling factor in the equation to speed up time; the complete formula to generate a sine wave of a desired frequency f is:
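With the scaling factor in place:

```latex
v(t) = \sin(2 \pi f \cdot t)
```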
In itself, this doesn’t actually explain much; it sort of hints why the 2πf part appears in the equation, but how come it persists if the actual sine function is nowhere to be seen? Well, that’s where it gets interesting!
Step II: general observations about sines. To answer this question, let’s start by reasserting that the slope of the sine function is the highest when its value crosses zero:
This is plain to see, but a slightly more rigorous explanation comes from the fact that the value of sin(t) can be modeled as the y coordinate of a point traveling along a unit circle at a constant angular speed. When the point is crossing the horizontal axis, the path along the circle is essentially vertical — and thus the rate of change of the function value (“y-velocity”) is at its peak:
In fact, at this very moment, it appears that the y-increment is the same as the angle increment — i.e., the value of the function changes at the same rate as the input parameter.
This is still a bit hand-wavy; a more precise (and very useful!) answer is that the rate of change is described by a related trigonometric function, the cosine. The cosine looks the same as sine, except it peaks where the sine crosses zero (and vice versa); it’s what we observed in the discrete-time simulation.
This surprising relationship has a neat visual proof involving triangles. Consider the case of a vanishingly small change in the parameter of the sin(t) function — i.e., Δt so tiny that the corresponding section of the circle is indistinguishable from a straight line:
We won’t be introducing all the formalisms of mathematical analysis, but there are two roughly equivalent ways to think about this view: as a dynamic process where the time slice tends to zero; or as a static situation where the increment is an infinitely small (infinitesimal) quantity.
Either way, as should be clear from the illustration, in this microscopic view, the momentary rate of change for the value of the sine function — that is, the length of the P segment representing the increment of the function value, divided by the length of the Q segment representing the change in its parameter along the unit circle — is the same as the cosine of the timing variable (t).
This gives us what we need to figure out where to look for the peaks…
Step III: the first sketch of peak current. To calculate reactance, we’re trying to find peak waveform voltage and divide it by peak current. Let’s assume the peak voltage (Vpeak) is known and let’s derive Ipeak from that.
Because the charging current is proportional to the rate of change in voltage, the earlier assertion that the rate of change is the highest at zero crossings is quite useful. One such crossing happens at t = 0, so to find the maximum current, we can just look at the time slice between t = 0 and t = 0 + Δt. Again, Δt is just a stand-in for a “vanishingly small time interval”:
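Plugging the sine-wave formula into i = CΔv/Δt over that slice (and noting that sin(0) = 0, so Δv is simply the value of the waveform at Δt):

```latex
I_{peak} = C \cdot \frac{\Delta v}{\Delta t} = C \cdot \frac{V_{peak} \cdot \sin(2 \pi f \cdot \Delta t)}{\Delta t}
```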
Because the sin(…) function runs from -1 to 1, I introduced the Vpeak multiplier to represent the maximum swing of the actual waveform.
We have the first substantive result, but the formula for Ipeak doesn’t look great; it has some sort of a complex dependency on Δt. Loosely speaking, it seems stuck in the realm of infinitesimal time steps.
Step IV: getting rid of the sine. To fix this, we need to circle back to the second assertion about sine waves: that the slope of the y = sin(x) curve at the zero crossings is exactly +/- 45°. We can rely on the intuitive visualization discussed earlier on, or we can lean on the proof that the rate of change of sin(x) is cos(x). Since cos(0) = 1, this implies that in that very moment, P (the increment in the function value) is equal to Q (the increment in its parameter).
This finding means that in the immediate vicinity of the crossing point, y = sin(x) behaves the same as y = +/- x. The function just “copies” its input to output. The slope changes as soon as you get any substantial distance away from t = 0 — but if we’re solving for that exact point, and if Δt is vanishingly small, we can just pretend the sine isn’t there:
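With sin(x) replaced by x:

```latex
I_{peak} = C \cdot \frac{V_{peak} \cdot 2 \pi f \cdot \Delta t}{\Delta t} = 2 \pi f \cdot C \cdot V_{peak}
```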
Neat! At this point, we have everything we need to calculate reactance as the ratio of Vpeak to Ipeak, analogously to how we calculate resistance from Ohm’s law:
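Dividing the two:

```latex
X_C = \frac{V_{peak}}{I_{peak}} = \frac{V_{peak}}{2 \pi f \cdot C \cdot V_{peak}} = \frac{1}{2 \pi f C}
```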
⚠️ If you were trying to skip the math, resume here.
Again — like resistance, capacitive reactance is measured in ohms, and has a superficially similar effect on alternating, sine-wave currents and voltages. That said, it is a distinct physical phenomenon.
In an ideal resistor, the current through the device is always in lockstep with the applied voltage. In a capacitor, as noted earlier, the current is at its peak when Δv is the highest; for sine waves, this happens when the input voltage is near 0 V:
In effect, XC calculates the ratio of two peak values, but these peaks are never reached at the same time: for a repetitive sine waveform, the current peak always comes one-fourth of a cycle “ahead” of the voltage. This is important when considering the cumulative effect of resistance and reactance in a circuit; although the units are identical, a naive sum will usually not do. Again, we'll cover the rules for commingling these quantities in a second article.
Inductance (L)
Inductance (L) measures the ability of an inductor to store “kinetic” energy as a consequence of the current flowing through. The energy is stored in an internal magnetic field. An inductor exhibits an inertia-like effect that opposes changes to current; the behavior is conceptually similar to the effort needed to accelerate or slow down a moving object. Higher inductance is akin to dealing with an object of greater mass.
Inductors are usually made out of a length of wire wrapped around a ferromagnetic core. The flow of current is inseparably linked to a magnetic field. In ferromagnetic materials, this field couples to microscopic, randomly-oriented magnetic domains. The coupling prevents the field from ramping up or down until some energy is expended to change the alignment of a number of magnetic domains. This manifests as a “pushback” electromotive force proportional to how quickly the current is changing (Δi/Δt) and to the component’s rated inductance (L):
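In symbols:

```latex
v = L \cdot \frac{\Delta i}{\Delta t}
```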
The unit of inductance — one henry (H) — corresponds to the behavior of a device that, when subjected to a current rising at a rate of 1 A per second, opposes this (rather leisurely!) change by developing 1 volt across its terminals. Again, the Newtonian analogue is the pushback force you feel when attempting to accelerate a given mass.
As with the farad, the henry is a big unit; common inductors range from nanohenrys (nH) to millihenrys (mH).
To develop a better understanding of how an inductor responds to voltage-based signals, we can reorder the formula to solve for the increment of current (Δi):
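Solving for Δi:

```latex
\Delta i = \frac{v \cdot \Delta t}{L}
```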
In other words, in a given unit of time, the current increases in proportion to the applied voltage and inversely proportionally to the component’s rated value.
Similarly to capacitors, inductors impede the flow of alternating currents in a frequency-specific way. A series capacitor blocks DC and attenuates low frequencies; a series inductor does the opposite: it attenuates fast-changing signals while letting steady currents through.
Inductive reactance (XL)
The effect of inductors on sine waves is described using a parameter similar to resistance and capacitive reactance, representing the ratio of voltage to the current passed through.
For inductors, the magnitude of this effect is quantified by the following formula:
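With f as the sine frequency in hertz:

```latex
X_L = 2 \pi f L
```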
In essence, this says that the amplitude of currents through the inductor is reduced proportionally to sine frequency and the component’s own rated value. The equation can be derived in a manner fairly similar to what we did for capacitors.
As before, this reactance is measured in ohms, but it’s not exactly the same as resistance. Similarly to a capacitor, an inductor causes voltages and currents across the device to get out of phase — although in this instance, the voltage rises first, and the current catches up 1/4th of a cycle later.
Impedance (Z)
“Impedance” is a common shorthand for the opposition to the flow of current that arises from the combination of resistance, capacitive reactance, and inductive reactance. Once again, because the effects are not necessarily aligned in phase, the overall impedance is not a simple sum of all three. That said, most of the time, the term is used as a stand-in for a scalar value representing the dominant of the three quantities.
Perhaps confusingly, the same term is also sometimes used to loosely categorize signal sources and loads. A low-impedance source is one that can deliver substantial currents. Conversely, a high-impedance one can deliver very little juice before the signal ends up getting distorted in some way. In the same vein, a low-impedance load is power-hungry, and high-impedance one is not.
Note: there’s also another abuse of the term, known as “characteristic impedance”. This is discussed in the earlier article on signal reflections and is relevant only when dealing with signal lines that are long in proportion to signal wavelength (i.e., not often, unless you're working on an amateur radio rig or on GHz-range signals).
👉 For a continuation of the discussion of impedance, see this article. For additional notes about real-world capacitors and inductors, see here. For a catalog of other articles on electronics, click on this link.
I write well-researched, original articles about geek culture, electronic circuit design, and more. If you like the content, please subscribe. It’s increasingly difficult to stay in touch with readers via social media; my typical post on X is shown to less than 5% of my followers and gets a ~0.2% clickthrough rate.
It is probably worth noting that in electronic parlance, the concept of resistance may sometimes be used to refer to devices that dissipate energy in ways other than heat; from the circuit's perspective, all that matters is that the flow of current is impeded and that the phase shift is 0°.
In the same vein, reactance doesn't need to be a product of capacitors and inductors. Again, from a designer's point of view, the physics are unimportant; all that matters is that sine signals are impeded with a phase shift of +90° or -90°. In particular, there are some electromechanical devices that, electrically, look like inductors or capacitors, even though the energy is not stored in electromagnetic fields.