My approach to teaching electronics
Explaining the reasoning behind my series of articles on electronics -- and asking for your thoughts.
On occasion, I quip about the way we teach electronics to hobbyists. We start with hydraulic analogies that compare electricity to water running through pipes. I think it’s a pretty bad teaching tool that overstays its welcome, but don’t close the tab just yet: today’s article isn’t going to be about that.
My more substantive gripe is that even as the hydraulic model starts springing leaks, we don’t offer better explanations of inductors, p-n junctions, or voltage sources. Instead, we jump straight to cryptic formulas and ready-made circuit recipes. “Here’s a non-inverting op-amp with a gain of 1 + R1/R2”, “here’s an RC circuit with a cutoff frequency of 1/(2πRC)”, “here’s a run of wire with a characteristic impedance of Z0 = √(L/C)”. Where do all these ideas and equations come from? “Oh, it’s grownup calculus, don’t you worry your pretty little head.”
I detest this approach. Electronics is by far the most popular and accessible of STEM hobbies; it’s one of the few crafts you can experiment with in an urban apartment without spending big bucks or running afoul of zoning laws. To nourish it, we ought to offer a curriculum that lets determined hobbyists gain deeper insights without needing a college degree.
Of course, complaining on the internet is worthless, so I decided to do a bit better: I embarked on a mission to develop and popularize simple explanations of many of the more opaque axioms one needs to understand to get ahead. For example:
The behavior of capacitors and inductors: I explore the origins of the XC = 1/(2πfC) and XL = 2πfL equations that underpin much of analog signal processing. The usual textbook derivation of these formulas involves Laplace transforms or phasors. You don’t need any of that!
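As a quick sanity check (the component values below are arbitrary, not from the article), the capacitor formula can be verified numerically from first principles: drive a capacitor with a sine wave, compute the current from i = C·dv/dt, and compare the voltage-to-current ratio against 1/(2πfC):

```python
import math

# Numerically check X_C = 1/(2*pi*f*C): drive a capacitor with a sine
# voltage and measure the peak current via i = C * dv/dt.
f = 1000.0        # 1 kHz test signal (arbitrary choice)
C = 100e-9        # 100 nF capacitor (arbitrary choice)
V_peak = 1.0      # 1 V amplitude
dt = 1e-8         # time step for the finite-difference derivative

t = [k * dt for k in range(int(1 / f / dt))]   # one full period
v = [V_peak * math.sin(2 * math.pi * f * x) for x in t]
i_cap = [C * (v[k + 1] - v[k]) / dt for k in range(len(v) - 1)]

X_C_measured = V_peak / max(abs(x) for x in i_cap)
X_C_formula = 1 / (2 * math.pi * f * C)
print(X_C_measured, X_C_formula)   # both come out around 1591.5 ohms
```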
Radio receivers: I offer a simple trigonometric proof for why you can tune a radio by mixing RF signal with a locally-generated sine wave. You’d be hard-pressed to find this covered in any accessible intro to radio theory.
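The identity at the heart of that proof is that multiplying two sines yields the sum and difference frequencies: sin(a)·sin(b) = ½[cos(a−b) − cos(a+b)]. A minimal numeric check (the frequencies are hypothetical, picked to give a 455 kHz difference):

```python
import math

# Multiplying two sines produces the sum and difference frequencies:
# sin(a)*sin(b) = 0.5*(cos(a-b) - cos(a+b)). Frequencies are examples only.
f_rf = 10.0e6      # hypothetical incoming RF signal
f_lo = 9.545e6     # hypothetical local oscillator (455 kHz below)

errors = []
for t in [0.0, 1.3e-7, 7.7e-7, 2.9e-6]:   # a few spot-check instants
    product = math.sin(2 * math.pi * f_rf * t) * math.sin(2 * math.pi * f_lo * t)
    identity = 0.5 * (math.cos(2 * math.pi * (f_rf - f_lo) * t)
                      - math.cos(2 * math.pi * (f_rf + f_lo) * t))
    errors.append(abs(product - identity))
print(max(errors))   # ~0: the two expressions are the same signal
```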
Combinations of passive components: why do we subtract the contributions of capacitors and inductors, but then mess around with square roots to account for resistors? Wikipedia answers this with complex exponentiation — but to show why the math makes sense, you just need to tilt a triangle. This also offers a gentle explanation of why complex numbers are a valid representation for impedance in the first place.
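The tilted-triangle result can be spot-checked numerically: for a series RLC circuit, the square-root formula |Z| = √(R² + (X_L − X_C)²) and the complex-number route give the same magnitude. Component values below are arbitrary examples:

```python
import math

# Series RLC impedance magnitude, two ways: the "triangle" formula
# sqrt(R^2 + (X_L - X_C)^2), and the magnitude of a complex number.
R, L, C, f = 100.0, 10e-3, 100e-9, 5000.0   # arbitrary example values

X_L = 2 * math.pi * f * L
X_C = 1 / (2 * math.pi * f * C)

Z_triangle = math.sqrt(R**2 + (X_L - X_C)**2)
Z_complex = abs(complex(R, X_L - X_C))   # same thing via complex math
print(Z_triangle, Z_complex)
```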
The math of op-amps: popular accounts of op-amp theory are often incoherent, with claims that the open-loop gain (AOL) can be assumed to be infinite, or that external resistors set the device’s gain. I derive op-amp equations in a series of articles: part 1 (general principles of signal amplification), part 2 (op-amp stability), part 3 (transimpedance circuits), and part 4 (noise, with a cool tie-in to statistics).
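One way to see why "infinite AOL" is a shortcut rather than a fact: with finite open-loop gain A, a non-inverting amplifier's closed-loop gain is G = A / (1 + A·β), where β = R2/(R1+R2) is the feedback fraction; only as A grows does G approach the textbook 1 + R1/R2. A sketch with arbitrary resistor values:

```python
# Closed-loop gain of a non-inverting op-amp with *finite* open-loop gain:
# G = A / (1 + A * beta), where beta = R2 / (R1 + R2).
# As A grows, G converges on the textbook 1 + R1/R2.
R1, R2 = 9000.0, 1000.0                 # arbitrary example values
beta = R2 / (R1 + R2)
ideal = 1 + R1 / R2                     # 10.0

for A in [1e2, 1e4, 1e6]:               # increasingly "ideal" op-amps
    G = A / (1 + A * beta)
    print(A, G)                          # climbs toward 10 as A grows
```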
Sine wave RMS: root-mean-square voltages are occasionally useful, but almost never explained in an accessible way. I show that for sine waves, the formula can be derived by simply rearranging an arithmetic mean.
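The definition itself is easy to check by brute force: square a sampled sine, average, take the square root, and compare against V_peak/√2. A minimal sketch (sample count and amplitude are arbitrary):

```python
import math

# RMS of a sine: average v(t)^2 over one full period, then take the
# square root. The result should equal V_peak / sqrt(2).
V_peak = 10.0
N = 100000                               # samples across one period
samples = [V_peak * math.sin(2 * math.pi * k / N) for k in range(N)]
rms = math.sqrt(sum(v * v for v in samples) / N)
print(rms, V_peak / math.sqrt(2))        # both ~7.071
```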
RC voltage curve: when charging a capacitor through a resistor, the voltage follows a pattern given by an oddball formula: 1 - e^(-t/(RC)). Why? The answer is tricky, but you can get surprisingly far without serious calculus.
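One calculus-free way to see where the curve comes from is to simulate the charging in tiny time steps: at each instant, the current through the resistor depends on the voltage still left across it, and that current nudges the capacitor voltage up. A sketch with arbitrary component values:

```python
import math

# Step-by-step charging of a capacitor through a resistor: each step,
# the current is (V_supply - v_cap) / R, and the charge it delivers
# raises v_cap by i * dt / C. Compare against V * (1 - e^(-t/(RC))).
V_supply, R, C = 5.0, 10e3, 1e-6        # arbitrary values; tau = RC = 10 ms
dt = 1e-6                                # simulation time step
v_cap = 0.0
t_end = 0.01                             # run for one time constant
for step in range(int(t_end / dt)):
    i = (V_supply - v_cap) / R           # current shrinks as v_cap rises
    v_cap += i * dt / C

predicted = V_supply * (1 - math.exp(-t_end / (R * C)))
print(v_cap, predicted)                  # both ~3.16 V (63% of 5 V)
```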
Higher-order filters: this is where novices are bombarded with exotic lingo of poles, zeroes, Q factors, and s-domains. I offer a gentle intro that’s rooted in real numbers and understandable circuit mechanics.
What’s up with the characteristic impedance of long wires (Z0 = √(L/C))? Wikipedia explains it with complex-number differential equations; for comparison, here’s my take.
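Plugging in ballpark per-meter figures for common 50 Ω coax (roughly 250 nH/m and 100 pF/m; these are approximate datasheet-style numbers, not from the article) shows the formula in action, along with a neat property: the wire's length cancels out entirely.

```python
import math

# Characteristic impedance from per-unit-length inductance and capacitance.
# Values are ballpark figures for RG-58-style 50-ohm coax.
L_per_m = 250e-9   # ~250 nH per meter
C_per_m = 100e-12  # ~100 pF per meter

Z0 = math.sqrt(L_per_m / C_per_m)
print(Z0)          # 50.0 ohms

# Doubling the length doubles both L and C, leaving Z0 unchanged --
# characteristic impedance is a property of the wire, not of its length.
```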
Other circuit-adjacent topics: I have articles exploring why sine waves crop up so often, why the Fourier transform actually works, whether square waves exist at all, where magnetic fields come from, what’s the deal with transistors, and so on. I write whenever I see a gap between pop explanations and reality. It’s not that I can always bridge the divide — but making the gap a bit smaller is still a worthy pursuit.
The most interesting takeaway from this exercise is that the articles seem to be well-received by hobbyists, but are met with a mix of apprehension and indifference from many EE folks. From their perspective, there is a smooth path from elementary school explanations to PhD-level knowledge. In my experience, it never felt that way.
Your articles are always something of a stretch for me (a beginner), but they are a great gateway to the more advanced explanations. The textbook stuff is, as you say, probably mostly unnecessary for the hobbyist. But I find the intuition I gain from a careful reading of your articles is a big help when I decide, out of curiosity, to dig deeper.
From the perspective of a non-EE dabbler, I think your articles fill an important void and I look forward to reading them. Thank you for your hard work on them!
I’m one of those “EE folks” with a matching degree (professionally a software engineer, though); never had problems with the math, mostly because I enjoy that sort of thing. I’m comfortable working in the abstract realm of formulas, but I appreciate your explanations for giving a more relatable concrete interpretation. For example, I really liked your article on signal reflections.
Personally I feel that these two perspectives are complementary and should in theory strengthen understanding. But in reality there are two problems:
* it’s a mistake to start the journey of learning electronics with the abstract perspective: it skips over something essential
* unfortunately, I agree that the concrete interpretations given are usually so poor and confusing as to be even worse than useless (e.g., the hydraulic analogy).
I think this is basically what you’re saying, and what you’re doing here fills a gap in electronics education.