A reactionary take on memory safety
How much money should be spent on a problem that's largely solved?
I was there when the universe formed: back in the 1990s, when information security wasn’t a reputable occupation, when case law was scant, and when every system could be breached at will. In the chaos, I saw an opportunity — and I dedicated a good portion of my adult life to helping others stay clear of online risks.
Throughout the years, one of the enduring problems I wrestled with was memory safety. For folks unfamiliar with the term, it boils down to programming languages such as C and C++ not keeping track of the memory allocated to the program and not preventing out-of-bounds access. This gives rise to endemic coding errors such as buffer overflows or use-after-free. With a bit of luck and skill, attackers can leverage a subset of these bugs to corrupt program control structures and perform actions they're not otherwise authorized to take.
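To make this concrete, here's a minimal, contrived C sketch exhibiting both bug classes; a typical compiler accepts it without complaint, and what happens at runtime is anyone's guess:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* Buffer overflow: nothing stops the write from running past the
           end of the 16-byte buffer, clobbering adjacent stack memory. */
        char buf[16];
        strcpy(buf, "this string is far longer than sixteen bytes");

        /* Use-after-free: the pointer still holds the stale address, and
           dereferencing it reads whatever the allocator reused it for. */
        char *msg = malloc(32);
        if (msg == NULL) return 1;
        strcpy(msg, "hello");
        free(msg);
        printf("%s\n", msg);  /* undefined behavior: may "work", crash, or worse */
        return 0;
    }

An attacker who controls the overflowing data, rather than a fixed string, gets to decide what the clobbered memory ends up holding.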
Today, there is a growing chorus of voices in the information security community positing that memory safety is the most consequential battle of our times; the latest salvo in this debate is the guidance published by The White House, urging the adoption of memory-safe languages across the industry. For many, a non-binding document does not go far enough: perhaps the government ought to outlaw C and C++ altogether, be it for federal contracts or for any infrastructure deemed critical.
This puts me in an awkward position: after all these years fighting for memory safety, I'm disagreeing with the proposal to solve the problem once and for all. My gut reaction is that as security professionals, we've gotten too accustomed to pinning the blame on others and having them do all the hard work. This proposal, in demanding that developers re-learn their craft, certainly fits that mold. In itself, this is not a real objection, but it does make the proposal suspect.
My more pragmatic critique is that I doubt it's worth the cost. First, only a small fraction of memory-unsafe code is realistically exposed to attacks. In modern computing paradigms, the primary attack surface consists chiefly of network stacks, browser APIs, and a handful of multimedia transcoding libraries. For a few VM providers, the hypervisor code is of interest too. But together, these components probably represent less than 10% of the codebase targeted by the proposed mandates; and perhaps 2% is of major economic interest.
Further, thanks to decades of work on low-overhead exploit mitigations, notably including address space layout randomization (ASLR) and branch tracking, successful exploitation of memory safety issues has gotten quite challenging and accounts for only a tiny sliver of the overall volume of security incidents. Although memory corruption bugs are favored by some government actors, virtually all large-scale breaches trace back to other causes: outdated or misconfigured software, phishing, and so on. This wouldn't matter if we could fix everything at once; but in practice, budgets and staffing are finite. You gotta pick your battles, and total memory safety might not be it.
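As an aside, ASLR is easy to see in action. The following C sketch, compiled as a position-independent executable (the default on most modern Linux distributions), prints different addresses on every run; it's precisely this unpredictability that deprives a classic exploit of the fixed targets it needs:

    #include <stdio.h>

    int main(void) {
        int local = 0;
        /* Under ASLR, stack, heap, and (for PIE binaries) code addresses
           are randomized at every execution, so these values vary per run. */
        printf("stack: %p  code: %p\n", (void *)&local, (void *)main);
        return 0;
    }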
Finally, I’m skeptical that a nonspecific mandate to adopt memory-safe languages will even nudge developers in the right direction; for every restrictive language designed with security in mind, there are dozens of memory-safe choices that, on balance, introduce more bugs. In this context, the woes associated with PHP, SQL, and JavaScript are of special note; the trio is popular for good reasons, but is probably responsible for more harm than C and C++. Migrations to languages such as Python and Java might be marginally beneficial, but come with a lot of baggage of their own.
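The canonical example is SQL injection, a bug class that has nothing to do with memory safety and bites in any language. The hypothetical sketch below uses C with SQLite, for consistency with the rest of this post, but the flaw looks nearly identical in PHP or JavaScript:

    #include <sqlite3.h>
    #include <stdio.h>

    /* Hypothetical lookup; 'name' arrives from an untrusted source. */
    void find_user(sqlite3 *db, const char *name) {
        char query[256];
        sqlite3_stmt *stmt;

        /* Injectable: a name like "x' OR '1'='1" rewrites the query.
           No memory is corrupted; the flaw is purely logical. */
        snprintf(query, sizeof(query),
                 "SELECT id FROM users WHERE name = '%s'", name);
        sqlite3_exec(db, query, NULL, NULL, NULL);

        /* Safer: the value is bound as data and never parsed as SQL. */
        if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?",
                               -1, &stmt, NULL) == SQLITE_OK) {
            sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
            while (sqlite3_step(stmt) == SQLITE_ROW)
                printf("id: %d\n", sqlite3_column_int(stmt, 0));
            sqlite3_finalize(stmt);
        }
    }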
Together, these observations make me doubt that a wholesale migration to memory-safe languages is a worthwhile intervention today. Again, if we could wave a magic wand to transmogrify all C and C++ into Rust, I wouldn’t bat an eye — although I doubt it’d alter the landscape of information security in a profound way. But in a reality where even simply keeping track of your hardware and software assets is a formidable challenge for most enterprises, I think that the push for 100% memory safety is a grievous misallocation of funds.
It's probably worth adding that the White House argument rests to a large extent on counting CVEs. That practice is deeply problematic for a number of reasons. For one, it reflects organizational practices more than anything else, with the totals skewed toward a handful of projects, such as Chrome or the Linux kernel, that are more forthcoming than others about internal fuzzing and hardening work.
But more importantly, CVE counts have almost no bearing on the real world: they don't correspond to the ease of attacking any specific software, the likelihood that such attacks will occur, or their relative importance. Attempts to count CVEs for other purposes (e.g., to "prove" that a particular browser or OS is more secure than another) have been derided many times before.
The one thing I probably should have expounded on is the secondary attack surface: for example, the ability to escalate to a higher privilege level after an initial compromise of a browser renderer. This is a very specialized concern, but it's true that the C/C++ attack surface there is a bit larger than implied in my post. That said, the most robust defense is always to take choices away from the attacker; for example, seccomp-bpf is almost certainly a far more robust defense than a Linux kernel written in Rust could ever be.
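For readers who haven't seen it, here's a minimal sketch of what such a policy can look like; this is an illustrative fragment rather than a production filter (which would, among other things, also verify the architecture field of seccomp_data):

    #include <linux/audit.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>
    #include <stddef.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* After this filter is installed, the process may only read, write,
       and exit; any other syscall kills it. The kernel enforces the
       reduced syscall surface no matter what code runs afterwards. */
    int install_filter(void) {
        struct sock_filter filter[] = {
            /* Load the syscall number. */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                     offsetof(struct seccomp_data, nr)),
            /* Allow the short list below; kill on anything else. */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_read,       3, 0),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write,      2, 0),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit_group, 1, 0),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
            .len = sizeof(filter) / sizeof(filter[0]),
            .filter = filter,
        };

        /* Required so an unprivileged process can't regain privileges. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
            return -1;
        return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
    }

No amount of memory corruption inside the sandboxed process can undo the filter; that is the sense in which taking choices away beats merely making bugs harder to exploit.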
Strong agree. When Cyclone was blazing a trail in memory-safe systems language design 20+ years ago, it seemed like an exciting step forward. But in addition to the mitigations you mentioned, sanitizers, fuzz testing, and better idioms in C++ have all made it far more practical to develop pretty darn safe code in C/C++ than it was a generation ago.