9 Comments

It's probably worth adding that the WH (White House) argument rests to a large extent on counting CVEs. Counting CVEs is deeply problematic for a number of reasons. For one, the counts reflect organizational practices more than anything else, with the totals skewed toward a handful of projects - such as Chrome or the Linux kernel - that are more forthcoming than others about internal fuzzing and hardening work.

But more importantly, CVE counts have almost no bearing on the real world, in the sense that they don't correspond to the ease of attacking any specific software, the likelihood that such attacks will occur, or their relative importance. Attempts to count CVEs for other reasons - e.g., to "prove" that a particular browser or OS is more secure than another - have been derided many times before.

The one thing I probably should have expounded on is the secondary attack surface: for example, the ability to escalate to a higher privilege level after an initial compromise of a browser renderer. This is a very specialized concern, but it's true that the C/C++ attack surface there is a bit larger than implied in my post. That said, the most robust defense is always to take away choices: seccomp-bpf, for example, is almost certainly far more robust than a Linux kernel written in Rust could ever be.
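
For readers who haven't seen it, here's roughly what that looks like - a minimal seccomp-bpf sketch, not production code: the syscall whitelist is purely illustrative, error handling is elided, and a real filter would also check seccomp_data.arch. Once this is installed, even a fully compromised process has almost no choices left:

```c
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <stddef.h>
#include <sys/prctl.h>
#include <sys/syscall.h>

static int install_filter(void) {
    struct sock_filter filter[] = {
        /* Load the syscall number into the accumulator. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, nr)),
        /* Allow a tiny whitelist; kill the process on anything else. */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_read,       3, 0),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write,      2, 0),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit_group, 1, 0),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len    = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    /* Required so an unprivileged process may install the filter. */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
        return -1;
    return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
}
```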

Finally, someone has voiced what I've been thinking throughout this “rewrite everything in Rust” push: it's nonsense. Memory-safety bugs are exploitable by only a small number of people, but anyone can phish or credential-stuff!

Most memory-safety exploits will break with even a tiny modification to the program or system. Most logic and API/serialization bugs are much easier to exploit and far more portable: the same payload that works against Apache servers will also work when sent in *Minecraft* chat.

Strong agree. When Cyclone was blazing a trail in memory-safe systems language design 20+ years ago, it seemed like an exciting step forward. But in addition to the mitigations you mentioned, sanitizers, fuzz testing, and better idioms in C++ have all made it far more practical to develop pretty darn safe code in C/C++ than it was a generation ago.
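
To illustrate how low the barrier has gotten: a minimal libFuzzer-style harness is all it takes to get started. parse_record() here is just a hypothetical stand-in for whatever code is under test:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical function under test. */
void parse_record(const uint8_t *buf, size_t len);

/* libFuzzer entry point; build with:
 *   clang -g -O1 -fsanitize=fuzzer,address harness.c parser.c
 * ASan then reports out-of-bounds accesses and use-after-free the
 * moment the fuzzer stumbles into them. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);
    return 0;
}
```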

Thanks for your perspective, but you seem to overlook the fact that a ton of crucial components run firmware with no notion of security mitigations at all. That software is critically exposed and ripe for disruption.

To make my argument clearer: I'm not opposed to memory-safe languages; I'm just very skeptical of blanket mandates to prioritize their deployment industry-wide.

To your point, there's definitely some of that - mostly network stacks, which I allude to - but there's also the question of whether we'd get more bang for our buck by bringing mitigations and other modern security practices to those platforms, or by providing a Rust toolchain and libraries for every piece of proprietary hardware out there and rewriting everything. If the calculus works out in favor of Rust, we should go for it.

On the other hand, if you hire me as the CISO of Nvidia and my first decree is "it's only Rust from now on", slap me with a trout...

I don't think mitigations like ASLR, which is arguably the most disruptive one, are currently possible in those environments. On the flip side, Rust toolchains and crates are already widely available in these areas. I certainly get, and agree with, the argument that rewriting everything is not the correct approach, and I think we are aligned on where we should invest first. But with 193 mentions of undefined behavior in the C99 spec, it's simply unwise to try to navigate all of that and still expect the resulting software to be robust -- especially in the face of malicious intent.

A bit of a tangent: while I really dislike undefined behavior, I also feel it's a bit of a red herring. Not a whole lot of real security bugs trace back to it, and nothing is stopping anyone from shipping a fork of clang or gcc with UB taken out and replaced with defined or unspecified-but-sane behavior. All existing code should compile cleanly on that - seems like a no-brainer, right?
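
To pick a concrete (if contrived) example of the kind of UB people usually cite - and note that gcc and clang already offer flags like -fwrapv and -fno-strict-aliasing that define some of it away, piecemeal:

```c
#include <limits.h>
#include <stdio.h>

static int increment_overflows(int x) {
    /* UB when x == INT_MAX: signed overflow is undefined, so the
     * compiler may assume x + 1 > x always holds and fold this
     * function to "return 0". */
    return !(x + 1 > x);
}

int main(void) {
    /* Typically prints 0 under -O2; with -fwrapv (wrap-around made
     * well-defined), it prints 1 instead. */
    printf("%d\n", increment_overflows(INT_MAX));
    return 0;
}
```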

It's clear that we either don't really care all that much, or we don't want to do the work ourselves - we just want to gripe about others doing a bad job. And again, I think the C standard erred here and I wish they'd stop digging, but do security teams really lose sleep over UB?

People calling for the complete elimination of memory unsafety seem to be unaware that systems programming is a thing. You can't write a bootloader, a kernel, a loader, a language runtime, or even certain tightly-optimized data structures without unsafe code. Sure, you can (and should) use a language like Rust that keeps the parts which need to be unsafe cleanly delineated from the rest, but you're still going to have a whole lot of them.
