OSS backdoors: the folly of the easy fix
Intelligence agencies and Big Tech, not hobbyists, should shoulder the responsibility for preventing the next xz-style hack.
In an article posted here yesterday, I dove into the fascinating details of a remote code execution backdoor planted in a compression library known as liblzma. The backdoor was probably the most sophisticated (and brazen!) attack on the open-source ecosystem in the annals of computer security.
As the story broke, some commentators rushed to blame the outcome on the unhealthy relationship between unpaid maintainers and the companies that benefit from their work. I found the argument unpersuasive:
“The real issue with a lot of small, foundational OSS libraries is just that there isn’t enough to do. They were written decades ago by a single person — and beyond bugfixes, they are not really supposed to change much. You don’t do major facelifts of zlib or giflib every year; even if you wave some cash around, it’s hard to build a sustainable community around watching paint dry.”
Since then, a new set of voices — this time originating from tech companies — has started calling on open-source maintainers to beef up project governance. The proposals include mandatory two-person code reviews, self-assessments, SLAs, and written succession plans.
I’m not sure I agree. It’s OK to gently nudge the ecosystem in the right direction — GitHub can! — but as the saying goes, you can’t out-Mossad the Mossad. This is not a defeatist stance: it’s just that we specialize for a reason. The maintainers of libcolorpicker.so can’t be the only thing standing between your critical infrastructure and Russian or Chinese intelligence services. Spies are stopped by spies.
Even when it comes to lesser threats, the bottom line is that we have untold trillions of dollars riding on top of code developed by hobbyists. The companies profiting from this infrastructure can afford to thoroughly vet and monitor key dependencies on behalf of the community. To be clear, a comprehensive solution would be a difficult and costly undertaking — but it’s not any harder or costlier than large language models or self-driving cars.
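To make the vetting idea concrete, here is a minimal sketch of one cheap, automatable check. The xz payload rode in on a build-to-host.m4 that existed only in the release tarballs, not in the git tree, so simply diffing a tarball against the corresponding tag would have surfaced an anomaly. The script name and command-line arguments are hypothetical, and the repo paths are placeholders:

```python
#!/usr/bin/env python3
"""Sketch: list files that appear in a release tarball but not in the
corresponding git tag. Usage (hypothetical):
    tarball_audit.py <repo_dir> <tag> <tarball>"""

import subprocess
import sys
import tarfile


def git_tree_files(repo_dir: str, tag: str) -> set[str]:
    # `git ls-tree -r --name-only <tag>` lists every file recorded in the tag.
    out = subprocess.run(
        ["git", "-C", repo_dir, "ls-tree", "-r", "--name-only", tag],
        check=True, capture_output=True, text=True,
    ).stdout
    return {line for line in out.splitlines() if line}


def tarball_files(path: str) -> set[str]:
    # Release tarballs usually nest everything under a "pkg-1.2.3/" prefix;
    # strip the first path component so names line up with the git tree.
    files = set()
    with tarfile.open(path) as tar:
        for member in tar.getmembers():
            if member.isfile():
                _, _, rest = member.name.partition("/")
                if rest:
                    files.add(rest)
    return files


def main() -> None:
    if len(sys.argv) != 4:
        sys.exit("usage: tarball_audit.py <repo_dir> <tag> <tarball>")
    repo_dir, tag, tarball = sys.argv[1:]
    extras = tarball_files(tarball) - git_tree_files(repo_dir, tag)
    for name in sorted(extras):
        print(f"only in tarball: {name}")


if __name__ == "__main__":
    main()
```

Autotools projects legitimately add generated files (configure, Makefile.in, various *.m4) at release time, so the output needs human triage — the point is to make “tarball != tag” visible at scale, which is exactly the kind of unglamorous monitoring a well-funded downstream consumer could run on every dependency.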
Now that we’ve had our wake-up call, the answer isn’t to lean on John Q. Maintainer and ask them to do better next time. The answer is to address any counterintelligence lapses and to greatly improve behind-the-scenes detection capabilities.