How security teams fail
When it comes to infosec, there are certain mistakes that most companies are more or less bound to make.
I spent 25 years working in information security. I published research, authored books, and led large security teams for publicly traded companies. That said, when I kicked off this Substack in 2022, I wanted to try something different — so with few exceptions, I shied away from infosec punditry.
Still, there’s one infosec article I always wanted to write: an examination of why corporate security teams fail. This is not meant as a condemnation of the industry; think of it as a matter of ethnography.
Part I: the origins
Very few founders make a plan to hire information security staff on day one. This makes sense: their initial focus is proving a business idea, not building a bureaucracy. Even past the proof-of-concept stage, many other administrative roles will likely be staffed long before infosec.
The first security hire is typically a consequence of a near-miss of some sort: a major malware infection, a contentious departure of a prominent developer, a PR lapse. Sometimes, the misadventures of others are motivation enough: the steady trickle of breach-related headlines might prompt the founder to say “you know what, I don’t want to be next”. In some industries, regulatory pressures also play a role — but this is not a major consideration for most of early-stage tech.
The first full-time infosec headcount usually materializes by the time the company reaches about a hundred people; in the run-up to that moment, a single employee with some knowledge of the domain might emerge as the go-to contact for all things security. The person hired or promoted into the full-time role is almost never a seasoned pro with experience building security programs. The founders themselves might be industry veterans with a clear vision for the product they want to build or the culture they want to foster; for security, the same kind of clarity simply won’t exist.
It follows that in the first months and years, the priorities of the fledgling security organization will revolve around putting out existing fires: there will be identities to provision and deprovision, computers to patch, and ports to scan. Early hiring efforts will focus on finding relatively junior infosec staff to build sustainable teams focused on these basic, tactical tasks.
Part II: the entrenchment
After a couple of years, this organizational structure becomes more or less ossified: there is a penetration testing team, a vulnerability management team, a group approving product launches, and so on. A couple of passion projects championed by individual employees may be flourishing on the side, but that work usually has little to do with business priorities.
This creates an interesting problem: none of these groups owns any clear business outcome per se. They run projects; projects come with an endless supply of features to implement, bugs to fix, and performance improvements to make. If you ask the teams to plan ahead, they will catalog these project-specific objectives and ask for more headcount for their thing. By and large, they're not going to question whether their own roles need to exist or keep growing.
At this point, senior employees are typically aware of more urgent unaddressed risks and critical gaps in visibility, but few would want to give up headcount or switch to less glamorous work. A more common approach is to flag the concerns to upper management, hoping for some extra headcount to materialize. When that doesn't happen, you just go about your day.
Part III: cultural rifts
The bulk of the early-years infosec work is reactive: it revolves around squashing bugs, not building platforms at scale. Over time, this lends itself to a widespread conviction that the team's instincts are keener, and their intellect sharper, than those of the average software engineer or program manager. After all, we're paid to point out all the subtle mistakes that other employees miss.
Of course, this is illusory: if the team’s career prospects depended on shipping products or improving user conversion rates, they’d be expending their cognitive budget on these tasks, not on hunting for privacy side channels or cross-site scripting bugs. Still, because the perception of superiority goes unchallenged, it eventually morphs into the belief that the security team is a shining beacon of virtue in a sea of wickedness. At that point, instead of trying to understand what the company is trying to do or helping engineers write safer code, we end up penning apocalyptic missives, throwing tantrums, and asking the executives to ceremonially “accept risk”.
This is a vicious cycle: the more day-to-day drama we cause, the more likely the execs are to assume that security is just habitually crying wolf. It also means that some other teams will go out of their way to avoid interacting with us, to the detriment of the entire company.
Part IV: the apocalypse
The process almost inevitably culminates in a major breach. The breach is not always preventable: for example, there’s not much that a mid-tier tech company can do to repel a top-tier state actor. That said, most of the time, the compromise involves an attack surface that the team has long recognized as a major concern — yet never adequately addressed.
The breach validates the team’s instincts but condemns its strategy. In fact, in multiple cases, the most significant consequence of the breach wasn’t the damage caused by the hackers themselves; it was the regulatory fallout that followed the discovery of damning messages about corporate risk exchanged between the members of the infosec team.
The extraordinary circumstances surrounding the breach provide a rare opportunity to rethink the team’s org structure without the risk of losing talent; the deal is also sweetened by extra headcount earmarked by the execs for incident-related work. The post-compromise team is more principled, more strategic, and better at communicating with the rest of the company — at least for as long as the memory of the incident lives on.