FWIW, I've seen this compared to the Upton Sinclair quote: “It is difficult to get a man to understand something when his salary depends on his not understanding it” - but I think what I'm proposing here is a softer flavor of this. It's not about not understanding; it's about making a rational choice between three options:
1) Try to ship something that's great for the user, but reckless for the company or upsetting to many of your coworkers,
2) Ship something that's less good for some users, but doesn't harm revenue and doesn't start any turf wars,
3) [An implicit universe of terrible choices that we're not going to make because we're good people.]
In that world, you pretty reliably pick #2. In isolation, these decisions make perfect sense. But in aggregate, they tend to chip away at user choice.
I like the way you have identified and described the problem, but what I think is missing here is any sort of solution. How *would* someone with sufficient power (e.g., a policy-maker for a professional organization or the CEO of a mid-sized company) go about changing those incentives?
This was a bit tough to read, and I know this is not a point you were trying to make or focus on with this article, but I have a hard time taking seriously a "well-meaning engineer" who claims to try to protect people's privacy while working for an ad agency whose whole business is renting out their users' digital identities/data/personal information/whathaveyou to whoever is willing to pay. I can only assume you felt this dichotomy while writing this, especially as you mention the absolution of personal guilt so early on in the article.
A related version of this phenomenon applies to privacy: if you're working on a corporate privacy team, it's pretty unlikely that anyone would ever approach you asking if we can collect less data or store it for a shorter period.
The requests nudge the organization in one direction, and their incremental nature often makes it hard to draw a line: after all, a retention period of 10 days is not hugely different from 5, and 15 days is not that different from 10.
Yes, I agree; however, things didn't used to be this way. This was slowly socially normalized by people trying to go along to get along. Not that long ago (10-15 years now), I was entrusted with privacy and corporate security and completely and absolutely empowered to simply say "no" to anyone and everyone who requested changes that might present risk. Then slowly, the social pressure to "not be that guy", to "not be that org" started... and here we are.
It's not even about saying "no", it's about incrementalism: if you previously said "yes" to retaining user data for 10 days, do you have a compelling argument from principles to say "no" to extending it to 15 days? There's almost certainly some sensible business reason attached to the request.
It's easy to say "no" when you're crossing some well-defined line. But a lot of policy decisions are small and incremental. Death of a thousand cuts, if you will.
Not right now. I've been writing on the internet for a long time and I've accepted there's no platform that works for every reader. In the late 1990s, I had people laughing at me for picking a free publishing service akin to Geocities. Later, I received complaints when self-hosting (wrong feature set, wrong domain, etc). A move to Blogger proved controversial too. And now there are folks who have a beef with Substack.
The thing about Substack is that they make money by selling subscriptions, not by showing ads. If you're a free subscriber to a free blog, you're costing them money, not making them any. I suspect that one day they will try to monetize that, and it will be a cue to move...
If you want notifications without registering with Substack, you can get the content via the RSS feed, though: https://lcamtuf.substack.com/feed
Why? What's wrong with Substack? I love it! It's like Medium, except it doesn't suck.