Around 2010, a prominent author by the name of Eli Pariser popularized the concept of filter bubbles: the purported fragmentation of the intellectual fabric of our lives, brought about by the internet. Mr. Pariser explained that, owing to personalization algorithms on social media, the average person was increasingly sheltered from divergent viewpoints or serious political debate. Instead, they were only being served the headlines that aligned with their views.
The idea took the world by storm. For one, it neatly pinned the blame for the growing polarization in Western politics on the algorithms; the intellectuals were of course immune, but it helped explain the failings of the common man. We could fix it easily, too. We didn't even need censorship: tech companies just had to forcibly expose people to real facts.
It sounded great — but to me, the filter bubble theory never stood up to scrutiny. There’s no doubt that compared to 20-30 years ago, we’re far more frequently exposed to unsettling stories, behaviors, and viewpoints. The defining habit of the early 21st century is getting angry at people who are wrong on the internet. Heck, the old media has embraced it too: if you tune in to MSNBC or Fox News, most of the political punditry over there is focused not on calls to action, but on the antics of the other side. The apocryphal Fox News-watching grandpa can quote Alexandria Ocasio-Cortez and discuss the transgender policies of a school in Florida. Your average progressive knows how to rebuke every conceivable angle of COVID skepticism.
Filter bubbles don’t explain what’s going on. If anything, their absence might: we’re constantly bombarded by contemptible, fringe viewpoints that traditionally wouldn’t be fit to print. It feels as if the world has gone mad, and in this reality, it’s much easier to summarily dismiss the outgroup as evil or dumb. Just look at all the loud, malicious idiots they have over there!
In parallel with this, the rage-centric culture pulls us toward the fringes of our ingroup. We’re served a menu of seductive theories that explain what’s wrong with the world — and how certain others are to blame.
Most of the online radicalization we see today probably has to do with being exposed to unpleasant, frustrating content far too often — and then coming across a community that channels your anger and pain.
Bursting filter bubbles doesn’t fix the world. It probably makes things worse.
I agree with the premise for sure. Like Mr. Parser ("parser", heh), I tended to agree about what he calls "filter bubbles", but now I think it's more subtle than that and I think your explanation is much closer to how things actually work.

And one definite disagreement with Mr. Parser - the elites are most definitely NOT immune, and in fact, I would argue, are more susceptible to filter bubbles. IRL, when I've hung out with what I call the "intellectual class" - academics, professors, scientists - and/or moneyed folk, their sphere of experience tends to be vanishingly small. In many cases, college has become the ultimate filter bubble, with even mildly divergent views being shouted down. And the wealthy, while they DO throw great parties, rarely ever have to deal with - much less interact with enough to understand - viewpoints different from their own.

Not saying that blue collar folks live in a wonderland of diversity, but ask a wealthy person if they've ever (outside of the "some of my best friends are <insert minority class>" trope) spent enough time with a person with wildly different life experiences than their own to really get to know and understand them. No, I still believe there's a genuine fear among the elite that if we start sitting down together, we'll realize the only TRUE cultural difference is "the haves" vs "the have-nots" and that might spell the end for the carefully-crafted divisions they've spent so much effort constructing.
That's apparently Eli Pariser, not Eli Parser.
It sounds quite reasonable in principle and I'm nodding along, but it's also pretty low on evidence. What would count as evidence, though? Looks like there are some studies described on Wikipedia. [1]
My take is pretty evidence-free too, but I'll explain it anyway. I like to take an epidemiologist's view of social media. We are exposed to lots of different memes and links, some of which we share. Liking and especially sharing a meme means that you're susceptible to it. The ones you ignore or reject, you're immune to. The things people share show what they're vulnerable to. (Personally, I like a good contrarian argument.)
Many people are more likely to share *bad* and especially outrageous arguments of the other side (in order to argue with them) than to share reasonable ones. That's a filter, though perhaps not the one envisioned by the "filter bubble" theory. Call it a polarization filter.
"Filter bubble" seems like a vague enough concept that we could get mired in definitions. Being exposed to bad ideas of the other side is definitely not *sheltering,* but I'm not sure that anyone thought it was? The Wikipedia article talks about a "splintering" effect, but I'm not sure that's true either.
[1] https://en.wikipedia.org/wiki/Filter_bubble