It sounds quite reasonable in principle and I'm nodding along, but it's also pretty low on evidence. What would count as evidence, though? Looks like there are some studies described on Wikipedia. [1]
My take is pretty evidence-free too, but I'll explain it anyway. I like to take an epidemiologist's view of social media. We are exposed to lots of different memes and links, some of which we share. Liking and especially sharing a meme means that you're susceptible to it. The ones you ignore or reject, you're immune to. The things people share show what they're vulnerable to. (Personally, I like a good contrarian argument.)
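The epidemiologist's view above can be made concrete with a toy simulation. This is entirely illustrative: the model, the `meme_appeal` parameter, and all numbers are my own assumptions, not anything from the epidemiology or social-media literature.

```python
import random

def simulate_spread(population, meme_appeal, rounds=10, fanout=3, seed=7):
    """Toy epidemic model of a meme. Each exposed person decides exactly
    once: share it (susceptible) or ignore it (immune). Sharing exposes
    `fanout` random others in the next round. Returns the fraction of
    the population that ended up sharing."""
    rng = random.Random(seed)
    shared, immune = set(), set()
    frontier = {0}  # patient zero sees the meme first
    for _ in range(rounds):
        next_frontier = set()
        for person in frontier - shared - immune:
            if rng.random() < meme_appeal:
                shared.add(person)  # susceptible: likes it, passes it on
                next_frontier.update(rng.sample(range(population), fanout))
            else:
                immune.add(person)  # rejected it; this chain stops here
        frontier = next_frontier
    return len(shared) / population

# A meme nobody is susceptible to dies with patient zero; an irresistible
# one keeps spreading. Real "appeal" sits in between and varies per person.
print(simulate_spread(1000, meme_appeal=0.0))  # 0.0
print(simulate_spread(1000, meme_appeal=0.9))
```

The point of the sketch is just the framing: sharing reveals susceptibility, ignoring reveals immunity, and what spreads depends on what a population is collectively vulnerable to, not on what is true or reasonable.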
Many people are more likely to share *bad* and especially outrageous arguments from the other side (in order to argue with them) than they are to share that side's reasonable arguments. That's a filter, though perhaps not the one envisioned by the "filter bubble." Call it a polarization filter.
"Filter bubble" seems like a vague enough concept that we could get mired in definitions. Being exposed to bad ideas of the other side is definitely not *sheltering,* but I'm not sure that anyone thought it was? The Wikipedia article talks about a "splintering" effect, but I'm not sure that's true either.
For your opening paragraph - FWIW, I think we use evidence as a crutch in conversations like this. It's really hard to conduct rigorous experiments in sociology; a lot of fashionable social studies fall apart under the slightest amount of scrutiny. I'd wager that bad evidence outnumbers good evidence in social policy debates by a pretty wide margin.
I think it's OK to make observations about life, have opinions, and form arguments without backing everything with a trend line drawn through a scatter plot. Abstract reasoning is subject to being independently reproduced, challenged, falsified, or revised over time, too.
For your main point - the original thesis of "filter bubbles" was very specifically about being shielded from what you don't want to see (by algorithms or otherwise). The Wikipedia intro gives an example:
"According to Pariser, users get less exposure to conflicting viewpoints and are isolated intellectually in their informational bubble. He related an example in which one user searched Google for "BP" and got investment news about British Petroleum, while another searcher got information about the Deepwater Horizon oil spill, noting that the two search results pages were "strikingly different" despite use of the same key words"
One can reframe the concept into something softer. I don't dispute that there are viewpoint biases in the news we get, or that there is a lot of self-selection going on. My main point is that these bubbles don't filter out information about the world: you know more than before, and it's making you angrier than before. This isn't just a debate over semantics; I think prescriptions along the lines of "well, let's show people more news from a variety of sources" are fundamentally misguided.
If you can erect a real filter bubble... being sheltered from 24x7 outrage almost certainly makes you more content and less likely to develop extremist views.
In my opinion, the bubbles are there, but they were not invented by social media. It's typical for our brains to create bubbles and to refuse anything that doesn't fit the picture. What was natural for centuries, and what may even have kept us from going crazy (or from becoming great scientists and inventors), has been horribly, yet wilfully, amplified by the new paradigm of information spread.
There may be no contradiction there. Filter bubbles may be as much about showing us things we already agree with, as about rage-baiting us with exaggerated misrepresentations of what we already disagree with.
Biases of this sort exist, but the point I'm making is that they are being made worse by over-exposure to debate, not under-exposure to it. If we loosely interpret filter bubbles to mean "opinion bubbles", then sure, they're real. But I think they're just an attempt to make sense of the chaos we're exposed to, not the cause of it.
I agree with the premise for sure. Like Mr. Parser ("parser", heh), I tended to agree with what he calls "filter bubbles", but now I think it's more subtle than that, and I think your explanation is much closer to how things actually work.

One definite disagreement with Mr. Parser: the elites are most definitely NOT immune, and in fact, I would argue, are more susceptible to filter bubbles. IRL, when I've hung out with what I call the "intellectual class" - academics, professors, scientists - and/or moneyed folk, their sphere of experience tends to be vanishingly small. In many cases, college has become the ultimate filter bubble, with even mildly divergent views being shouted down. And the wealthy, while they DO throw great parties, rarely ever have to deal with - much less interact with enough to understand - viewpoints different from their own. Not saying that blue-collar folks live in a wonderland of diversity, but ask a wealthy person if they've ever (outside of the "some of my best friends are <insert minority class>" trope) spent enough time with a person with wildly different life experiences than their own to really get to know and understand them.

No, I still believe there's a genuine fear among the elite that if we start sitting down together, we'll realize the only TRUE cultural difference is "the haves" vs. "the have-nots", and that might spell the end for the carefully crafted divisions they've spent so much effort constructing.
A lot of this disturbing content is made more disturbing through deliberate editorializing, clickbait headlines, and outright misrepresentation of facts. The only way to win is not to play.
That's apparently Eli Pariser, not Eli Parser.
[1] https://en.wikipedia.org/wiki/Filter_bubble
Yes, better filtering is the only way. There is too much junk.