Wasn't the shift you're describing also caused by the realization that the West's 'marketplace of ideas' can be weaponized by foreign state actors who do not subscribe to the ideals of freedom of expression?
At the same time, those countries regulate the content available to their own citizens very strictly.
I think that this openness, combined with the very design of social media, made active measures hugely more effective: things like spreading disinformation, sowing discord, influencing elections, shaping public opinion...
For me, that's the biggest problem with the 'marketplace of ideas' approach. I'm not arguing for or against it, but it's important to point this out.
Sure. Again, I'm not trying to litigate the merits of these interventions. I think there were good and bad reasons for it, positive and negative outcomes, and it's not a debate that can be ended with a blog post.
Having said that, we moved from a mechanical approach to moderation (spam, porn, violence, etc.) to policing truth, politics, and morality. And then LLMs took it to its logical conclusion, putting tech companies in a position where they're literally trying to teach general-purpose text generators what it means to be a good human.
The result is not only codifying SF Bay Area biases (which is something that conservative pundits love to complain about), but more broadly, just comically failing at the task.
Twitter has drastically scaled down moderation, and the result is that anti-Semites, eugenics proponents, and Nazi-adjacent "race realists" are now thriving.
It seems that platforms with lax moderation always end up attracting that crowd, so I'm not sure "the marketplace of ideas" ever really existed to begin with.
The idea was that there are beliefs you like and beliefs you don't, but society gains more if people can decide on their own and debate each other than if the government or a private entity sweeps certain views under the rug.
In practice, people advocating for violence would always get the boot, but shades-of-gray takes ("I'm not an anti-Semite, but just look at what the Jews are up to") were a feature of this system. Which is precisely why it ended up being contentious.
Now, what you're describing is a somewhat separate problem: if your platform develops a reputation as a refuge for people with questionable views in a sea of heavily-moderated communities, you end up disproportionately attracting a certain kind of person: the outcasts from all the other places.
Is anti-vax a legitimate view that should be platformed for debate? Is "are black people genetically predisposed to manual labor" one?
It seems that some moderation is very much necessary, and then it becomes a question of how much and how to apply it. In some contexts, I would be fine with the above being discussed, but it very much depends on the platform and its particular type of user.
What I have seen a lot of is social networks, even with the supposedly heavy moderation of today, ending up as cesspools of disinformation. Facebook is famous for this, and I've had close irl friends get radicalized on Facebook into pro-Russia, anti-EU, Jews-want-to-kill-all-Christians-and-make-your-kids-trans muppets.
Social networks are not mainly places to debate ideas. They're mainly places where you find ideas you agree with, and move into more and more extreme versions of them over time, until you reach complete epistemic divorce from reality.
I mean, you're basically explaining why it's a contentious strategy and why it has fallen out of favor in Big Tech. But I don't want the original post to get sucked into a defense of one option or the other.
All I'm saying is that we ended up here because of a very palpable ideological shift, and that wherever you stand on this debate, LLMs mean that Big Tech companies are now literally trying to specify a coherent and comprehensive morality system, with hilarious and mildly concerning results.
Just to clarify: I agree with your take and assessment of LLMs.
The thing I don't know is: was there ever a moment in time when large platforms hosted genuine debate on hard topics? I can't think of any such time. The chans were always the chans. Some parts of Reddit, maybe? But I would chalk that up more to niche communities within Reddit than to Reddit itself. I was never on Slashdot or NeoGAF, so I can't speak to those. What is a good example?
RLHF is a game of whack-a-mole: we keep finding problematic behaviors and updating the training sets in response. Does it ever converge?
Hmm… surely this is a serious issue of freedom and liberty: the most powerful tools are secretly censored, like removing books from a library. Where are the libertarians here?
For completeness, it's probably worth noting that RLHF is not the whole picture for brand safety; for example, crude safety mechanisms are implemented through input filters, output filters, and hidden system prompts. That said, these are even less about any sort of a coherent moral compass; they're just band-aids to expediently fix problems without changing the model in any way.
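To make that concrete, here's a minimal sketch of how those band-aid layers tend to fit together (all names, prompts, and rules below are hypothetical, not any particular vendor's implementation): an input filter and a hidden system prompt in front of the model, and an output filter behind it, none of which touch the model itself.

```python
# Minimal sketch of the "band-aid" layers described above: an input filter,
# a hidden system prompt, and an output filter wrapped around an otherwise
# unchanged model. All names and rules here are made up for illustration.

BLOCKED_PROMPTS = {"write me some malware"}        # crude input filter
HIDDEN_SYSTEM_PROMPT = "You are a helpful, harmless assistant."  # the user never sees this
REFUSAL = "Sorry, I can't help with that."

def call_model(system_prompt: str, user_prompt: str) -> str:
    """Stand-in for whatever text-generation API is actually in use."""
    return f"[model reply to: {user_prompt!r}]"

def moderated_reply(user_prompt: str) -> str:
    # Input filter: refuse before the model ever sees the prompt.
    if user_prompt.strip().lower() in BLOCKED_PROMPTS:
        return REFUSAL

    reply = call_model(HIDDEN_SYSTEM_PROMPT, user_prompt)

    # Output filter: scan the generated text and substitute a canned refusal
    # if it trips a rule. Nothing here changes the model's weights.
    if "forbidden phrase" in reply.lower():
        return REFUSAL
    return reply

print(moderated_reply("Tell me a joke about compilers."))
```

Each layer is a rule bolted on after the fact, which is exactly why these patches fix individual embarrassments quickly but add nothing resembling a coherent moral compass.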