I think we can move along, before this turns into the EOF.
> Afraid of being promoted?

I was king of EOF long before you came up with your first shitpost, don’t threaten me with a good time when you know you won’t be able to keep up.
> With McCarthy at the helm, they're Noah's Ark without a captain, and the monkeys are flinging shit everywhere.

Oh, this is GOLD. I'm saving this quote somewhere.
> Oh, this is GOLD. I'm saving this quote somewhere.

He should start a blog.
> You’re clearly more interested in doing your own thing instead of sticking to the topic, so I’m just going to ignore your attempts at derailing the thread.

Cop out.
> Cracking down on the spread of misinformation vs. censoring "free speech" are two very different things. Sure, you can *freely* spread fake news, false "covid vaccine kills people" nonsense, etc... but that doesn't exempt you from consequences, like having the misinformation you posted be removed from public view and/or being banned from the platform for such offenses. Especially if it's in the private company's TOS agreement.

I don’t think anyone has a problem with a platform removing content that is not compliant with their TOS or community standards. SMs are private companies; they get to decide what is or isn’t hosted on their servers or who gets to use them. You cross into murky territory when the government starts exerting pressure on them to censor content - that’s what’s not acceptable, and arguably unconstitutional.
Consequences... Funny word, that one. It's something always being overlooked by the right-wing nutjobs when they're flying off the handle.
> I don’t think anyone has a problem with a platform removing content that is not compliant with their TOS or community standards. SMs are private companies; they get to decide what is or isn’t hosted on their servers or who gets to use them. You cross into murky territory when the government starts exerting pressure on them to censor content - that’s what’s not acceptable, and arguably unconstitutional.

Where does hosted/promoted content cross the line into being unconstitutional, though? Seems like stochastic terrorism is probably a good bet, and I can think of at least one platform which fights tooth and nail to allow for it. There has to be some mechanism by which third-party moderation is introduced when the original moderators are just there for decorative purposes.
> I don’t think anyone has a problem with a platform removing content that is not compliant with their TOS or community standards. SMs are private companies; they get to decide what is or isn’t hosted on their servers or who gets to use them. You cross into murky territory when the government starts exerting pressure on them to censor content - that’s what’s not acceptable, and arguably unconstitutional.

Most right-wing nonsense is anti-government, so I can't really blame the government for wanting to step in there.
> There has to be some mechanism by which third-party moderation is introduced when the original moderators are just there for decorative purposes.

Sounds like Discord, 100%.
> Cracking down on the spread of misinformation vs. censoring "free speech" are two very different things. Sure, you can *freely* spread fake news, false "covid vaccine kills people" nonsense, etc... but that doesn't exempt you from consequences, like having the misinformation you posted be removed from public view and/or being banned from the platform for such offenses. Especially if it's in the private company's TOS agreement.

Nobody's said anything about consequences for what you say. They're talking about the government censoring people in a way that simply screams "We found a loophole!". If the sites themselves decide to ban you for what you say, all the power to them. I'm sure a number of people banned on this site were removed because of what they said/did on this site. I have no argument against that whatsoever.
> It's when the government steps in and coerces the site into banning and censoring people that there's an issue. That's not just "consequences for what you say", that's flat-out censorship. That's what people are upset about and what the discussion is about.

The problem is that Republicans are trying to protect criminal activity by conflating it with "free speech." I say social media sites that don't remove such content in a timely manner should be held liable for it, along with the person that posted it. That would be consistent with laws against sites hosting pirated content, and downloading an illegitimate copy of Tears of the Kingdom isn't exactly equivalent to doxxing someone or sending them death threats.
If I hire a hitman to kill somebody for me, that's still me breaking the law and getting charged with murder. If I blackmail somebody to kill for me, that's STILL me breaking the law and getting charged with murder. The end result still gets tracked back to me and I get punished for it. Taking that logic towards this, if the government is pressuring/blackmailing a website into censoring/banning/deplatforming/removing somebody to silence them, that is still the government censoring free speech. The website wouldn't have done so without the government making them do it.
> NewsMax being forced off of cable also helped.

Aren't they back on?
> Aren't they back on?

I don't know, haven't been keeping track.
> The problem is that Republicans are trying to protect criminal activity by conflating it with "free speech." I say social media sites that don't remove such content in a timely manner should be held liable for it, along with the person that posted it. That would be consistent with laws against sites hosting pirated content, and downloading an illegitimate copy of Tears of the Kingdom isn't exactly equivalent to doxxing someone or sending them death threats.

For a site the size of Facebook or Twitter or whatever, I would do things slightly differently.
> I don't trust AI to auto-moderate things, but I would be fine if it tagged posts and sent them for manual review. Posts that are both tagged by the AI and reported by actual users would be bumped up the priority list. (A user report would also direct the AI to double-check that post ASAP as an initial measure, to ensure it makes it onto the moderation list if necessary, but the AI would not take direct action itself.) Anything that is obviously illegal under laws that existed at the time of the original posting would be forwarded to the authorities, as it was already illegal. Anything that is illegal under current laws that did NOT exist at the time of the original posting would be removed and replaced with a notice that explains the reason and points out the specific law that the post violates. (It would not be fair to punish somebody for something that was perfectly fine at the time.)
>
> I would consider something like this enough to count as good-faith moderation and prevent the social media sites from being held liable, under the final condition that some sort of authority performs periodic inspections of the company's moderation policies to ensure they aren't drastically understaffing or underfunding the effort. After all, even with a process like this, the sheer size of these websites means things are very likely to occasionally slip through the cracks and remain unnoticed for a length of time.
>
> This whole thing would be specifically for ILLEGAL content only. Reporting something that does not break a specific law would carry the same penalty as filing a false police report. The government does NOT get to step in and insist that something which does not break any existing laws gets removed. If the site wants to simply moderate and remove anything that goes against their own policies, they're free to do so.

This doesn't fix the problem if the human moderators are also in support of allowing the site to host criminal activity, though. With the exception of Twitter, all the 'mainstream' social media sites do a pretty good job of removing that stuff quickly. It's more the 'Truth Socials' of the world that are consistently and persistently crossing the line.
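For what it's worth, the triage in that proposal (AI tags feed a manual-review queue, user reports bump priority, and the final decision hinges on whether the law existed when the post went up) could be sketched roughly like this. Every name and priority value here is hypothetical, not any platform's actual system:

```python
import heapq
from datetime import date


class ModerationQueue:
    """Sketch: AI-tagged posts wait for manual review; a post that is
    both AI-tagged and user-reported jumps to the front of the line."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal priorities stay FIFO

    def enqueue(self, post_id, ai_tagged=False, user_reported=False):
        # Both signals together -> priority 0 (reviewed first).
        priority = 0 if (ai_tagged and user_reported) else 1
        heapq.heappush(self._heap, (priority, self._counter, post_id))
        self._counter += 1

    def next_for_review(self):
        _, _, post_id = heapq.heappop(self._heap)
        return post_id


def moderator_decision(violates_law, law_effective, posted_on):
    """Illegal when posted -> forward to the authorities; illegal only
    under a newer law -> remove with an explanatory notice; otherwise
    the government has no say and the post stays up."""
    if not violates_law:
        return "no action"
    if posted_on >= law_effective:
        return "forward to authorities"
    return "remove with notice"
```

The `heapq` tuple ordering does the prioritization: the counter field guarantees that two posts with the same priority come out in the order they were reported, which matches the "bumped up the priority list" behavior the post describes.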
> Most right-wing nonsense is anti-government, so I can't really blame the government for wanting to step in there.

Free speech is specifically protected for the purpose of criticising the government (among other reasons), whether rightly or wrongly. You just listed one of the core reasons why it’s protected speech under the First Amendment.
> Where does hosted/promoted content cross the line into being unconstitutional, though? Seems like stochastic terrorism is probably a good bet, and I can think of at least one platform which fights tooth and nail to allow for it. There has to be some mechanism by which third-party moderation is introduced when the original moderators are just there for decorative purposes.

The First Amendment is a limitation placed on the government, not the platform. If it’s not illegal, it is necessarily protected and a moderation decision is up to the platform. I don’t know what’s confusing here. We already know that the government has access to special portals which fast track moderation of posts that contain illegal content - that’s your mechanism.
> This doesn't fix the problem if the human moderators are also in support of allowing the site to host criminal activity, though.

Did you miss the part about the "authority performing periodic inspections" I mentioned? Easy solution.