X continues to suck at moderating hate speech, according to a new report::A new report from the Center for Countering Digital Hate (CCDH) suggests X is failing to remove posts that violate its own community rules regarding misinformation and hate speech.
Of course it does. It’s literally owned and run by a fascist. I really wish people would stop expecting Twitter to run and behave like it used to. It will not and it never will again.
Was it good before? Never used it, not my style tbh.
It was always a pile of garbage, but now it’s a toxic cesspit.
It’s not a problem when it’s a feature.
I’m annoyed they’ve dropped the ‘formerly known as Twitter’. Not while x.com redirects to twitter.com and not the other way around!
Can we be sure they’re trying to moderate hate speech? I don’t think they are.
Isn’t hate speech what they’re going for?
They’re run by literal fascists now, presumably they want it to be a platform for their kind?
It’s a feature not a bug!
Time for the EU to pull out the big FINE-HAMMER.
X is just Truth Social now.
By design.
Would that be this Center for Countering Digital Hate? https://open.substack.com/pub/taibbi/p/the-uk-files-a-history-of-the-center?utm_source=share&utm_medium=android&r=71rh9
I heard Matt Taibbi was under investigation by the IRS; you can’t trust him.
He’s only a “so-called journalist”. So probably not.
X continues to suck.
Can we just get updates like this when something changes?
Can you really say they suck at it when they’re not even trying?
Don’t use it if you don’t like it. What’s with the obsession with harsh moderation over every facet of the internet? Is that really the future people want?
Is that really the future people want?
Maybe?
Anecdotally, I feel like I’m seeing more people pre-emptively censoring their own swears on the fucking internet. Also there’s a weird and bad puritan sentiment towards all things sex-related that feels like it popped up out of nowhere in the last few years.
This is the best summary I could come up with:
According to the CCDH, the reported posts, which largely promoted bigotry and incited violence against Muslims, Palestinians, and Jewish people, were collected from 101 separate X accounts.
Just one account was suspended over their actions, and the posts that remained live accrued a combined 24,043,693 views at the time the report was published.
X filed a lawsuit against the CCDH in July of this year over claims the organization “unlawfully” scraped X data to create “flawed” studies about the platform.
In a statement to The Verge, X’s head of business operations, Joe Benarroch, said that the company was made aware of the CCDH’s report yesterday and directed X users to read a new blog post detailing the “proactive measures” it has taken to maintain the safety of the platform during the ongoing Israel-Hamas war. Those measures include removing 3,000 accounts tied to violent entities in the region and taking action against over 325,000 pieces of content that violate its terms of service.
X claims that by choosing to only measure account suspensions, the CCDH has not accurately represented its moderation efforts and urged the organization to “engage with X first” to ensure the safety of the X community.
After publication, Benarroch questioned the methodology of the CCDH’s study and claimed the organization only considers a post “actioned” after the account has been suspended.
The original article contains 476 words, the summary contains 224 words. Saved 53%. I’m a bot and I’m open source!