
Gov Report Reveals First Official Details On How Elon Musk’s X Has Handled Censorship


Jason Cohen, Contributor

Billionaire Elon Musk’s X — formerly Twitter — has drastically cut staff conducting content moderation since he took over in October 2022, according to a report by an Australian safety regulator published on Wednesday.

Musk’s X has reduced its total worldwide trust and safety staff, including employees and contractors, by 30%, from 4,062 to 2,849 as of May 31, according to the report by Australia’s eSafety Commission. The commission legally compelled X to turn over this data and asserts this is the first public disclosure of the company’s exact staffing cuts since Musk’s takeover, according to The Associated Press. (RELATED: ‘Blatant Targeting’: European Law Threatens Americans’ Free Speech Online, Experts Warn)

Moreover, X has reduced its worldwide trust and safety engineers by 80%, from 279 to 55, and its worldwide public policy staff by 78%, from 68 to 15, over the same timeframe, according to the report.

“You are creating a perfect safety storm,” eSafety Commissioner Julie Inman Grant stated, according to the AP. “Advertisers want to advertise on platforms that they feel are safe, that are positive and non-toxic. Users will also vote with their feet when a platform feels unsafe or toxic.”

Several advertisers abandoned Musk’s X shortly after a November report by left-wing activist group Media Matters alleged that ads appeared alongside antisemitic content on the platform, and after a post by Musk that some interpreted as antisemitic. Apple, Comcast, IBM, Lionsgate, Paramount Global, Sony and Warner Bros. Discovery all pulled their advertising from the platform following these alleged antisemitism controversies, according to reports.

X is suing Media Matters, alleging the study was defamatory and criticizing the study’s methodology.

X raised concerns about censorship in response to the eSafety Commission’s questions on its content moderation, according to the report.

“Given that X is inherently a public platform, we are sensitive to the risks that hate speech can pose not just at an individual level but at a societal level. At the same time, we are also aware of the risks of censorship and putting undue and unnecessary restrictions on freedom of expression as we build policies and enforcement protocols to address hate speech,” the company stated.

X also touted its “freedom of speech, not reach” policy, which limits the visibility of certain content without removing it from the platform, according to the report.

“The company made a principled decision to move away from its binary take-down/leave-up enforcement approach and invest in visibility filtering as part of the moderation toolkit,” it stated. “We continue to prohibit posts that target specific individuals with hate, abuse and violence, but adopt a more proportionate remediation for posts or content that does not target specific individuals by restricting the reach of such content.”

X now takes 20% longer to handle user reports about posts and 75% longer to handle reports about direct messages, according to data on median response times in the report.

“eSafety expects Twitter to consistently and transparently enforce its own rules around harmful content on the platform, including in relation to hateful conduct, and to appropriately resource trust and safety functions,” an eSafety spokesperson told the Daily Caller News Foundation.

X did not immediately respond to the DCNF’s request for comment.

All content created by the Daily Caller News Foundation, an independent and nonpartisan newswire service, is available without charge to any legitimate news publisher that can provide a large audience. All republished articles must include our logo, our reporter’s byline and their DCNF affiliation. For any questions about our guidelines or partnering with us, please contact licensing@dailycallernewsfoundation.org.