
Twitter Will Now Label Any Content It Flags as Hateful Speech


Twitter is making a significant change to its approach to content moderation: tweets considered harmful to the community will now be labeled as such, and the number of people they can reach will be limited.

In a blog post published yesterday, the Twitter Safety team revealed the new label, which will initially be applied to content considered hateful speech. The label will indicate that the tweet's visibility has been limited because it violates the platform's policy on the subject.

Twitter’s new approach to content moderation is called “Freedom of Speech, not Freedom of Reach”. The methodology is meant to sidestep the often controversial binary choice of keeping content up versus taking it down.

“These labels bring a new level of transparency to enforcement actions by displaying which policy the Tweet potentially violates to both the Tweet author and other users on Twitter”, the Safety team commented.


In practice, this means that people will still be able to post content that is vicious and harmful to the public, but they will not be able to reach as many users as they could before. This laissez-faire approach may prove controversial, but it takes a heavy burden off Twitter’s shoulders – at least in theory – as it removes the company from the role of “content police”.

The Twitter Safety team emphasized that the labels may not initially affect the status of the account that posted the content. However, an account whose tweets are repeatedly labeled as hateful speech may be banned more quickly.

Moderating Millions of Tweets Per Day is a Costly Endeavor

For the social network now owned by Elon Musk, it is no longer feasible to keep tabs on the millions of tweets published every day, primarily because thousands of outsourced content moderators were fired days after the head of Tesla (TSLA) took over the reins.

In addition, Musk has said on multiple occasions that Twitter has not been transparent in the past and has engaged in shadowbanning practices, including limiting the visibility of posts without disclosing its approach to the public.

Even though the authors of tweets that the system labels will be able to appeal the decision, Twitter does not guarantee that the appeal will be reviewed promptly or that it will receive a response.

The Safety team said it is working to create an adequate procedure for processing these appeals. Again, the issue at hand is the lack of manpower to perform that kind of work.

According to Twitter’s Hateful Conduct Policy, this kind of content refers to direct attacks against individuals or groups based on their race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.

Labels Will Join Community Notes to Make Twitter a Better Place

These labels join other initiatives that Twitter has been promoting lately to address the tension between free speech and adequate content moderation, such as the creation of Community Notes.

With these notes, the Twitter community is in charge of providing any additional context considered relevant to a tweet, with the goal of reducing misinformation and the spread of fake news on the platform.

Musk appears to be leveraging Twitter’s large community to reduce the cost of moderating content, and he may progressively build a team that approaches the task from a different angle.

Moving forward, a content moderation team could focus on making sure that these initiatives are functioning appropriately, rather than Twitter hiring thousands of workers overseas to do the time-consuming job of browsing through millions of posts every day.
