The discourse on censorship often revolves around Twitter, but recent events have shed light on YouTube’s challenges in handling domestic terror threats. In 2022, a disturbing case involving a YouTuber’s alleged neo-Nazism and threats of racist murders came to the FBI’s attention.

When the agency asked Google for help looking into the threat, it received an unexpected reply: Google said it did not have the resources to assist, raising doubts about its ability to respond quickly and effectively to the FBI’s emergency disclosure request (EDR). The incident exposed possible gaps in how urgent threats to life are handled and drew attention to YouTube’s policies for dealing with potential dangers.

This incident has sparked questions about Google’s role in combating domestic terror and its approach to content moderation.

Google’s Response to the FBI’s YouTube Request

According to Forbes reports, the FBI received an anonymous tip about a YouTuber who had reportedly threatened to commit a racially motivated attack in St. Louis, Missouri. Concerned about the credibility of the threat, the FBI urgently requested information from Google regarding the YouTuber’s livestreams.

However, Google’s legal staff informed the agency that they were overwhelmed with a significant influx of emergency disclosure requests, leaving them unable to respond fully. This left the FBI unable to determine if any violent threats had been made, highlighting Google’s apparent limitations in effectively dealing with domestic terror threats.

Seamus Hughes, a senior research faculty member at the University of Nebraska Omaha’s National Counterterrorism Innovation, Technology, and Education Center, stressed the need for prompt and thorough action.

The specific date provided for a possible attack and concerns about the YouTuber’s alleged associations with racist and Nazi propaganda underscored the urgency.

This incident has led to questions about social media companies’ claims that they prioritize safety on their platforms over profits. It also casts doubt on their ability to effectively police their own sites.

Hughes said:

Social media companies have assured the public, in particular Congress, for years that they are able to police their own sites and have said they made safety on their platforms a priority over profits. The facts laid out in this search warrant questions that assertion.

YouTube’s Partial Review and The FBI’s Suspicions

Although Google could not immediately provide the information the FBI sought, it did conduct a partial review of some archived live streams from the YouTuber. During this review, Google found evidence of the YouTuber performing a Nazi salute and discussing hatred of African Americans, using racial slurs.

However, citing a lack of resources, the Mountain View, California-based company refused to provide the videos to the FBI without a search warrant, stating the need for further review and examination of the content.

The FBI’s investigation into the YouTuber’s account revealed a focus on reviewing and discussing heavy metal music albums, some of which appeared to support white supremacist and neo-Nazi ideologies.

Although the suspect has not faced any charges, the FBI raised concerns about the individual’s potential affiliation with neo-Nazi-inspired music and ideologies as well as the alleged specific threat of an attack.

Despite the alleged racist slurs made during a YouTube livestream, Google has not taken any action to ban the YouTuber. The individual has continued to post updates about their extensive metal collection to a following of over 6,000.

YouTube Is Removing Creators at Scale

YouTube’s terms of service forbid videos that infringe copyright, display pornography, promote racism or illegal activities, depict gratuitous violence, or contain hate speech.

If users upload such content, their videos may be taken down and replaced with a message stating, “This video is no longer available because its content violated YouTube’s Terms of Service.” Moreover, Google reserves the right to terminate any account, with or without notice.

According to Statista, around 5.66 million videos were taken down from YouTube in the fourth quarter of 2022, the vast majority flagged automatically for violating community guidelines. By contrast, only about 318,000 videos were removed through non-automated flagging.

Number of videos removed from YouTube worldwide from 4th quarter 2017 to 4th quarter 2022 | Statista

In the last quarter of 2022, YouTube removed more than six million channels from its video-sharing platform, up from nearly six million the previous quarter.

YouTube says it removes channels that accumulate three Community Guidelines violations or commit a single severe violation of the platform’s guidelines. Unfortunately, YouTube provides little information about why accounts are most often banned; a termination could stem from something as trivial as a baseless series of copyright strikes or as serious as deadly threats.
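YouTube's stated rule can be summarized as a simple policy check. The sketch below is purely illustrative: the names (`Channel`, `should_terminate`) and the structure are hypothetical assumptions for the sake of the example, not a description of YouTube's actual enforcement system, which is not public.

```python
# Illustrative sketch of YouTube's stated termination rule:
# three Community Guidelines strikes, or one severe violation,
# leads to channel removal. All names here are hypothetical.
from dataclasses import dataclass


@dataclass
class Channel:
    name: str
    strikes: int = 0            # active Community Guidelines strikes
    severe_violation: bool = False


def should_terminate(channel: Channel) -> bool:
    """Apply the stated policy: 3 strikes, or 1 severe violation."""
    return channel.severe_violation or channel.strikes >= 3


# Usage: two strikes is not enough; a third strike crosses the line.
ch = Channel("example_channel", strikes=2)
print(should_terminate(ch))   # False
ch.strikes += 1
print(should_terminate(ch))   # True
```

Even this toy version shows why transparency matters: the same boolean outcome covers wildly different cases, from a single credible threat to three contested copyright strikes.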

Number of channels removed from YouTube worldwide from 1st quarter 2019 to 4th quarter 2022 | Statista

Elon Musk’s Acquisition of Twitter and Censorship Allegations

While these YouTube events were unfolding, Twitter remained the most prominent platform in the censorship discourse and in discussions surrounding free speech. Twitter has faced scrutiny and controversy for its decisions to suspend or permanently ban users, including public figures and political leaders, for alleged violations of its policies.

These actions have sparked debates about the balance between free speech and moderation on social media platforms.

Number of Twitter accounts suspended by year | WhizCase

WhizCase’s recent research reveals that approximately 14 million Twitter accounts have been suspended in the last decade. Terrorism, violent extremism, and hateful conduct were among the reasons the users’ accounts were suspended.

Twitter account suspensions by reason | WhizCase

Twitter’s policies on content moderation and the enforcement of its rules have been the subject of intense public scrutiny, with some users and commentators expressing concerns about potential bias and inconsistency in decision-making.

Some critics argue that the platform’s approach to censorship may have implications for political discourse and the free exchange of ideas.

When Elon Musk bought the platform in 2022, his stated goal was to promote free speech and reduce political bias. His tenure as Twitter’s owner has nonetheless seen its own free-speech controversies, including the removal of tweets criticizing India’s Prime Minister Narendra Modi at the request of India’s information ministry.

Twitter has also suspended accounts that track private jets owned by billionaires, including Musk’s, and leaked internal emails known as the “Twitter Files” showed former executives discussing controversial content moderation with government officials. By contrast, we have few insights into why most YouTube channels are banned, making it difficult to tell whether YouTube is also censoring certain political speech. Certainly, fewer creators on the platform have complained publicly about seemingly targeted bans than on Twitter, but it may still be happening to a lesser degree.

YouTube and Twitter’s censorship incidents highlight the importance of social media companies prioritizing safety, transparency, and responsible content moderation. Upholding free speech while addressing extremist content and potential threats becomes crucial.
