Imagine watching the Super Bowl, the biggest day in US sports, and finding out that the loud online crowd might actually be more of a silent ghost town.

A report from cybersecurity firm CHEQ has revealed a startling claim about X (previously known as Twitter): during this key event, an estimated 76% of its traffic may have come from bots rather than real people. The finding not only puts a question mark over the vibrancy of X’s online community but also has implications for the wider worlds of online engagement and digital advertising. If it is accurate, advertisers on X may be unknowingly paying to reach an audience made up mostly of bots.

As we dig into this issue, we’re faced with the reality of a platform struggling with authenticity. Let’s explore how bots on X affect monetization, trust, and genuine connection in our digital era.

The Bot Surge on X

According to CHEQ’s report, as much as 76% of X’s traffic during the Super Bowl may not have come from real people.

The Super Bowl, known for breaking viewership records, is usually a great chance for platforms like X to show off how much people are engaging with their content.

However, finding out that such a large amount of this engagement could be fake brings up serious concerns about how trustworthy online interactions are and whether digital advertising metrics can be relied on.

The Report

CHEQ’s report is based on 144,000 website visits from X during the Super Bowl weekend, taken from a pool of 15,000 clients. This data, although not comprehensive or scientifically sampled, shows a significant trend of fake traffic. CHEQ works to reduce online ad fraud by tracking how users, including bots pretending to be real users, interact with websites.

To understand how big this problem is, let’s look at overall internet traffic, which is usually a mix of real people and bots (automated programs). In 2023, the cybersecurity company Imperva reported that bots were behind roughly 48% of all internet traffic, with harmful bots alone accounting for about 30%.

Bad bot, good bot, and human traffic on the internet in 2022

When you compare this with the 76% of potentially fake traffic on X during the Super Bowl, it’s clear the problem there is far worse than average, casting doubt on how reliable and authentic engagement on digital platforms really is.

This high level of possibly fake engagement doesn’t just change how we see the popularity and importance of content; it also has big implications for advertisers. These advertisers depend on engagement metrics to understand how well their ads are doing and whether they are worth the cost.

For example, business owner Gene Marks spent $50 on X ads expecting traffic. X reported 350 clicks from 29,000 views, but Google Analytics showed that none of his site’s visits came from X.

Implications for Monetization Strategies

The discovery that much of X’s (previously Twitter) Super Bowl traffic might not have been real raises serious concerns for both its advertising business and its content creators. A surge in fake traffic shakes advertisers’ trust that X can deliver real engagement and a worthwhile return on investment. When it’s unclear whether interactions are genuine, advertisers may think twice before spending money on a platform possibly dominated by bots.

Furthermore, if X knows that such a large portion of its traffic is generated by bots (assuming this is true) and doesn’t inform advertisers, it could constitute false advertising or fraud, depending on the contract.

For those creating content, the situation is just as worrying. X promotes monetization options like X Premium, suggesting creators can earn significant income based on how much engagement their content receives. But if much of that engagement isn’t from real people, both the true popularity of the content and its earning potential become questionable.

This creates doubt, especially with high-profile examples like popular YouTuber MrBeast, who earned sums from X that might not be achievable for others. X appears to show MrBeast’s videos in users’ feeds without making clear that they are ads, blurring the line between genuine content and paid promotion (and, if true, potentially violating FTC disclosure rules). This approach sets a risky precedent, implying that visibility and earnings on the platform mainly benefit those who are already famous or involved in undisclosed paid deals.

MrBeast’s trial of posting videos directly to X was meant to test how well one can earn there. A single video that had already been seen by tens of millions of people on YouTube made him $250,000 in ad revenue from X in just a week. Yet the unclear and possibly manipulated engagement data makes it questionable whether X can be a steady source of income for creators. This is particularly tough for smaller creators who depend on real engagement to support themselves.

For example, another much smaller YouTuber named “JerryRigEverything” complained that he made a measly $187.66 even though he racked up 17.6 million impressions. He noted that this kind of viewership would be worth $17,000-$30,000 on YouTube.

JerryRigEverything reached more than a tenth of the number of impressions that MrBeast had on that video that week. Had he been paid at the same rate per impression as MrBeast, he would have earned over $25,000. Naturally, the two are hard to compare, and no social media platform pays every creator the exact same rate. Even so, it is striking that JerryRigEverything’s per-impression payout was more than 133 times lower.
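For a rough sense of the arithmetic behind that comparison, here is a back-of-envelope sketch. Note that MrBeast’s X impression count was not published; the ~176 million figure below is an illustrative assumption inferred from the "more than a tenth" comparison, not a reported number.

```python
# Back-of-envelope comparison of per-impression payouts on X.
# Reported figures from the article:
jerry_revenue = 187.66       # USD paid to JerryRigEverything
jerry_impressions = 17.6e6   # impressions on his video

mrbeast_revenue = 250_000    # USD MrBeast reportedly earned in a week
# ASSUMPTION (not a reported figure): illustrative impression count,
# chosen so JerryRigEverything's 17.6M is roughly a tenth of it.
mrbeast_impressions = 176e6

jerry_rate = jerry_revenue / jerry_impressions
mrbeast_rate = mrbeast_revenue / mrbeast_impressions

print(f"JerryRigEverything: ${jerry_rate * 1000:.4f} per 1,000 impressions")
print(f"MrBeast (assumed):  ${mrbeast_rate * 1000:.4f} per 1,000 impressions")
print(f"Rate ratio: ~{mrbeast_rate / jerry_rate:.0f}x")
print(f"JerryRig at MrBeast's rate: ~${jerry_impressions * mrbeast_rate:,.0f}")
```

Under that assumed impression count, the ratio works out to roughly 133x and JerryRigEverything’s hypothetical payout to about $25,000, matching the figures above.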

Behind the Bot Problem

The increase in bot activity on X, especially during big events like the Super Bowl, may be linked to changes made since Elon Musk took over the platform. Musk’s ownership brought ambitious plans to improve X, but the period has also seen more bots, raising questions about how these changes affect the platform’s reliability.

A key reason for the rise in bots is the deep cuts to staff, particularly on the Trust and Safety team. This team plays a vital role in combating fake activity, but it was significantly downsized, with reports indicating up to 80% of its engineers and half of its content moderators were let go. With fewer people to spot and stop them, bots, fake accounts, and automated scripts have become more common on X.

Musk aimed to reduce costs and make operations smoother with these layoffs. However, this move has had the opposite effect, lowering the quality and trustworthiness of interactions on X. With a smaller team to tackle fake accounts and spam, X has seen an increase in bots, which can mislead advertisers and spoil the user experience by filling genuine discussions with fake interactions.

The Bottom Line

The bottom line is a clear call to action for everyone involved. X, and any other social media platform facing similar problems, needs to strengthen security, rebuild and support its content-moderation teams, and invest in better bot-detection technology.

Advertisers and content creators, for their part, should demand clearer reporting and better tools for distinguishing real engagement from bot activity, so they can be sure they are treated fairly. The digital ecosystem is at a critical point where everyone needs to work together to keep social media authentic, reliable, and open to all its users. Trust in online spaces is crucial not just for the reputation of platforms like X but also for the foundation of how we communicate and do business online today.