Facebook has announced that it is making algorithm tweaks to curb the spread of “sensationalist and provocative” content in the news feed. Over-the-top posts designed to get a reaction out of people, such as inflammatory partisan commentary, sexual content, and intentionally divisive groups and pages, will not be banned outright, but content deemed needlessly provocative will be deprioritized in the news feed. But could this new Facebook algorithm change impact advertisers?
If you are not using shady practices to get clicks on your ads or promoted content, it does not appear that advertising content will be unintentionally deprioritized. The problem, though, as we have encountered before, is what will be deemed borderline content and how that determination will be made. If you are a healthcare organization that offers pregnancy and childbirth classes, for instance, and you use a picture of a mother nursing her newborn, will that be considered borderline content? Maybe. In 2015, Facebook was embroiled in controversy for taking down pictures of breastfeeding mothers for violating its nudity policy. Now an algorithm will be making the call on what counts as a “borderline” violation of that same policy, so similar issues are inevitable.
To address this, Zuckerberg also announced the development of an independent group to oversee the appeals process for content that users argue was wrongly flagged. The company has not yet decided on the makeup and criteria of this body, but it plans to begin gathering international feedback in the first half of 2019 and to have an established group in place by the end of next year. Hopefully that will happen before the changes to borderline content roll out.
By the end of 2019, Facebook claims its AI should be sophisticated enough to identify “the vast majority” of problematic content, such as fake accounts and instances of self-harm or hate speech. The flood of garbage content, memes, fake news, and inflammatory posts has made our news feeds super annoying at best and a source of division and even violence at worst. While we always have to be wary and vigilant when speech is regulated by a private company, this should end up being a welcome change.
Events over the last few years have made Facebook realize that it is not merely a hapless distributor of content but the main source of information for a great portion of the world. With that comes both great power and great responsibility to edit and curate that content. People may cite the First Amendment when posts they like disappear, and hate groups will bring their base to a boil when their clickbait content is inevitably made invisible, but when it comes down to it, Facebook is a private company, and it has the right, and I would say a duty, to police the content it allows on its site.
Will there be unintended consequences as Facebook slowly becomes the world’s biggest newspaper editor? Absolutely. Will it be a giant mess for advertisers as we try to adapt to an endless stream of algorithm changes and content prioritization rules? You know it, but that’s just part of the business we are in. If nothing else, Facebook always keeps us on our toes.
Get all the details about the latest round of changes in Facebook’s Blueprint for Content Governance and Enforcement.