In mid-January, a lot of us in the SEO world did a double take as we started scrolling through rankings for clients and major websites. Google had stirred the pot, and rankings had suddenly gone all over the place. It didn’t seem to be the Penguin or Panda update we’d all been waiting for; instead, Google soon confirmed (which they rarely do) that they had released a core algorithm update.

Early reports on Forbes, SearchEngineLand, and Moz agree that it happened, but exactly what it did still isn’t certain. Reading a variety of perspectives as SEO professionals look more closely at who’s ranking and who isn’t gives us a pretty good picture of what’s happening, though.

It’s About Content

Some big names took a dive in the rankings, especially old print publications like the Economist, the New Yorker, and the Atlantic. But Google doesn’t really have it in for traditional media: other big publishers suffered no real penalty, and some, like Time.com, actually went up.

Some SEOs suggest that many of those older, massive websites were hit because of their large amount of aging content, which seems to have lost much of its staying power in the search rankings. Evergreen content, as it were, might be wilting.

Marcus Tober, CEO of SearchMetrics, asserts that it’s part of a move by Google to focus more heavily on user intent, and to move away from focusing on the quality of content in isolation.

To support this, he shows how some small and seemingly insignificant pages have skyrocketed in rank purely because of how effectively they answer very specific questions.

But What Does That Mean For Us?

This could have serious consequences for how we do SEO in the future. We may find as we go that “utility is king” should be our new motto, rather than the old “content is king”.

If relatively insignificant, lightly linked websites can rank purely because they’re awesome, we might finally be approaching the fabled and much-scoffed-at meritocratic search ranking system — and wouldn’t that be a delight? Of course, it’s incredibly unlikely that Google has devised an ingenious formula that can determine the true value of every webpage on the internet, but this could be the first step in a bigger trend.

We’ll keep an eye on it — let us know what you’ve seen from this update in the comments!

This post originally appeared on the Netrostar Blog