#15 – Great Content Can’t DESERVE To Rank

David Brent Meme

“Produce great content”

That’s what we need to do, right? Content Marketing is the new black, and we need to be bloody good at it if we want our sites to rank.

Sean recently proposed that some SEO folk are waiting with bated breath and crossed fingers for Google's next update to reward quality content that deserves to rank. I sincerely hope that people don't actually believe such garbage, and I'd like to dig into why it could never actually happen.

For the purpose of this discussion we’ll consider only the content medium of copy, since that is the most basic form of content, and most easily digestible by our spidery friends.

How can Google go about detecting quality content?

It could run through various checks and comparisons:

  • Is this copy identical to another page?
  • Are there chunks of copy that are largely similar to other places online?
  • How is the text positioned on the page?
  • Does this copy occupy a central theme or does it contain semantic oddities?
  • Is this copy sound in terms of grammar, spelling and syntax?
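The first two checks on that list are well-understood mechanics. As a hedged illustration (not Google's actual implementation — the sample texts and any threshold are invented), near-duplicate copy is classically spotted with word shingling and Jaccard similarity:

```python
# Illustrative sketch: flag near-duplicate copy via word shingles and
# Jaccard set similarity. Not Google's real pipeline.

def shingles(text, k=3):
    """Return the set of k-word shingles in a piece of copy."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets: 0 = disjoint, 1 = identical."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

page_a = "the quick brown fox jumps over the lazy dog"
page_b = "the quick brown fox leaps over the lazy dog"

sim = jaccard(shingles(page_a), shingles(page_b))
# A similarity above some tuned threshold would mark the pages as
# "largely similar" in the sense of the second bullet above.
print(f"similarity: {sim:.2f}")
```

A detector like this answers the mechanical questions on the list; the later bullets (central theme, semantic oddities) are far harder, which is rather the point of this post.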

Hmmm, so Google knows how to tell if content is duplicate, poorly structured or incoherent. Of course, we know this – it’s exactly what Panda has been hammering sites for over the last couple of years. Google attempts to surface better quality sites simply by removing the lower quality ones.

But can Google look at two unique blog posts written on the same subject and determine which is of the higher quality? I sincerely doubt it.

A more pertinent question might be: ‘can humans do it?’

The literary world

The best-selling books of all time, according to Wikipedia, include among their top 10 authors such as Tolkien, Dickens and Lewis. A fair shout, many would argue. Yet nestling in at the number 9 spot is none other than The Da Vinci Code, by Dan Brown. Those familiar with Mr Brown's work will know that his use of language is… clumsy, at best. His writing is painful to read, and it has not gone without criticism, as evidenced by this collection of his 'worst sentences'.


Other works that have achieved massive commercial success yet are notoriously badly written include 50 Shades of Grey and the Harry Potter series. I must admit to enjoying the travails of Master Potter, although the prose can’t half make you wince at times.

50 shades of shit

50 Shades of Shit – a Tumblr ‘fan’ blog

The thing is, having discussed my distaste for such poorly written works with quite a few people over the years, some people simply don't notice that the writing is bad. Others don't give a shit. Well-educated, highly intelligent people, no less. I know this makes me sound pompous and arrogant, but hopefully it gets my point across – everybody has a different definition of quality.

dwight schrute gay meme

Google MUST rely on external factors

Content exists to communicate meaning, and communication is an exchange of information. Content is only effective if it successfully conveys its meaning to its audience. But the audience is not a robot.

Since every person has fundamentally different experiences, opinions and ideologies, it follows that they will disagree on 'what is good'.

The notion that content can, in itself, deserve to rank is flawed.

If people can’t agree on content quality, then Google certainly can’t make that call algorithmically. And we’ve only considered text! Imagine Google trying to grade images qualitatively, never mind animation, music or videos. This is art not science.

Google must rely on good old external factors, such as links, social shares and author associations. And at the end of the day, Google doesn’t give a shit that a Dan Brown book reads like a soap opera on steroids – if their users want it then Google will give it to them.


  1. Reply
    Richard Fergie March 25, 2013

    Are you trying to say great content is not sufficient to rank or that it is not necessary (or neither/both)?

  2. Reply
    Patrick March 25, 2013

    Hi Richard,
    Yeah, I’m saying that ‘great’ content – alone – is not sufficient to rank. I’m saying that Google doesn’t take ‘greatness’ into is algorithm as a ranking factor. Of course, if content is targeted well, can communicate its message effectively and is well seeded (socially), it can gain both incoming links and social traction – these are the factors that would help it to rank, not the content itself.


  3. Reply
    Dustin Verburg March 25, 2013


    You make some good points. Especially that since Google’s mission is to be “the best search engine for users” (regardless of how true that might be sometimes)– if users want 50 Shades of Rainbow Dash and it’s written at a 6th grade level, that is what will be delivered. As long as there’s a demand for Pony Bondage fiction and it’s written in something that resembles “a language” I think it fits a web spider’s concept of ‘good content.’

    Now I have a terrible image in my mind and it’s your fault 🙁

    • Reply
      Patrick March 25, 2013

      Thanks for stopping by Dustin, sorry to hear you managed to put a terrible image into your own head.

      Your comment makes me think about censorship (more the Pony Bondage than Rainbow Dash). Specifically – if enough people ‘want’ to see child pornography, should Google serve it to them? If not, where do you draw the line?

      Note: neither I, Sean nor Anthony condones child pornography. It is vile and despicable.

      • Reply
        Dustin Verburg March 25, 2013

        It is vile, exploitative, immoral and illegal. I think that’s where the line is drawn. People want illegal things all the time, and some laws are unjust… but that particular law is not unjust. I don’t know what Google’s official stance there is, but I assume they’re on the up and up about it.

        But it’s an interesting point, too… where is the line? I know where it is for me, but my line with ‘good content’ is different than someone else’s line.

        The internet is a dark fucking place, Patrick.

        • Reply
          Patrick March 25, 2013

          Certainly is. I was amazed to recently find out that sports betting is illegal in most US states. It is massive in the UK! And what the heck happened with online poker?? That’s how you get 22 year old ‘Northern Europeans’ (as Phil dickface Hellmuth would put it) coming over and winning your ‘World Series’.

          Would be interesting to know if Google have any plans regarding censorship. It is really difficult to even start. Imagine you banned child porn sites for instance. Then you’d end up with directories of child porn sites, so they get banned. Then you get directories of the directories… could you realistically ever actually stop it??

          • Reply
            Dustin Verburg March 25, 2013

            That’s a good point. I don’t think you can ever totally stop it, since those rabbit holes go down so deep. ‘Not Directly Listed on Google’ doesn’t mean ‘doesn’t exist,’ I would imagine.

            As for sports betting– it happens on a pretty large scale, even when it’s illegal. That’s one of those dumb laws that no one cares about. Or shouldn’t care about.

            Online poker is something I know nothing about, other than my housemate plays a Texas Hold ‘Em app on his phone for 15 minutes a day. I am terrible at gambling.

      • Reply
        Mike April 10, 2013

        You know, there have been some interesting research projects at various universities in the last 5 years or so. Some of these projects were allowed to use parts of Google's data samples (text and images). These include:
        - algorithms for novelty detection in texts and videos (assigning a rank accordingly);
        - algorithms for opinion/intent expression mining based on key-phrase extraction (also see the European SentiWord project);
        - text mining for automatic image tagging;
        - co-occurrence analysis based on a predefined set of documents;
        - novelty or anomaly detection in texts and social media.

        What I’m trying to say is, I think Google is already on its way to learn to detect “greatness” indirectly. If we observe what they are doing, and what projects they participate in, we can speculate what they’re expecting to achieve: detect not only relevancy (on the hype right now) but also intent, sentiment, opinion, context and novelty of our content, and correctly align it with user generated queries (not keywords). Then they only need to learn to become better and better at detect, interpret and follow author’s and social media signals, to attribute the final “greatness” factor.

        Just as Matt Cutts said last month at SMX West: “Processing the social data won’t be the limitation. The challenges and limitations come down to noise or intent.”

  4. Reply
    IrishWonder March 25, 2013

    Great is such a subjective judgement… One man's great is another man's ugly. How do you factor that into the algo? This whole "quality content" talk sounds a lot like sheer PR on G's part, nothing more.

    • Reply
      Patrick March 25, 2013

      I am surprised to hear this from you, I’m sure I saw you driving the Content Marketing bandwagon?

  5. Reply
    Jared March 25, 2013

    Awesome shit dude. And I love the intercontinental TV references. Tweeted.

    • Reply
      Patrick March 25, 2013

      That quote from Dwight makes me laugh every time I read it. Ha! Thanks for appreciating the images. I got told off by Sean and Anthony for not including enough of them, so I tried my best.

  6. Reply
    John-Henry Scherck March 25, 2013

    Hi Patrick,

    Although I feel like you are going for more of a text-based content approach in this post, I have taken to trying to create technically "great" online experiences. A lot of the time we think of "great" content as something that is cutting edge and meaningful, when really it can be some dynamically generated content that triggers QDF and some rich snippet stars – like reviews. It might not be "great" the way we humans see it, but the robots seem to love it 😉

    • Reply
      Patrick March 25, 2013

      I like your take on it JH. QDF (Query Deserves Freshness) is a great example of how Google can identify 'great' content – by identifying the topic of a piece and aligning it with search trends data, they can serve more recent posts under the assumption that they are the most relevant.

      I certainly agree that we should be striving for great experiences over great content, as experience fundamentally concerns the user (which obviously still makes it subjective). Great user experience that the robots also love = large smiley faced win.

  7. Reply
    David Leonhardt March 26, 2013

    I agree 100 percent with the premise, but I will take it in a slightly different direction. The examples of poor writing that you provide are actually examples of poor use of the English language. At least theoretically, Google should be able to notice these.

    For example, that "remind" is transitive kind of smacks the reader in the face, but an algorithm could also pick up that transgression. However, it is not the quality of language use that is most important to Google. As long as it reads fairly well (above a certain threshold of acceptability), Google has no interest in how well the language is used.

    At that point, Google is more interested in the quality of the information. “Great content” might say that for a certain skin condition you need to use hand cream four to five times per day. But “great content” might also say that one should never use regular hand cream for that condition, but always use Nivea. But which conflicting piece of “great content” will be the authority according to Google? Will Google base it primarily on how well the article is written? This is a big problem, with a solution that is still a long way from maturity.

    • Reply
      Patrick March 26, 2013

      Thanks for your input David. I don't agree that "remind" would smack the reader in the face – for some readers it would, but many wouldn't notice it, or wouldn't be bothered by it, and would simply enjoy the narrative.

      I can’t see how Google can even attempt to make any qualitative judgement of the information without external factors.

  8. Reply
    Mike Essex March 26, 2013

    Interesting points Patrick, although good grammar doesn't necessarily make something "great", as that's really based on the reader's perception. There are so many pitfalls with grammar and differing usages that it wouldn't really be fair to the average reader for search engines to penalise sites on good grammar vs crap grammar.

    However, I do agree with the core focus of the post – that "great content" won't rank without anything to push it forward (links, social signals, a high AuthorRank etc, as you suggest) – so I do support your argument.

    Should I be worried that you are reading my book? 😉

    • Reply
      Patrick March 26, 2013

      Thanks for stopping by Mike. I'm sure I didn't say anywhere that good grammar made something great, did I?

      I only mentioned grammar as a suggestion that they might be able to identify particularly bad grammar as a signal that a site may be low quality.

      I don’t think you have anything to worry about, though I’m still only a couple of chapters in (have to wrestle the Nexus 7 off the Mrs!)

  9. Reply
    Rich Brooks April 2, 2013

    Hi Patrick

    Stumbled across your blog today and must say I am very entertained indeed. Keep up the good work, Sir!

    Certainly not wanting to nip at your enthusiasm, but I thought I'd evolve the thought a bit further for the purposes of any newbies out there. I don't like the thought of us suggesting that Google is pretty dumb in this regard… they aren't – they are merely agnostic. They can very easily tell a piece of viral content from an otherwise identical one of little merit.

    Google would love to understand content and appraise quality (but it would like to understand the ‘quality’ of the authors even more). It has no way of doing this, directly.

    Rather, it seeks to develop proxies for ‘quality’ factors – and these can be highly effective, but alas are not impervious to spam and manipulation at the hands of us SEOs.

    Two commonly cited methods that Google uses to infer quality are:
    1. Bounce rate and dwell time – or, more accurately, "return to SERPs velocity", as I like to call it. Simply put, you aren't going to keep reading crap, are you? You will dwell more on sites that provide good content and possibly get lost forever, never to return to big G's SERPs again. (Don't put GA on your site if your engagement is crap, btw.)
    2. Social factors. Google doesn't need to be able to appraise content qualitatively if a large sample of humans is doing it for them. If only the world weren't full of idiots logged into Facebook – and social referencing can be very spammy in reality… Understand how this is open to abuse and exploit it. Note recent efforts by Facebook to weed out false accounts.
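Rich's first proxy can be sketched in a few lines. This is speculative, with made-up click-log data and an invented threshold – not anything Google has published:

```python
# Speculative sketch of a "return to SERPs velocity" signal: from a
# click log, count how often users bounce straight back to the results.
from datetime import datetime, timedelta

# (clicked_result_at, returned_to_serp_at); None = never came back,
# arguably the best outcome for the clicked page.
sessions = [
    (datetime(2013, 3, 25, 10, 0, 0), datetime(2013, 3, 25, 10, 0, 8)),
    (datetime(2013, 3, 25, 11, 0, 0), datetime(2013, 3, 25, 11, 4, 30)),
    (datetime(2013, 3, 25, 12, 0, 0), None),
]

QUICK_RETURN = timedelta(seconds=30)  # invented threshold
quick_returns = sum(
    1 for clicked, returned in sessions
    if returned is not None and returned - clicked < QUICK_RETURN
)
rate = quick_returns / len(sessions)
print(f"quick-return rate: {rate:.2f}")  # high rate = users didn't stick around
```

The point of the sketch is that the signal needs only SERP-side click timestamps, which is why it sidesteps the question of whether GA data is used at all.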

    That was it really… there are lots of ways that Google can (very accurately) infer quality, algorithmically and at practical scale – those are my two faves.

    As Patrick says though, don't get hung up on why your great content isn't doing better and then sit there paralysed… while you're sat there dumbstruck, your competition has just released 5 (very average) pieces, and they rank ahead of yours.

    Good luck!


    • Reply
      Patrick April 2, 2013

      Hi Rich,
      Thanks very much for your positive comments – good to have you stop by!

      I agree that Google can of course figure out that content is considered to be high quality (but that equally they can often be tricked into thinking this).

      Although I was of the understanding that Google does not use metrics such as dwell time or bounce rate as ranking factors – and in particular I was sure that they don't use GA data. But one can't read everything, so perhaps I am behind the times?


      • Reply
        Rich Brooks April 3, 2013

        My view on that is you can trust Google to be untrustworthy in this regard. Rest assured they use ALL the data that they have access to – why wouldn’t you?

        I’d tell you that I don’t use GA data to determine your rankings if it would get you to use the tool, and give me insight on where your rankings perhaps actually should be. There’s no point of law here.

        Much more worrying is their further monopolisation of query data through secure search (https). I believe there's a case for anti-trust action here… perhaps not so much owing to their dominance in data, but owing to the fact that it causes a dead-weight welfare loss. It's quite a bit harder for me to provide relevant content to a user if I don't know what their original intent was (via the query term).

        Not even Google wins, they are simply spiting online marketers.

        I don’t think the competition commission will take too kindly to this in the long-run. I’m writing an article on this at the mo for my agency’s newsletter – shout me if you want on the mailing list.

        Keep up the great work Patrick.



        • Reply
          Patrick April 3, 2013

          I’ve heard of people incorrectly putting GA code on their site so that it fires twice – and mistakenly thinking they had a really low bounce rate. I’m sure SEOs would find similar ways to game the system if GA data was being considered for rankings – so my personal view is that they can’t be using it.

          But I certainly agree with you on the decreasing transparency on query terms – it doesn’t help anyone and there seem to be zero benefits.

  10. Reply
    David Eyterkourjerbs April 3, 2013

    Google can definitely understand depth. They're very decent at understanding and identifying related topics covered within a single document, as well as the related topics that aren't.

    At the query level it understands user intent by how people click on results or what they type next when they don’t get the result they need. Google definitely learns.

    Semantic analysis, whatever flavour they use, should let them learn/understand connections between topics, build ontologies or whatever the terminology is.

    A blog on cheese mentioning 2-3 related topics could be said to lack depth compared to a blog mentioning 4-5 related topics. A blog that’s highly linked to or with a lot of positive social signals could be treated as a benchmark or “training document” for similar topics.

    I doubt this is used as a ranking factor that’s dialled all the way up to 11 because it’s a big step for them. But Google needs to move away from links. I’m taking the questions at http://googlewebmastercentral.blogspot.co.uk/2011/05/more-guidance-on-building-high-quality.html at face value.

    Partially because it means we don’t have to compete with $5 Filipino copywriters and content generators on price.
