Readability Tests: Magic Formula Or Recipe For Disaster?

[Illustration: A robot reads a long script and gives the thumbs up, saying: 83.6% Readable.]
Summary:

Automated readability tests give an indication of whether your text is easy to understand. But just how useful are they? Content strategist Matt Fenwick explores the pros and cons, and reveals how he uses them in his business.

Readability tests promise so much. Just take a sample of your text, go to a free online tool, paste the text in, and out comes a number showing how easy your text is to understand.

If only it were that simple.

Readability tests have copped a lot of flak. Critics say that they’re far too simplistic to accurately predict how an actual reader will respond to your text.

I completely agree. And I use readability tests all the time.

I’m going to run through the limitations of these tests. Then I’ll show how, if you keep these limitations in mind, the tests can be immensely useful.

What are readability tests?

Readability tests all share the same basic approach. They count the syllables, words and sentences in your text, then compute ratios between these counts. The formula generates a number, which you compare against a standard to determine how readable your text is.

I’ll go through one test to show you how they work.

The Gunning Fog Index aims to show how many years of formal education a reader would need to understand a text. The Index takes the average number of words per sentence, adds the percentage of complex words (words with three or more syllables), then multiplies the result by 0.4.

If the number is 12, a reader would need 12 years of formal education to understand the text: roughly a high-school graduate.
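To make the arithmetic concrete, here’s a minimal sketch of the calculation in Python. The vowel-group syllable counter is a rough heuristic I’m assuming for illustration; real readability tools use pronunciation dictionaries or more careful rules.

```python
import re

def count_syllables(word):
    """Rough heuristic: count runs of consecutive vowels.
    Real tools use pronunciation dictionaries or smarter rules."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text):
    """Gunning Fog Index: 0.4 * (average words per sentence
    + percentage of words with three or more syllables)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    avg_sentence_length = len(words) / len(sentences)
    percent_complex = 100 * len(complex_words) / len(words)
    return 0.4 * (avg_sentence_length + percent_complex)

print(round(gunning_fog("The cat sat on the mat. It purred."), 1))  # 1.6
```

Short, simple sentences with no three-syllable words score near the bottom of the scale, which is exactly what the formula rewards.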

There are many other tests out there. The Flesch-Kincaid Grade Level is also popular, partly because it’s built into Microsoft Word.
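For reference, the Flesch-Kincaid Grade Level follows the same recipe with different weights: 0.39 × (words ÷ sentences) + 11.8 × (syllables ÷ words) − 15.59, with the result again read as years of schooling.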

Criticisms of readability tests

As the old saying goes, if something seems too good to be true, then it probably is. Critics say that you can’t use maths to predict how easy a text is to understand. A common criticism is that the number of syllables doesn’t always predict readability.

Here are two examples.

  1. “Do you accede?” v “Do you agree?”

    Both words have the same number of syllables. But because we use ‘agree’ far more often, the second sentence is easier to understand.

  2. “I stared at the television” v “I stared at the quincunx.”

    Here, ‘television’ actually has more syllables, and so, according to most readability formulas, would make a text harder to understand. But ‘television’ is understood by everyone over the age of two, while few people ever need a Latin word for an arrangement of five objects (file ‘quincunx’ under ‘may be useful someday’).
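
You can see this with the naive syllable counter from the Fog sketch above: the formula flags the everyday word as complex and waves the obscure one through.

```python
import re

def count_syllables(word):
    # Same rough vowel-group heuristic as the Fog sketch above.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

print(count_syllables("television"))  # 4 syllables -> counted as "complex"
print(count_syllables("quincunx"))    # 2 syllables -> counted as "simple"
```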


A further criticism is that readability tests don’t tell you whether a sentence makes sense. Take this example:

“This tree is jam.”

The words are simple, so the sentence would score well on a readability test. But it makes no sense whatsoever.

Why I still love readability tests

I use readability tests when I want a quick indication of how readable a chunk of text is, or when a numerical measure will appeal to stakeholders. Even taking the criticisms above on board, readability tests have some predictive value: if a test flags content as hard to read, it usually is.

There’s a reason convoluted writing is so ingrained: professionals are used to writing this way. Simply hearing a writer’s opinion is often not enough to convince them to alter the habits of years, decades even. And it’s hard to have strategic conversations with senior executives about the state of communication in their organisation by combing through each sentence in a document.

That’s where readability tests come in. Because they generate numbers, you can aggregate data. For example, when I tested a 2,000-page website recently, I could say: “Your target audience will find it difficult to understand 80% of your pages.” This then feeds into decisions about how much work is needed to bring the content up to scratch.
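Here’s a rough sketch of how that aggregation might work, assuming a hypothetical `pages` mapping of URL to extracted page text and an assumed target reading level of grade 9; the scoring comes from the open-source textstat package.

```python
import textstat  # pip install textstat

# Hypothetical data: page URL -> extracted body text.
pages = {
    "/about": "We leverage synergistic paradigms to optimise outcomes.",
    "/contact": "Call us. We are happy to help.",
}

TARGET_GRADE = 9  # assumed target reading level for the audience

scores = {url: textstat.gunning_fog(text) for url, text in pages.items()}
too_hard = [url for url, grade in scores.items() if grade > TARGET_GRADE]

print(f"{100 * len(too_hard) / len(pages):.0f}% of pages "
      f"exceed grade {TARGET_GRADE} ({len(too_hard)} of {len(pages)})")
```

The single percentage that falls out of a loop like this is what feeds the stakeholder conversation.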

I’ve also used these tests with small-business owners as a starting point for conversations about how they can make their web content clearer and more concrete.

I would only ever use readability tests as a diagnostic tool for content as a whole. They aren’t sophisticated enough for sentence-by-sentence analysis, or to serve as a checkpoint when clearing a document.

The gold standard will always be testing the content with actual users. But if you need a rough picture, then readability tests are a useful tool for your kit.

Written by Matt Fenwick