Benchmarking: I know with complete certainty it’s a pile of poo, but I’ve never actually seen any done. I’m not sure it ever actually happens. Does it exist? Is it like coachloads of tourists going to Loch Ness to see if they can spot the Loch Ness monster? It doesn’t exist, they’re on a fool’s errand and convincing nobody, but still they do it, because if you are Japanese or American and in the Scottish Highlands it’s expected.

I’ve worked in the public sector for years, all over it, and I’ve spent a lot of time making Excel spreadsheets of measures for different but vaguely similar organisations, in ascending order, marking off the invisible pretend lines that demarcated where “good” was separated from “satisfactory”, or where first quartile performance became second quartile performance.

I KNOW it’s rotten, at heart, as it reeks of flawed command and control ideas. My most loathed is the ranking of performance into “above” and “below” average, which ignores the solid statistical fact that half of everything in the universe is above average and the other half is therefore…below average! God, 50% of creation isn’t up to scratch! This is a scandal! Something must be done! I am running out of exclamation marks, it makes me that mad!

I have seen an awful lot of comparison making but I wouldn’t call it benchmarking. Neither would Wikipedia. Here is what they say benchmarking is.

Benchmarking is the process of comparing one’s business processes and performance metrics to industry bests or best practices from other industries. Dimensions typically measured are quality, time and cost. In the process of benchmarking, management identifies the best firms in their industry, or in another industry where similar processes exist, and compares the results and processes of those studied (the “targets”) to one’s own results and processes. In this way, they learn how well the targets perform and, more importantly, the business processes that explain why these firms are successful.

Alarm bells may ring at certain words there; we are well into command and control land here. Targets, comparing, best practice. The lot. But put to one side the basic flaw in it, the thinking that lies behind benchmarking.

Instead, just imagine that it all works. That some fool (like me) downloads some measures, ranks them, splits them into 4 equal parts and emails them to a manager or senior leader. That person looks at where they are on the list. What do they do if they are bottom quartile? What if they are top quartile? What if they are in the middle? Surely they must be doing something different depending on where they are, otherwise why do these tables exist?
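The exercise described above can be sketched in a few lines. Everything here is hypothetical: the service names and scores are made up purely to show the rank-and-band mechanics, not taken from any real data.

```python
# A sketch of the ranking exercise: rank some services by a
# measure, split the ranked list into four equal parts and
# label each band. All names and numbers are invented.
services = {
    "Service A": 62, "Service B": 18, "Service C": 40, "Service D": 25,
    "Service E": 55, "Service F": 12, "Service G": 47, "Service H": 31,
}

# Rank highest-first, as the spreadsheet would.
ranked = sorted(services.items(), key=lambda kv: kv[1], reverse=True)
band_size = len(ranked) // 4

# Band 1 = "top quartile", band 4 = "bottom quartile".
bands = {}
for i, (name, score) in enumerate(ranked):
    bands[name] = min(i // band_size, 3) + 1
    print(f"{name}: {score} (band {bands[name]})")
```

Note that nothing in the output tells anyone what to *do*; the list just says who is above whom, which is the whole point of the post.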

If they are top, surely this means that less is done than if they are bottom? If they are top, then everything is fine and tickety-boo, so no need for owt. Stop where you are, feet up, kettle on. You’ve earned it. It must be.

It must be, because bottom quartile means take your hands out of your pockets, wipe that smile off your face and get down to work, Sonny Jim, you’ve got some catching up to do.

Then if the service is bottom quartile, presumably something is done. Don’t know what, but something. Then the measure improves. Hurray! We are now better. Let’s say this continues until the measure is top quartile, in the top 25% of services in the country for that particular measure. What do they do now?

Well, if the action required from being bottom is work hard to get better, then surely, as said, the action required when top is to merely maintain. Not improve. If it were improve, then you’d be doing the same as if you were bottom quartile.

I am tying myself up in knots here with the sheer illogicality that where you are in some ranked list of arbitrary measures can in any way form the basis of decision making. True continuous improvement means that you continually improve. That’s it. No matter where you are: top, middle, bottom. Job 1 is to do the job, Job 2 is to improve the job. Nothing else matters.

NB Anyone at all statistically literate: yes, I KNOW there are only 3 quartiles, not 4. Never 4, only 3. Any other public sector spod reading this, listen up. The value that separates the bottom 25% of values from the range running from >25% to <50% is the first quartile, or lower quartile. The value that splits the bottom 50% of values from the top 50% of values is the second quartile, the median. The third quartile, or upper quartile, marks where the top 25% of values begins. There are three quartiles, which separate 4 RANGES of data.
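For the spods, Python’s standard library makes the three-cut-points point directly: a minimal sketch, using invented scores, where `statistics.quantiles` with `n=4` returns exactly three values, never four.

```python
from statistics import quantiles

# Hypothetical "benchmarking" measure for 8 services,
# purely for illustration.
scores = [12, 18, 25, 31, 40, 47, 55, 62]

# n=4 asks for quartiles; the function returns THREE cut
# points (lower quartile, median, upper quartile), which
# between them separate the data into FOUR ranges.
q1, q2, q3 = quantiles(scores, n=4, method="inclusive")

print(q1, q2, q3)
```

With `method="inclusive"` the cut points are interpolated within the observed data, so for the list above they come out as 23.25, 35.5 and 49.0: three numbers, four ranges.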

This entry was posted in all wrong, command and control, systems thinking, systemz comix, Uncategorized. Bookmark the permalink.

4 Responses to see-saw

  1. Mark Baker says:

    I thought the point of benchmarking is to find out who is best and then go and see what it is they do that makes them the best. Industrial tourism.
    Clearly no one does this, because if they did, they’d find out that Systems Thinking is best and they’d all be doing it.


    • ThinkPurpose says:

      I reckon POSIWID could be useful here. A lot of benchmarking stops at the “find where you are in a ranked list” phase. It goes little beyond there, other than putting the results, if they are good, onto posters in waiting rooms or the like. You can guess the impetus for this futile and ineffective activity: the Audit Commission considered it best practice, and it would help raise scores if you could produce evidence of it. A ranked list can be printed off and shown to someone.

      You can’t inspect quality into a service but you can certainly inspect activity into one.


  2. Barry says:

    It’s also important to remember that “the best” are only “the best” at a given moment in time – when they’re being compared in a benchmarking process. Beyond that time they may easily have slipped to second best or worse. (Think of all those world class/best of breed firms that no longer exist or have fallen out of favour.) The reason is simple: circumstances not only change but change at different rates and in hugely different ways for different organisations.

    Benchmarking ties you to someone else’s history without any guarantee that what is perceived to have worked for them will work for you. Benchmarking cannot accommodate the variety of change. In short, people are trying to hit a moving target.

    It’s surely better to concentrate on making your own history by really understanding your own business and your customers and let other idiots try to equal your success by benchmarking.


  3. drumming manager says:

    as far as i can see, the purpose of public sector benchmarking is based on the view that, as customers have no choice about whether to use our services, the lack of competition will mean that we will all lazily provide mediocre services. benchmarking (in the “let’s put all the data in the spreadsheet and rank in order” version) is supposed to provide a substitute competition where we all race for the gold medal, and those who lag behind are threatened by Inspectors or their own bosses (managerial or political) with the chop if they don’t buck up their ideas. Are there any empirical studies on whether this hypothesis works in practice? Can councils or hospitals be like the Olympic racing track or swimming pool? I’m sceptical, but you can see the attraction of this in western culture…. seriously, is there any (trustworthy and independent) evidence one way or the other?

