3 more things performance management GETS ALL WRONG


As Jay Z asserts, there is plenty wrong with performance management.

And here’s 3 more!



Name: The reverse pantomime horse gambit

What is it? An organisation decides to do something, writes down a group of words, calls it an objective/goal/priority AND THEREFORE IT BECOMES AN IMPORTANT GROUP OF WORDS, and only then decides how to measure this objective/goal/priority.

So, why’s it bad then? If your organisation decided to do something, how did it decide to do it? What caused it? Some strategic away day retreat in the mountains? A political election campaign? A highly paid management consultancy’s recommendations? Whatever it was, there’ll be a whole load of symbolism attached to the result; it will be important. An important collection of words that has captured the collective mindset and political balance of the people in charge, and been through multiple polishes, sign-offs and consultations. In short, not obtained from studying reality.

Ok, so first you decide what you are going to do, and second you try and find something out about it?

WAKE UP, McFLY, this is the wrong way round!

If you don’t know how to measure it, how do you know why you are doing it in the first place? WHY are you doing it if you don’t know enough about it?
As with most things, this is not a problem of performance management; it is a management thinking problem.
Coming up with objectives, plans etc. is what managers do; later, other people are tasked with measuring them. These are considered two separate tasks, because Command and Control says so.
The root cause of this problem is starting a change with “plan”. Don’t start with setting objectives, instead….

What should you do instead? Start all change with GET KNOWLEDGE. Don’t start with “what should we do then?”, start with “so, what’s happening then? Let’s go and find out”. Doing this means you start with authentic knowledge, gained at first hand, which gives you a much better chance of doing the right thing than starting with some awful SWOT and PESTLE session with a smarmy consultant armed with a flipchart and a pen.
AND you’ve already got the beginnings of how you will measure any changes that take place cos you started with measures and measuring, not with talking and deciding.



Name: The “hey, we’re goddam measurement scientists” dichotomy.

What is it? When people get finicky over the accuracy of a measure, often debating fiercely over small percentage points in a performance measure. For example, two people arguing whether a measure is 45% or 48%. The stated reason why the measure should be different could be “you’re using old data, the records have been updated” or “the new definition wouldn’t include those cases”; anything really, the reason isn’t important. Debating fiercely over a difference of a few small numbers because people want to be accurate: that is the problem.

So, why’s it bad then? There’s nothing wrong with wanting accuracy, wanting to know EXACTLY WHAT IS GOING ON is fantastic.
But not if your accuracy won’t change what people will do with your data.
If a manager will do X if a performance measure is 45%, but will instead do Y if it is 48%, then it does matter exactly what that measure is, because the difference will directly affect what people do.
When the difference won’t lead to a different course of action, it doesn’t matter if the number is a bit higher or a bit lower. Nothing will happen outside the room based on this higher or lower number.
It’s like baking a cake and putting salt in it instead of sugar: the mistake doesn’t matter if you were never going to eat the cake anyway.
Measures are for using, not writing down.
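
To make that concrete, here’s a minimal sketch in Python. The 50% escalation threshold and the two actions are invented for illustration; the point is only that 45% and 48% land on the same side of the decision, so arguing over which one is “right” changes nothing.

```python
# Hypothetical decision rule: the manager escalates only when the measure
# is at or above 50%. The threshold and actions are made up for illustration.
ESCALATION_THRESHOLD = 0.50

def next_action(measure: float) -> str:
    """Return what the manager would actually do with this number."""
    return "escalate" if measure >= ESCALATION_THRESHOLD else "carry on as planned"

print(next_action(0.45))  # carry on as planned
print(next_action(0.48))  # carry on as planned -> arguing 45% vs 48% changes nothing
print(next_action(0.52))  # escalate            -> here the exact number does matter
```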

What should you do instead? This problem is only really a problem in the silly world of traditional performance management, where you report numbers separately, each one in its own column in a spreadsheet, for comparing with something else, like a target or the previous month. This is what causes people to tussle over tiny amounts in measures, because these numbers are important for comparing with. So a tiny 3% difference might mean meeting a target or beating last month’s figures, or 3% could be the vital difference between quartile 2 and a top quartile ranking.
Not only are measures for using, not writing down, they’re not for comparing either. Be accurate, cos numbers matter, but they matter because they tell you about reality, not because they make someone more important than you happy (or less sad).



Name: The “Contranym That Beats Them All”.

What is it? It is when a performance person confuses statistical significance with actual real-world significance. She might say, for example, that there has been a “significant” change in an indicator, generally one from a survey of customers/residents, even when the change is a piddling tiny 3 percentage points, i.e. not a significant change in any everyday sense.

So, why’s it bad then? There is ALL THE DIFFERENCE IN THE WORLD between being confident that there has been a change in a measure, and being confident that that change shows that something “significant” has taken place. The word has been inherited from the world of statistics, where it has a bit of a reputation even among statisticians…

Statisticians get really picky about the definition of “statistical significance”, and use confusing jargon to build a complicated definition.
While it’s important to be clear on what statistical significance means technically, it’s just as important to be clear on what it means practically.[link]

“it means that 95% of the times that you do a thing, it probably won’t be down to chance….erm… something something null hypothesis?”

There’s no way on earth I’m going to attempt a definition of what it actually means, mainly because my own understanding of it is as hazy as a hot day in Beijing.

The important thing isn’t how it should be used, it is how it’s accidentally misused. Have a look here for proper experts talking about it in a down-to-earth and entertaining way.
Warning: You’ll LEARN STUFF.

The word “significant” is…well, significant. It implies importance and “YOU SHOULD LOOK HERE!”. The word is often used in performance reports in sentences like…

“Housing issues have increased significantly in importance for the residents of Greater Suburbia, 16% state there is a need for improvement compared with 12% the previous year”

That is a difference of 4 percentage points between the two years, which is another way of saying that 88% saw no need for improvement one year and 84% the next. This is significant? Significant to me is forgetting to put my trousers on in the morning, or being in a car crash on the way to work. It isn’t a tiny fluctuation in a set of large numbers, not unless that fluctuation is in something HUGELY important where a small change has a large effect, like my blood pressure or Bank of England interest rates.
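And here’s the rub: a change like that can easily pass a textbook significance test while still being practically tiny. Below is a minimal sketch of one standard way of checking it, a pooled two-proportion z-test, using a HYPOTHETICAL sample size of 1,100 respondents per year (the real survey size isn’t given here, and this isn’t necessarily how the report’s authors did their sums).

```python
# Two-proportion z-test on the Greater Suburbia example.
# Sample sizes are assumed (1,100 respondents per year) purely for illustration.
import math

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z-test; returns (z statistic, two-sided p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # two-sided p-value from the standard normal: P(|Z| > z) = erfc(|z| / sqrt(2))
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# 12% said "needs improvement" last year, 16% this year
z, p = two_proportion_z(0.12, 1100, 0.16, 1100)
print(f"z = {z:.2f}, p = {p:.4f}")  # roughly z = 2.70, p = 0.007
```

With those assumed numbers the result is “statistically significant” at the usual 95% level (p < 0.05), yet the practical change is still only 4 percentage points of residents. Statistically significant and significant are not the same thing.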

What should you do instead? “Never use the word significant”. Don’t take my word for it, it is what the Royal Society say, and they’re royal and the oldest scientific academy in existence. Calling something significant rests on a pile of other statistical assumptions, such as a normal distribution, or that a 95% confidence level actually means something. These assumptions aren’t stated; we just say that something is “significant” when in reality it probably isn’t. So just don’t ever use the word unless something truly is.
