At work, people are using more and more confidence intervals or funnel plots in their data as part of their decision making processes. Which is great!

However I think that putting them on a chart can lull people into a false sense of security. Those little lines can legitimise the labelling of points of data as outliers. People take action and make decisions based on it, so they can be dangerous. It's important to get it right, and often assumptions are made and analysis copied without much thought.

Here are 3 of the most common mistakes I've seen:

**Mistake #1: Forgetting that 95% confidence means that 5% will be outside the limits, due to chance**

Exactly that. Out of 100 confidence intervals (at 95%), you should expect around 5 to miss the true value purely by chance.

This image shows 100 simulated measurements, displayed as dots, taken from a process with a known mean (blue line in the centre).

The 95% confidence intervals are shown for each measurement as black lines.

As you can see, 5 out of the 100 measurements do not have a confidence interval that straddles the actual mean (red squares).
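A simulation like the one in the image is easy to reproduce. This is a minimal sketch (the true mean, SD and sample size are made-up values, not taken from the chart) that counts how many of 100 intervals miss the known mean:

```python
import numpy as np

rng = np.random.default_rng(42)
true_mean, sd = 50.0, 10.0   # hypothetical process parameters
n_per_sample, n_samples = 30, 100

misses = 0
for _ in range(n_samples):
    sample = rng.normal(true_mean, sd, n_per_sample)
    se = sample.std(ddof=1) / np.sqrt(n_per_sample)
    lo, hi = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
    if not (lo <= true_mean <= hi):
        misses += 1  # this interval does not straddle the true mean

print(f"{misses} of {n_samples} intervals missed the true mean")
```

Run it a few times with different seeds and the count hovers around 5, as you'd expect from 95% coverage.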

The same principle also applies when you are performance managing, or trying to highlight exceptions. If you apply a straight 95% interval to the performance of 100 groups, you could, purely by chance, be incorrectly targeting around 5 people as poor performers.

The solution is to apply what is called a Bonferroni correction. Don't worry - the name is much grander than what it actually involves. All it does is tighten each confidence interval by dividing the significance level by n, where n is the number of items on your list.

i.e. if we are aiming for an overall confidence of *1 - α*, we should adjust each confidence interval to *1 - (α ÷ n)*.

So if you wish to compare 5 items at 95% confidence, then you really should use *1 - (0.05 ÷ 5) = 1 - 0.01 = 0.99 = 99%* confidence intervals.
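As a one-liner, the correction looks like this (the function name is just for illustration):

```python
def bonferroni_confidence(overall_alpha, n_comparisons):
    """Per-comparison confidence level needed to keep an
    overall significance level of overall_alpha across
    n_comparisons simultaneous comparisons."""
    return 1 - overall_alpha / n_comparisons

# 5 items at an overall 95% confidence -> 99% per item
print(bonferroni_confidence(0.05, 5))  # 0.99
```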

__Mistake #2: Using the data itself to calculate the limits (particularly on sparse data)__

I often see people take the SD of the data itself to calculate the intervals, which is probably the simplest way to do it. Although this has its benefits, there are many ways to calculate the limits, depending on the use and the type of data.

Often, if there isn't much data, the intervals might be too wide to be of any use.

It may also hide true variation in the data. Compare two fictional charts of the same process: one with limits calculated from the data itself, and one with limits calculated from the Poisson distribution:

In the first chart, the large standard deviation hides the wild behaviour of the variable we are measuring. When we add limits according to what the process should look like if the variation were purely statistical, that adds another element to the conclusions we can draw.

However, not all data can be modelled in this way. In the real world, with its many interactions, you don't always get scenarios that can be neatly modelled using the usual distributions. In that case, using an inapt model would also lead to incorrect conclusions.
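The contrast between the two charts can be sketched numerically. This is a made-up example, not the data behind the charts: counts that should be Poisson, with one genuinely unusual point injected. The SD-based limits get inflated by that very point, while the Poisson limits (SD = √mean) flag it:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical monthly counts from a process that should be Poisson
counts = rng.poisson(lam=25, size=24).astype(float)
counts[5] += 20  # inject one genuinely unusual month

mean = counts.mean()

# Limits from the data's own SD - the outlier inflates them
sd = counts.std(ddof=1)
sd_limits = (mean - 1.96 * sd, mean + 1.96 * sd)

# Limits from the Poisson model, where the SD is sqrt(mean)
pois_sd = np.sqrt(mean)
poisson_limits = (mean - 1.96 * pois_sd, mean + 1.96 * pois_sd)

for name, (lo, hi) in [("SD-based", sd_limits), ("Poisson", poisson_limits)]:
    flagged = np.sum((counts < lo) | (counts > hi))
    print(f"{name:9} limits ({lo:5.1f}, {hi:5.1f}) flag {flagged} point(s)")
```

The SD-based limits are wider precisely because the outlier is included in the SD - the data is being judged against its own misbehaviour.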

**Mistake #3: Using weighted rates to calculate the confidence intervals**

When you are using weighted rates, and calculating intervals using a distribution to estimate the SD, ideally you should calculate the intervals using the *original* population, then convert them back to the weighted rate.

__Simple worked example (but an extreme case using small numbers):__

Here I am using a simple binomial interval to work out the limits from the % of the population:

*Practice A has 10 patients, but a weighted population of 20*

*5 are diagnosed with asthma. This is 50% of the population, but only 25% of the weighted population*

*Estimated intervals using weighted population = 19%*

*Confidence, in numbers using actual population to calculate limits = 3.1*

*3.1 / weighted population = 15%*
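The worked example above can be reproduced in a few lines, using the usual normal-approximation binomial interval (half-width = 1.96 × √(p(1−p)/n)):

```python
import math

# Practice A, from the worked example above
actual_pop = 10    # real patients
weighted_pop = 20  # weighted population
diagnosed = 5      # asthma diagnoses

p_actual = diagnosed / actual_pop      # 0.50
p_weighted = diagnosed / weighted_pop  # 0.25

# Naive: binomial interval straight from the weighted rate
naive_hw = 1.96 * math.sqrt(p_weighted * (1 - p_weighted) / weighted_pop)

# Better: interval on the actual population, in numbers of patients...
hw_in_numbers = 1.96 * math.sqrt(p_actual * (1 - p_actual) / actual_pop) * actual_pop
# ...then converted back to the weighted rate
converted_hw = hw_in_numbers / weighted_pop

print(f"Naive half-width (weighted pop): {naive_hw:.0%}")      # ~19%
print(f"In numbers (actual pop):         {hw_in_numbers:.1f}")  # ~3.1
print(f"Converted half-width:            {converted_hw:.0%}")   # ~15%
```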

Although this is the ideal, if the difference between the actual and weighted populations is small, the difference in the limits isn't that great. It's always best to test first, to see what sort of effect it may have.

Erica :)
