Statistical Process Control (SPC) is a way of using statistical methods and visual display of data to allow us to understand the variation in a process. By understanding the types of variation in the process we can make improvements to the process that we predict will lead to better outcomes. SPC can also then be used to see whether our predictions were correct.
The methods were developed by Walter Shewhart and W Edwards Deming (and others) throughout the first half of the twentieth century.
Measurements of all outcomes and processes will vary over time, but variation is often hidden by current practices in healthcare management, where data is aggregated and presented over long time periods (e.g. by quarter). Plotting data more frequently (weekly or monthly) reveals the variation and can be very informative.
Some variation is due to designing care for specific groups of patients with different needs. We describe this as intended variation. In contrast, unintended variation is due to differences in healthcare processes not connected with patients' different needs. It is unintended variation that results in waste and harm, and it commonly forms the focus for improvement.
Example of a measure showing unintended variation and then improvement
Unintended variation can occur for two reasons: it can be due either to random fluctuation (chance), or to something that has actually changed. Disentangling the causes of variation is very important in telling whether a process or its outcome is improving. Too often decisions in healthcare management are made without knowing whether changes in data are due to actions taken, or merely to random chance.
The two causes of variation we need to consider are:
Common causes - those random causes that are inherent in the system (processes) over time, affect everyone working in the system, and affect all outcomes of the system.
Special causes - those non-random causes that are not part of the system (process or product) all the time, or do not affect everyone, but arise because of specific circumstances.
Statistical process control charts (run and Shewhart control charts) are a good way of separating out these two contributions. If the two sorts of variation are confused there may be a temptation to inappropriately react to random (common cause) variation, as if it were due to a special cause. This "tampering" may exacerbate the variation.
Run charts are graphs where a measure is plotted over time, often with a median also shown. Because they are simple to make and relatively straightforward to interpret, run charts are one of the most useful tools in quality improvement.
Run charts are a powerful tool for detecting non-random variation and so sufficient for most improvement projects. However, they are not as sensitive at detecting special causes as a second type of chart, the Shewhart control chart, named after Walter Shewhart, who did early work in industry to develop the methods.
Shewhart charts are also time-series charts but they show a mean rather than a median and also contain control limits. Control limits define the boundaries of expected random variation around the mean.
If only common cause variation is seen we say that a process is stable. This makes it predictable (within a range of random variation): if it has been stable in the past we can be confident it will remain stable in the future.
Even a stable and predictable process may not deliver acceptable performance. A process is capable if it is reproducibly delivering the required outcomes. Control charts can be used to determine if a process is capable.
If a process is stable (only showing common cause variation) but not capable, we need to make a change if we wish to see an improvement. If we keep on doing what we've always done we'll keep on getting what we've always got.
If we also detect special causes we need to investigate what might be happening and learn from them. Not all special cause variation is bad. If we see a special cause which shows good performance that might be something we'd want to investigate, with a view to testing as a change to the process.
If we seek to improve a process by making a change we are intending to introduce a special cause. However, if we see special causes that are causing poor performance we will want to remove them.
Because they are simple to make and relatively straightforward to interpret, run charts are one of the most useful tools in quality improvement.
They allow us to:
Display data to make process performance visible
Determine if a change resulted in improvement
Assess whether improved performance has been sustained
Run charts are line graphs where a measure is plotted over time, often with a median (the middle value of those plotted so that half are above and half are below) also shown. Changes made to a process are also often marked on the graph so that they can be connected with the impact on the process.
Example of an annotated run chart
If we have at least 10-12 data points on our graph, run charts can also be used to distinguish between random and non-random variation using four simple rules. Different versions of these rules are used in different places, so you may encounter different sets in other books or papers you read. However, within NHS Scotland we have standardised on those used in The Improvement Guide and The Health Care Data Guide, and by the Institute for Healthcare Improvement and the Scottish Patient Safety Programme.
Non-random variation can be recognised by looking for:
A shift: six or more consecutive data points either all above or below the median. Points on the median do not count towards or break a shift.
A trend: five or more consecutive data points that are either all increasing or all decreasing in value. If two consecutive points have the same value, count them once.
Too many or too few runs: a run is a consecutive series of data points above or below the median. As for shifts, do not count points on the median: a shift is a sort of run. If there are too many or too few runs (i.e. the median is crossed too many or too few times) that's a sign of non-random variation. You need to look up a statistical table (see Perla et al, 2011) to see what an appropriate number of runs to expect would be. An easy way to count the number of runs is to count the number of times the line connecting all the data points crosses the median and add one.
An astronomical data point: a data point that is clearly different from all the others. This relies on judgement. Every data set has a highest and lowest value, but these are not necessarily astronomical. Different people looking at the same graph would be expected to recognise the same data point as astronomical (or not).
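The first three rules lend themselves to a direct implementation. Below is a minimal sketch in Python; the function name and return format are illustrative, not from any standard library, and rule 4 (the astronomical point) is left to human judgement:

```python
from statistics import median

def run_chart_signals(values, shift_len=6, trend_len=5):
    """Apply run chart rules 1-3 to a series of data points.
    Thresholds follow the conventions in this text (Perla et al., 2011):
    shifts of six or more, trends of five or more."""
    centre = median(values)

    # Rule 1 - shift: shift_len or more consecutive points all on one side
    # of the median; points ON the median neither count towards nor break it.
    off_median = [v for v in values if v != centre]
    shift, run = False, 1
    for prev, curr in zip(off_median, off_median[1:]):
        run = run + 1 if (prev > centre) == (curr > centre) else 1
        shift = shift or run >= shift_len

    # Rule 2 - trend: trend_len or more consecutive points all increasing
    # or all decreasing; repeated consecutive values are counted once.
    deduped = [v for i, v in enumerate(values) if i == 0 or v != values[i - 1]]
    trend, streak, prev_dir = False, 1, None
    for prev, curr in zip(deduped, deduped[1:]):
        direction = curr > prev
        streak = streak + 1 if direction == prev_dir else 2
        trend = trend or streak >= trend_len
        prev_dir = direction

    # Rule 3 - number of runs: count the median crossings and add one, then
    # compare against a statistical table (Perla et al., 2011) for your n.
    crossings = sum((a > centre) != (b > centre)
                    for a, b in zip(off_median, off_median[1:]))

    return {"shift": shift, "trend": trend, "runs": crossings + 1}
```

For example, a series whose last six points all sit above the median will report a shift, while a series bouncing evenly around the median reports neither a shift nor a trend.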
Since using these rules requires at least 10-12 data points on your run chart, it is important to collect data as frequently as possible. If you collect data only once a month that means waiting 10 months; if weekly, only 10 weeks (2½ months). However, this needs to be balanced with keeping denominators (the number of values contributing to each data point) for percentages (or rates) above ten or so, to minimise random variation due to small sample size.
Run charts are a powerful tool for detecting non-random variation but they are not so sensitive in detecting special causes as a second type of Statistical Process Control (SPC) chart - Shewhart control charts. These are named after Walter Shewhart who did early work in industry to develop the methods.
Why control charts are more sensitive than run charts
Run charts use the middle value (median), so the rules rely on assessing whether points are above or below that middle value. No account is taken of the relative distances from the median; only whether a value is above or below.
Shewhart control charts use the arithmetic mean as the centre line. The mean is calculated by taking all the values and dividing by the number of values (what we commonly think of as the average). Because the relative distances from the mean are taken into consideration, Shewhart charts are a more sensitive way of detecting whether observed variation is due to common or special causes.
Shewhart control charts also contain control limits. The control limits define the boundaries of expected common cause (random) variation around the mean.
If a process is stable (i.e. data points are randomly arranged within the control limits), Shewhart charts allow us to predict future performance. This allows us to calculate if the current process is capable of producing a desired result (i.e. achieve a numeric aim or target) or whether the process still needs to be improved or replaced.
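As a sketch of how such limits are derived for one common chart type, the individuals (I/X-MR) chart places its limits at the mean plus or minus 2.66 times the mean moving range (2.66 is the standard constant 3/d2, with d2 = 1.128 for moving ranges of two points). The function names below are illustrative:

```python
def i_chart_limits(values):
    """Centre line and 3-sigma control limits for an individuals (I/X-MR)
    chart, estimated from the mean moving range between successive points."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def is_stable(values):
    """A first check for stability: every point inside the control limits.
    (A full assessment would also apply the shift/trend rules.)"""
    mean, lcl, ucl = i_chart_limits(values)
    return all(lcl <= v <= ucl for v in values)
```

A series fluctuating randomly around its mean passes this check; adding a single extreme value pushes a point outside the limits and flags the process as not stable.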
Depending on the type of data being plotted - attribute (classification or count) or variable (continuous) - and the purpose of analysis, different types of Shewhart control chart should be used. The most common are:
Data type | Common chart | Used for
--- | --- | ---
Classification data | P chart | percentages
Count data | C chart | counts
Count data | U chart | rates
Count data | T chart | days between rare events
Continuous data | I chart (sometimes called X-MR; MR = moving range) | individual measurable data points [also activity data]
Continuous data | X-bar chart | subgroups of data at the same time point (with a range or standard deviation chart alongside to show variation within each subgroup)
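The table above can be expressed as a simple lookup. The helper below and its keys are purely illustrative:

```python
# Illustrative mapping of (data type, purpose) to the usual chart,
# following the table above.
CHART_FOR = {
    ("classification", "percentages"): "P chart",
    ("count", "counts"): "C chart",
    ("count", "rates"): "U chart",
    ("count", "days between rare events"): "T chart",
    ("continuous", "individual data points"): "I chart (X-MR)",
    ("continuous", "subgroups at the same time point"): "X-bar chart",
}

def chart_for(data_type, purpose):
    """Return the common chart for the data type and purpose, or a prompt
    to seek specialist advice when no standard chart applies."""
    return CHART_FOR.get((data_type.lower(), purpose.lower()), "ask an analyst")
```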
In order to plot accurate control limits you need 20-30 data points, but for X-bar, P, C and U charts trial limits can be used with as few as 12 points. You should always plot a run chart first.
Special cause (non-random) variation is detected using variants of two of the four run chart rules (the shift and trend rules), together with three extra rules that rely on the position of data points relative to the mean (centre line) and the control limits.
Because of the increased sensitivity of control charts, shifts must be of eight or more points and trends of six or more.
The formulae used to calculate the control limits differ for each type of control chart, so producing control charts requires specialist software and/or a skilled data analyst. The control limits are sometimes labelled 3 sigma because they are placed three standard deviations (sigma) either side of the mean.
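To illustrate how the formulae differ by chart type, here are the standard 3-sigma limit calculations for a P chart and a C chart (function names are illustrative):

```python
import math

def p_chart_limits(p_bar, n):
    """3-sigma limits for a P chart (proportions): p_bar ± 3*sqrt(p_bar*(1-p_bar)/n).
    Limits are clamped to the valid range 0..1."""
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)

def c_chart_limits(c_bar):
    """3-sigma limits for a C chart (counts): c_bar ± 3*sqrt(c_bar).
    The lower limit cannot fall below zero."""
    sigma = math.sqrt(c_bar)
    return max(0.0, c_bar - 3 * sigma), c_bar + 3 * sigma
```

Note how the estimate of sigma is built into each formula from the data type itself (binomial for proportions, Poisson for counts), which is why one chart's limits cannot simply be reused for another.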
A useful specialist reference for analysts is the Health Care Data Guide. This discusses different types of control charts in great detail, including those suitable for measuring rare adverse events (T-charts - time between events; and G-charts - cases or procedures between adverse events). As process reliability increases these latter two charts become more useful, particularly in patient safety improvement projects.
Sometimes we wish to compare among sites (hospitals, health boards or countries), rather than across time. In such cases control charts (X-bar & S, P or U) with the locations (or subgroup/site sizes) on the horizontal axis, rather than time points, can be used. These are commonly ordered in ascending size of site (subgroup). The control limits are wider for small sites/subgroups and narrower for larger sites. This gives rise to plots named for their appearance: funnel plots.
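One way such funnel limits could be computed for percentage data, assuming a P chart, is sketched below; the `funnel_limits` helper is an illustrative name, not a library function:

```python
import math

def funnel_limits(events, cases):
    """Sketch of funnel plot limits for percentage data (P chart style).
    Sites are ordered by ascending size; 3-sigma limits around the overall
    proportion are wider for small sites and narrower for large ones."""
    overall = sum(events) / sum(cases)
    rows = []
    for n, x in sorted(zip(cases, events)):  # ascending site size
        half_width = 3 * math.sqrt(overall * (1 - overall) / n)
        rows.append((n, x / n,
                     max(0.0, overall - half_width),
                     min(1.0, overall + half_width)))
    return overall, rows
```

Plotting each site's proportion against these limits produces the characteristic funnel shape, with the limits converging towards the overall proportion as site size grows.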
The document was created: 26. 06. 2017 02:03:57
Source: http://web2.mendelu.cz/af_291_projekty2/vseo/