Patient satisfaction surveys are flawed in many ways. Here are just a few.
Sampling is a huge problem. A description of why sampling is an issue can be found here. It’s a bit complex. To summarize, the validity of a survey is strongly related to the size of the sample and the response rate. If you have a patient base of 1000 and elect to survey 500 of them but receive responses from only 100, the sample is really only 10% [100/1000] of the population in question. At a 95% confidence level, that works out to a margin of error of roughly ±10% — and that is before accounting for any nonresponse bias. See chart.
Most patient satisfaction surveys sample far fewer than 50% of the population in question and have response rates well below 20%.
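A quick sketch of the arithmetic, using the standard margin-of-error formula for a proportion with a finite population correction. The 1000-patient example above is hypothetical, and p = 0.5 is the conventional worst-case assumption:

```python
import math

def margin_of_error(n_respondents, population, p=0.5, z=1.96):
    """Margin of error at ~95% confidence (z = 1.96) for a proportion,
    with the finite population correction applied."""
    se = math.sqrt(p * (1 - p) / n_respondents)
    fpc = math.sqrt((population - n_respondents) / (population - 1))
    return z * se * fpc

# 100 responses from a patient base of 1000:
print(round(margin_of_error(100, 1000) * 100, 1))  # 9.3 (percentage points)

# A much smaller return, say 25 responses:
print(round(margin_of_error(25, 1000) * 100, 1))   # 19.4 (percentage points)
```

Note how quickly the margin of error grows as the number of responses shrinks: a quarter of the responses roughly doubles it.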
Thankfully, I no longer directly participate in the quarterly hysteria that occurs when Press Ganey scores are received by hospitals. Press Ganey is a company that is hired by hospitals to perform patient satisfaction surveying. They send out small numbers of questionnaires and have a very low response rate. In addition, they use only a five-point scale as a basis for their ratings and report the results as percentiles. [Note: I am not a statistician, but I don’t think it is kosher to report a five-point scale in percentiles ranging from 1 to 100.] Usually there are modest up-and-down variations in these scores which are almost never statistically significant, especially when you consider the margin of error of well over ±20%. Upon receipt of lower scores, task forces are established, multiple meetings are held, policies are changed, and staffs are browbeaten. Many times the scores improve on the next cycle and the task force is congratulated. Lost in the euphoria is the fact that there is a three-month lag between the institution of any policy changes and the receipt of the next group of survey responses. In other words, the policy changes probably were not the cause of the uptick in the scores.
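To illustrate the percentile objection with a toy example (the scores below are invented): on a coarse five-point scale, hospital means cluster tightly and tie frequently, so a tiny shift in one hospital's mean can vault it past a whole cluster and swing its percentile rank dramatically.

```python
# Hypothetical mean scores for 20 hospitals on a five-point scale; with
# coarse scales and small samples, means bunch together and many tie.
scores = [4.5, 4.6, 4.6, 4.6, 4.7, 4.7, 4.7, 4.7, 4.7,
          4.8, 4.8, 4.8, 4.8, 4.8, 4.8, 4.9, 4.9, 4.9, 5.0, 5.0]

def percentile_rank(x, scores):
    """Percent of hospitals scoring strictly below x."""
    below = sum(s < x for s in scores)
    return 100 * below / len(scores)

# A 0.1 change in the mean (a handful of patients answering one notch
# higher) moves a hospital past a cluster of ties:
print(percentile_rank(4.7, scores))  # 20.0
print(percentile_rank(4.8, scores))  # 45.0
```

A 25-percentile jump from a 0.1 change in the mean is exactly the kind of movement that sets off a quarterly task force, even though it may represent only a few patients' answers.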
Note that the Medicare Hospital Consumer Assessment of Healthcare Providers and Systems [HCAHPS] survey suffers from many of the same ills as Press Ganey: small sample sizes, poor response rates, and far too many questions.
Other issues with patient satisfaction scores include the following:
There is no correlation between patient satisfaction scores and complaints.
Surveys are more reliable if they are completed as close to the time of the encounter as possible. Most are not done that way.
They do not necessarily correlate with quality of care as is shown in papers involving medical patients and patients with heart attacks. Other thoughtful essays on this topic can be found here and here.
No doubt the facts will not deter the bean counters from mandating that all physicians survey patients for satisfaction no matter how meaningless the data may be. American Medical News recently reported that the AMA and Press Ganey will be happy to help you with this for a mere $65.00 per month for AMA members and $85.00 per month for non-members.
6 comments:
Hard to argue with most of this... and I do this kind of stuff for a living (market research, patient satisfaction surveys, etc.).
I think you're right to point out that the problem sometimes lies in the research, and sometimes in the reaction to it. We tend to design studies to meet very specific objectives, but our results can easily be taken out of context.
For instance, I might take issue with your statement that "surveys are more reliable if they are completed as close to the time of the encounter as possible." That's true if you're trying to get very specific details about an experience, but untrue if you're trying to measure the residual impact of that experience on your overall attitude toward the provider. The problem comes when you take a study designed for the latter, and use it for the former.
Anyway, nice job holding up the mirror so we can see our warts!
Tom, thanks for the interesting comments. Someone on Twitter took issue with me and said that the survey would be reliable if the sample was representative of the population, even if the response was small. He said the sample could be made representative as follows: "normally by using 'probability sampling' - eg random sample. Can validate with demographics of respondents." When every patient is sent a survey and 5% respond, I doubt that could be considered random. I also don't think that Press Ganey is analyzing the demographics of the respondents.
Thanks, but I disagree.
First, we need to distinguish between the survey and its administration. Poor response rates and poor sampling should not be taken to mean that the survey itself is without merit.
We also need to work out what it is we are trying to measure. You state that satisfaction scores do not correlate with complaints. Does that mean they are 'bogus'? No; complaints have more to do with patient communication than with the quality of (say) the surgery.
Also, satisfaction with process is one thing, satisfaction with outcome is another.
In orthopaedics, there is a move towards including patient satisfaction as an outcome of major joint replacement surgery, instead of such outcomes as revision (needing another operation) or range of motion. Having a patient who is unhappy with the result of their surgery, but has not had another operation (for one of many reasons) is something worth knowing.
Some would argue that patient satisfaction with the outcome of a procedure is more important than any other outcome you can measure. After all, for elective surgery, isn't that what we are trying to achieve - a happy patient?
Dr. Skeptic, you have a right to disagree, but not only is the process of measuring satisfaction flawed, there is evidence that patient satisfaction does not correlate with outcomes. I disagree that patient satisfaction is the most important thing.
Unequivocally, "patient satisfaction" scores are bogus. As are "most wired hospital" rankings, "leadership" awards, and "most influential" determinations.
I also disagree that "patient satisfaction" is the most important thing. If that were the case, faith healers would have the highest scores of all, and we should just send everybody to them. Which is, of course, ridiculous. The vast majority of the supposedly scientifically-trained and objective physician cadre in the US (if not the world) has swallowed whole the nonsense of the management gurus and faddists, and things in general will only get worse until that changes.
Anonymous, thank you for commenting. I agree with everything you said.