Sunday, September 7th 2008

"Better" Doctors Don't Mean Better Care


Wherein My MS Paint Skills Are Put To The Test

The more I read, the less I think social scientists and policy wonks should have much to say about the delivery of healthcare in the United States.

True, I love policy debates, I love history, I love my amateurish attempts at economics. For this blog, I frequently cite and read the social sciences as if they were legitimate areas of study and had something meaningful to say.

But for the most part, “social science” is a misnomer. Though dressed up as otherwise, fields like sociology, political science, history, psychology, and even economics are typically practiced not as science but as philosophy.

I’ve been spurred to write this piece by a blog post over at Overcoming Bias that is making the rounds on social linking sites like Reddit and Digg. In the post, Dr. Robin Hanson comments on a National Bureau of Economic Research working paper entitled “Returns to Physician Human Capital: Analyzing Patients Randomized to Physician Teams.”

So I actually paid the five bucks and read the paper. But first I want to go over why the paper is drawing attention at all, and what was said about it on Overcoming Bias:

[W]here data is silent I try to give medicine the benefit of the doubt, such as in assuming average values are higher than marginal values, and that top med school docs give more value than others. So I am shocked to report that in a randomized trial of 72,000 hospital stays by 30,000 patients, patients of top med school docs were no healthier

The question is where that expectation came from. No doubt, there are instances where very prestigious academic practices are sought out for specialty care or amongst the affluent.

But for the vast, vast, vast majority of care delivered in this country, a prestigious name on a medical degree or a residency certificate means little, certainly in terms of earning potential. Aside from the fact that more prestigious schools send more students into competitive, higher-earning specialties (which the meritocracy of the system alone can explain), and aside from geographic differences in physician earnings, where you attend medical school and where you train for residency have essentially no correlation with your future earnings as a physician.

You would think that if we, as a society, believed that “better” doctors delivered better care, then that belief would ultimately show up in the way we reimburse for healthcare. Alas.

My major point is that I’m not sure this paper demonstrates anything that isn’t already obvious to the vast majority of Americans who make use of healthcare. For a former RWJF fellow to act so shocked at this kind of finding is a little surprising, and may be an anecdotal demonstration of just how far non-healthcare professionals are from really understanding the non-quantifiable aspects of American healthcare.

I’m not sure why the paper was done or why anyone should be surprised by its results. On top of that, the paper is full of flaws, mainly in its use of surrogate markers to try to define which group of resident physicians are the “better” doctors.

The paper looks at a Veterans Affairs hospital with teaching arrangements with two internal medicine residency programs, each affiliated with a different medical school, and compares outcomes in patients treated by residents from each program. The paper considers one residency program considerably more prestigious, and its residents thus “better” doctors, than the other.

To most healthcare professionals, the outcomes are probably not surprising: there is no variation in patient outcomes. Patients treated by the “better” doctors do not do better.

Among the paper’s many faults are the surrogate measurements the authors use to define the doctors of one residency program as “better” than those of the other:

[T]he residency programs are affiliated with two different medical schools where the attending physicians that supervise and train the residents are faculty members. These medical schools differ in their rankings. Some years, the school affiliated with Program A is the top school in the nation when ranked by the incoming students’ MCAT scores, and it is always near the top. In comparison, the lower-ranked program that serves this VA hospital is near the median of medical schools.

Another commonly used measure to compare medical schools is funding from the National Institutes of Health (NIH). This ranking identifies the major research-oriented medical schools, again with some of the most prestigious schools near the top. The medical school associated with Program A is again among the top schools in the U.S., whereas the lower-ranked program has an NIH funding level that is generally less than three out of every four medical schools.

Second, each training program is affiliated with another teaching hospital in the same city, in addition to the VA hospital. Program A’s “parent hospital” is ranked among the top 10 hospitals in the country according to the U.S. News and World Report Honor Roll rankings of hospitals. Out of 15 specialties ranked by U.S. News, Program A’s hospital is among the top 10 hospitals in the country for nearly half of them, and among the top 20 in nearly all of them (U.S. News & World Report, 2007). Meanwhile, Program B’s parent hospital is not a member of this Honor Roll overall or ranked among the top hospitals in terms of subspecialties.

Many of these have essentially nothing to do with the quality of the residents in each program. There is one measurement they cite with some validity:

[T]he pass rate for Internal Medicine is close to 100% for the residents in Program A compared to a pass rate of approximately 85% for Program B (a rate that is in the bottom quartile of the 391 programs listed).

I want to know independent patient-care quality measurements for the individual residents working in the VA. I want to know AOA membership rates. I want to know Step board scores.

So there it stands. You don’t get better general care from more prestigious academic centers or from “better” doctors.

No one should be surprised by this. Nor, despite the move toward pay-for-performance (P4P), do we currently, truly compensate physicians based on academic performance measures (board scores, prestige of training programs). This paper sheds essentially no new light on the delivery of care in the United States. Those who are surprised by its findings should continue to study healthcare, and refrain from commenting on it until they have more experience.
