Train wreck? What train wreck? Obamacare is gonna be AWESOME!
By Peter Suderman
A major new study of the effects of Medicaid, published in the New England Journal of Medicine yesterday, found that “Medicaid coverage generated no significant improvements in measured physical health outcomes in the first 2 years.” That is not exactly great news for Obamacare,
which relies on Medicaid for roughly half of its health coverage
expansion. In response, some of the health law’s backers are arguing that, well, we can’t be sure the study proves Medicaid has no health benefits, in part because the sample is small enough that the results are statistically underpowered. But
that’s not how the study’s initial results, which appeared far more
friendly to Medicaid, were reported and interpreted. Many of the
individuals who wrote about the study’s initial round of results,
released in July of 2011, were quick to tout the study’s robust
design, and the certainty of its conclusions.
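As a rough illustration of what “statistically underpowered” means in practice, consider a simple power calculation. This is my own sketch with hypothetical numbers, not anything drawn from the Oregon data: it assumes a two-sample comparison, a small standardized effect of 0.1, and a 5 percent significance threshold.

from scipy.stats import norm

def two_sample_power(effect_size, n_per_group, alpha=0.05):
    # Approximate power of a two-sided, two-sample z-test for a
    # standardized mean difference of `effect_size` (hypothetical inputs).
    z_crit = norm.ppf(1 - alpha / 2)              # significance cutoff
    ncp = effect_size * (n_per_group / 2) ** 0.5  # shift of the test statistic under H1
    return norm.cdf(-z_crit + ncp) + norm.cdf(-z_crit - ncp)

for n in (500, 2000, 10000):
    print(f"n per group = {n:>6}: power ~ {two_sample_power(0.1, n):.2f}")

With a few hundred people per group, a real effect of that size would be missed most of the time; it takes thousands per arm before the test reliably detects it. Whether the Oregon study’s actual samples and effect sizes fall into that range is exactly what the critics and defenders are now arguing over.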
Writing at Kaiser Health News, for example, The New
Republic’s Jonathan Cohn
declared that the study’s design “makes it unusually
significant.” A blog post published by the left-leaning Century
Foundation
announced that the study’s “findings were irrefutable.” Aaron
Carroll, an influential health policy blogger at The Incidental
Economist, emphasized the rigor of the study. “I’d like to
reiterate that this was a randomized controlled trial,” he
wrote. “An RCT is pretty much the best way to prove causality,
especially if it’s well done.” And because it’s an RCT, he
concluded, “we can even start talking causality.” Ezra Klein
published a column touting the study with the headline,
“Amazing Fact! Science Proves Health Insurance Works.” He explained
why the randomized study was so valuable: “The gold standard in
research is a study that randomly chooses who gets a new
treatment and who doesn’t. That way, you know your results are
unaffected by differences in the two populations you are
studying.”
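A toy simulation of the point Klein is making, using made-up numbers rather than anything from the Oregon lottery, shows why randomization matters: a coin-flip assignment tends to balance even characteristics the researchers never measured.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A hypothetical unmeasured trait that affects health outcomes.
baseline_risk = rng.normal(size=n)

# Lottery-style assignment: a coin flip, independent of the trait.
treated = rng.random(n) < 0.5

print("mean baseline risk, treated:", round(baseline_risk[treated].mean(), 3))
print("mean baseline risk, control:", round(baseline_risk[~treated].mean(), 3))

The two group means come out nearly identical, which is what lets differences in outcomes be attributed to the coverage itself rather than to who chose to sign up.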
Now, well, it’s all a little less clear. “The problem with the Oregon study,” Klein wrote this morning, “…is we don’t really know what we’re
learning.” Carroll, who was ready to start talking causality when
the first study was published, is now
counseling caution. “So chill, people. This is another piece of
evidence. It shows that some things improved for people who got
Medicaid. For others, changes weren’t statistically significant,
which isn’t the same thing as certainty of no effect. For still
others, the jury is still out.”
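Carroll’s distinction between “not statistically significant” and “no effect” is easy to see with a made-up confidence interval; the numbers below are purely illustrative, not the study’s.

from scipy.stats import norm

point_estimate = -1.3   # hypothetical improvement on some physical measure
std_error = 1.0         # hypothetical standard error

z = norm.ppf(0.975)     # 95% two-sided critical value
low, high = point_estimate - z * std_error, point_estimate + z * std_error
print(f"95% CI: ({low:.2f}, {high:.2f})")

The interval crosses zero, so the result is “not significant,” but it also contains effects large enough to matter. The data are simply too imprecise to distinguish between the two, which is the sense in which the jury is still out.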
It’s notable that the findings from the first round of study results were actually less robust than this week’s results. Not only did the first round cover just a single year; it also included no objective physical measures of health at all. Instead, the researchers did find big improvements in self-reported
health. People who got Medicaid merely said they felt a
lot better. And about two-thirds of that self-reported improvement
appeared before any medical treatment had been obtained. Yet that
was enough for many Obamacare backers to declare that certain
victory was at hand. Indeed, despite the lack of objective
measures, it was even enough for many reports to declare that we
now had irrefutable evidence that Medicaid definitely does improve
health.
The White House blog, for example, headlined an item on the
first study “Health Insurance Leads to Healthier Americans.” An ABC
News
report opened by saying that the study “proves that being insured through Medicaid benefits people physically.”
Health policy analyst Harold Pollack used the initial results to
ask, “Can conservatives please stop claiming that health
insurance doesn’t improve health?” Incidental Economist health
policy blogger Austin Frakt expressed his
confidence that "the research team will find that Medicaid does
lead to better health" while singling out the sturdiness of the
study's methodology and selection design for praise. A New York
Times
analysis concluded that "expanding insurance does not save
society money — as some advocates of preventive
medicine have
claimed — but it does appear to make people mentally and
physically healthier." Harvard health policy professor John
McDonough hailed the study, and dismissed those who counseled
caution about the study’s results. “Naysayers are already out in
force charging that the study results fail to identify actual
improvements in enrollees' health status,” he
wrote. “Those kinds of results are down the road.”
We’ve now gone down that road. But we didn’t find those kinds of
results. Not with the rigor that the study's authors deemed
necessary, anyway. Instead, we—or rather the researchers behind the study—found some improvement on the objective health measures. But not enough
to rise to the level of statistical significance. Not enough to
know with high confidence that Medicaid was the cause. This is not
nothing. It's even potentially interesting. But it is far from
definitive proof, or even just a strong reason to suspect, that
Medicaid actually makes a measurable difference in objective health
outcomes.
And while some on the left are still claiming victory (in part because of the objective health measures, and in part because the study showed a decreased risk of health-related financial catastrophe and a decreased probability of screening positive for depression), the particulars of the study’s results should at the
very least complicate their arguments.
If the primary goal of a program like Medicaid is to protect
individuals from financial shocks associated with medical expenses,
then why not support a far, far cheaper subsidized catastrophic
insurance program instead of low-deductible insurance through
Medicaid? If what the poor really need is financial protection,
rather than health services, then why not just give them cash?
The depression results are unusual as well, because the study
found no concurrent rise in the use of medication for depression.
Might some of the difference be attributable to the fact that
Medicaid beneficiaries had won the health insurance lottery and, as
we know, felt better because of it?
As for the too-small-to-be-statistically-significant
improvements on objective measures—even if we could be confident
that the improvements were attributable to the provision of
Medicaid, would those improvements be worth the high price of the
program, both in its current form and its planned expansion under
Obamacare? Medicaid currently
costs the federal government about $250 billion a year, a
figure that's projected to rise past $570 billion over the next
decade. That doesn't count the hundreds of billions that state
governments also spend on the program. (Health costs, many of which
are related to Medicaid, are the biggest cause of budget
trouble for states, according to the Government Accountability
Office.) And for that, beneficiaries are getting health benefits
that are, at best, highly uncertain.
For what it’s worth, I am glad to see that liberal health wonks
are now preaching caution. Given the study’s results, they are
right to do so. I wish, however, that they had done so from the
outset, and I hope that they will adopt a less confident approach
in the future. The second-round results of Oregon’s experiment with
Medicaid suggest that the program may produce some improvement on a few health measures. But further study could just as easily show that it doesn’t, and that the improvements found
here are little more than statistical noise. Science, in other
words, has not really proven that Medicaid works, or that it
doesn’t. But it has strongly suggested, in a gold-standard study,
that on objective measures of physical health, those with coverage
through the program may not be much better off than those without.