
Headlines, Heuristics and Subtlety in Interpreting Connected Health Studies

March 28, 2016


We live in a headline/hyperlinked world.  A couple of years back, I learned through happenstance that my most popular blog posts all had catchy titles.  I’m pretty confident that people who read this blog do more than scan the titles, but there is so much information coming at us these days that it’s often difficult to get much beyond the headline.  Another consequence of information overload is that we naturally apply heuristics, or shortcuts in our thinking, to avoid dealing with a high degree of complexity.  Let’s face it: it’s work to think!

In this context, I thought it would be worth talking about two recent headlines that seem to be setbacks for the inexorable forward march of connected health.  These come in the form of peer-reviewed studies, so our instinct is to pay close attention.

In fact, one comes from an undisputed leader in the field, Dr. Eric Topol.  His group recently published a paper where they examined the utility of a series of medical/health tracking devices as tools for health improvement in a cohort of folks with chronic illness.  In our parlance, they put a feedback loop into these patients’ lives.  It’s hard to say for sure from the study description, but it sounds like the intervention was mostly about giving patients insights from their own data.  I don’t see much in the paper about coaching, motivation, etc.

If it is true that the interactivity/coaching/motivation component was light, that may explain the lackluster results.  We find that feedback loops alone are relatively weak motivators.  It is also possible that, because the sample included a mix of chronic illnesses, it would be harder to see a positive effect.  One principle of clinical trial design is to minimize all differences between the comparison groups except the intervention.  Having a group with varying diseases makes it harder to say for sure that any effects (or lack of effects) were due to the intervention itself.


Dr. Topol is an experienced researcher and academician.  When they designed the study, I am confident they had the right intentions in mind.  My guess is they felt like they were studying the effect of mobile health and wearable technology on health (more on that at the end of the post). But you can see that, in retrospect, the likelihood of teasing out a positive effect was relatively low.

The other paper, from JAMA Internal Medicine, reported on a high-profile trial for congestive heart failure, which involved using telemonitoring and a nurse call center intervention after discharge.  This trial included a large sample size and was published in a well-respected and well-read journal.  On initial reading, it was less clear to me why they did not see an effect.  I had to read thoroughly, way beyond the headline, to get an idea.  The authors, in the discussion section, provide several thoughtful possibilities.

One that jumps out to me is that the intervention was not integrated into the physician practices caring for the patients.  In our experience with CHF telemonitoring, it is crucial that the telemonitoring nurses have both access to the physician practices and the trust of the patients’ MDs.  Sometimes a simple medication change can prevent a readmission if administered in a timely manner. This requires speedy communication between the telemonitoring nurse and the prescribing physician.  If that connection can’t be made, the patient may wind up in the emergency room and the telemonitoring is for naught.

It is also fascinating that the authors point out that adherence to the intervention was only about 60%.  This reminds me of another high-profile paper from 2010 that came to the conclusion that telemonitoring for CHF ‘doesn’t work.’  I blogged on that at the time, pointing out that their adherence rate was 50%.  In both cases, with such low adherence, it is not surprising that no effect was seen.
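To see why, consider a rough back-of-the-envelope calculation with invented numbers (not figures from either trial): suppose telemonitoring cuts readmissions by 30% among patients who actually use it and does nothing for those who don’t.  If only 60% of the intervention arm uses it, the reduction visible when the whole arm is compared to control is roughly 0.60 × 30% ≈ 18%, and a trial powered to detect a 30% reduction can easily miss an 18% one.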

In our heart failure program, adherence is close to 100%.  As a result, our readmission rates (both all-cause and CHF-related) are consistently about 50% lower, and we showed that our intervention is correlated with a 40% improvement in mortality over six months.  The telemonitoring nurses from Partners HealthCare at Home cajole the patients in the most caring way, and patients are therefore quite good at sending in their daily vitals.  If they don’t, the nurses call to find out why.  Our program is also tightly aligned with the patients’ referring practitioners.  I suspect these two features are important in explaining our outcomes.

A prime example of how these study headlines can derail the advancement of connected health was captured in an email I received the other day from my good friend Chris Wasden.  Referring to the JAMA Internal Medicine study, he said, “Our docs are using this research to indicate they should not waste their time on digital health.”

Perhaps a spirited discussion of some of these nuances will change some minds.


And that leads me again to the concept of headlines and heuristics.  How could ‘telemonitoring’ in CHF lead to such disparate results?  Is our work wrong? Spurious?  I don’t believe so.  Rather, I think we’ve collectively fallen into a trap of treating ‘mobile health’ and ‘telemonitoring’ as monolithic things when, as you can see, these interventions are designed quite differently.

I believe we are susceptible to this sort of confusion because we apply a familiar heuristic.  We are used to reading about clinical trials for new therapeutics or devices.  A chemical is a chemical and a device is a device.  In a pure setting, when applied to a uniform population of individuals, a chemical either has an effect or it doesn’t.  Connected health interventions, by contrast, are multifaceted and complex.  Thus the apparent contradiction that telemonitoring works in our hands but not in the recent JAMA Internal Medicine paper.

My conclusion is that the next phase of research in this area should move away from testing technologies. Instead, we should focus on teasing out those design aspects of interventions that predict intervention success. Now I think that’s a good headline!

I’ll start out by offering two hypotheses:

  1. mHealth interventions that are separate and distinct from the patient’s ongoing care process are less likely to be successful than those that are integrated.
  2. If adherence to a program is low, it will not be successful. Early-phase testing, before a formal clinical trial, should include work to fine-tune design features that promote adherence. Chapter 8 of The Internet of Healthy Things offers some ideas on this.

As I said five years ago, I’m not sure intention-to-treat analysis is the right way to evaluate connected health interventions. If patients are non-adherent to the intervention, is it any surprise that they don’t respond? I’m having trouble wrapping my head around that one.
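To make the distinction concrete, here is a minimal Python sketch of how an intention-to-treat comparison and a per-protocol (adherent patients only) comparison can tell different stories.  The 60% adherence figure echoes the JAMA Internal Medicine report; every other number is invented for illustration, so this is a sketch of the dilution effect, not a reanalysis of either study.

    import random

    random.seed(0)

    N = 10_000            # patients per arm (hypothetical)
    ADHERENCE = 0.60      # roughly the adherence reported in the JAMA IM trial
    BASE_RISK = 0.25      # readmission risk under usual care (made-up)
    ADHERENT_RISK = 0.15  # readmission risk for patients who actually use telemonitoring (made-up)

    # Control arm: usual care only.
    control_readmits = sum(random.random() < BASE_RISK for _ in range(N))

    # Intervention arm: only adherent patients get the lower risk.
    readmitted, adherent = [], []
    for _ in range(N):
        adheres = random.random() < ADHERENCE
        risk = ADHERENT_RISK if adheres else BASE_RISK
        adherent.append(adheres)
        readmitted.append(random.random() < risk)

    itt_rate = sum(readmitted) / N  # intention-to-treat: everyone randomized to the arm
    pp_rate = sum(r for r, a in zip(readmitted, adherent) if a) / sum(adherent)  # adherent only

    print(f"usual care readmission rate:  {control_readmits / N:.3f}")
    print(f"intention-to-treat rate:      {itt_rate:.3f}")
    print(f"per-protocol (adherent) rate: {pp_rate:.3f}")

With these assumptions, the intervention arm as a whole shows roughly a 0.19 readmission rate against 0.25 for usual care, while adherent patients sit near 0.15; the gap between those two views is the dilution described above.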

4 Comments
  1. Jim Reid PA, PMP
    March 31, 2016 9:21 pm

    That’s what I like about this connected world. For every misleading hyperlink, incomplete report, and shallow summary, there are always one or two smart guys who will take the extra time to tease out the details and offer rational analyses to counter the headlines – and then make them efficiently, and at no cost, available to the masses. Joe, you are one of those guys! As usual your insights about these studies are spot on and all the while diplomatic. Thanks for leading again!

    Heuristics played a role in my review of these studies too, but perhaps one layer deeper. I did at least read the body of the reports, not just the horror-inspiring headlines, but then quickly developed my own conclusions, which put concerns about the potential impact of these studies to rest. In reality, the studies were effectively designed to fail. As Joe has stated here before, hindsight is always 20/20, and in this case there’s a specific reason. It has to do with the time required to design and conduct such studies, collect and analyze the data, and then get them published in prestigious journals. It’s essentially a fault of the academic research process; it just doesn’t work well in fast-moving environments. As Joe states, “When they designed the study, I am confident they had the right intentions in mind.” Near as I can tell, the problem is that while they were conducting a formal study, in which design parameters are often fixed for the duration, many of us were learning valuable lessons in short-cycle agile pilots, pivoting toward improvements as they were discovered and adding many of the missing components Joe already described. That leapfrogged the design parameters of these studies, which is why we got more encouraging outcomes. This is beginning to sound like an echo from a hundred prior posts about medicine being too fixed in tradition to keep up with today’s rate of change… so I’ll stop here.

  2. April 1, 2016 8:05 am

    Yes, there is that issue of long cycle times for clinical trials and academic publications. I haven’t figured out a good way to get around that one yet.


