
Emotional Automation Revisited

February 22, 2011

Last week we all watched in awe as the IBM computer, Watson, trounced two of Jeopardy’s finest.  This event has been much heralded but it is worth stopping for just a minute to reflect on the experience of watching Jeopardy those three nights.  I had no trouble rooting for Watson, feeling disappointed or embarrassed when he missed a question and chuckling when he displayed any behavior that seemed the least bit human.  I knew the whole time, on one level, that Watson is a computer.  On another level though, I bonded with him and felt a good deal of emotion regarding his success.

MIT Prof. Sherry Turkle recently released a book entitled Alone Together and was also interviewed on TechCrunch.  Turkle puts forth the view that technology is a poor substitute for interaction with a human being. However, she notes that when technologies (robots, relational agents and the like) respond to us, they push “Darwinian buttons,” prompting us to create a mental construct that we are interacting with a sentient being.  This brings a host of emotions to the communication, including affection.  Turkle argues that in the realm of human relationships this phenomenon is unhealthy for our species.

I’d like to bring in principles from behavioral psychologist Robert Cialdini, who has authored several books on the psychology of persuasion.  Cialdini offers simple tools that can be used in everyday life to persuade others to adopt one’s point of view, and he lays out solid experimental evidence that these tools are effective, in most cases without the recipient being aware.  The six principles are:

  • Reciprocation  (we feel obligated to return favors performed for us)
  • Authority (we look to experts to show us the way)
  • Commitment/consistency  (we want to act consistently with our commitments and values)
  • Scarcity (the less available the resource, the more we want it)
  • Liking (the more we like people, the more we want to say yes to them)
  • Social proof  (we look to what others do to guide our behavior)

Before I combine the thinking of these two individuals, remember: there is a supply and demand problem in healthcare delivery.  In the U.S., we have 24 million people with diabetes and their ranks grow at 8% per year.  One in three people over the age of 20 has high blood pressure and one in 10 over 65 has congestive heart failure.  As it stands in 2011, we already have both physician and nurse shortages.  We must be looking for ways to deliver care that do not involve intense one-on-one relationships with healthcare providers, at least for every healthcare decision that is made.

So here is my idea.  Can we take advantage of some of those “Darwinian buttons” that deceive us into believing we’re interacting with a person rather than a technology, and pair them with some of Cialdini’s persuasive techniques, in the hope of delivering compelling, motivational health-related messaging to individuals?

We have evidence that this approach can be effective.  In a study conducted at the Center for Connected Health where we measured adherence to activity goals, individuals who met three times a week with a computerized relational agent (courtesy of Tim Bickmore, Northeastern University) had almost three times the adherence to their step-count goals as those in a control group.

My question is:  how unhealthy is it for us to leverage the powers of technologies such as robots to ‘push Darwinian buttons’ in the context of healthcare, where we already don’t have enough providers to go around and the evidence is strong that the situation will get much worse?  Can we spread our limited provider resources over more patient demand by using these technologies? Will we feel cared for and have better outcomes?

I believe that with properly architected systems, these goals can be met.

23 Comments
  1. Susan Lane permalink
    February 22, 2011 4:48 pm

    I heard Sherry Turkle’s interview – interesting – however so many in nursing homes have no one to relate to that it seems that even a ‘robot’ is better than nothing – reminds me of how we cathect to toys or blankets as very young children and feel safe and comforted with them.

  2. Beverly Burke permalink
    February 23, 2011 11:51 am

I’m an RN and public health professional, so my comments should be viewed through those lenses. Digital healthcare is here now and can only expand. The question is how. Can digital devices improve the health care of the individual or population AND improve the working conditions of the health care workers, primarily nurses and primary care docs? The way is in sound planning, by which I mean: where does the digital information end up? If it is going to end up in the head of a human end user, then has the plan considered the capacity of the end user to synthesize the information, or, in the rush to get devices to market, have we just created a new SHINY burnout stream for the end user?
    Regarding nurse and physician shortages, all would be well advised to look beyond the superficial sheath of digital technologies to consider the factors around the shortages. Nurses have had little support for improved working conditions from their own management, including national associations like the ANA and Director of Nursing services in institutions. Improved conditions mean lifting equipment, improved schedules, no 10 hour shifts etc. In physician primary care improved conditions include workable patient schedules i.e. no more than 15 patients per day, and no bottlenecking of orders, imaging and lab evaluations coming from more than one stream of information. If a primary care doc has 15 diabetics s/he sees in one day that is an enormous amount of information and behavioral management to coordinate with information coming from one source, the medical record. If s/he is expected to field additional information from patients sending digital information to the record/office that could be untenable.

    • February 23, 2011 5:01 pm

Very thoughtful commentary. I would always agree that systems and devices must be thoughtfully implemented and that end users cannot be overwhelmed. I guess I’d be a bit more likely to design the systems to be most usable for the patients first and worry less about providers. As to the challenges you raise on shortages, they are real challenges indeed. However, we can’t train enough people to keep up with demand no matter how we tweak the other variables.

  3. February 23, 2011 4:05 pm

    In a thought-provoking post, Joe Kvedar has discussed not only Robert Cialdini, who is happy to instruct health care types (and also Madison Avenue types) in the uses of technology for persuasion, but also Sherry Turkle, who worries that technology is getting all too good at engaging attention, redirecting emotion, and otherwise playing the persuasion game.

    If the goal is to improve health, is there nevertheless a problem, Joe wonders, with people in authority leveraging technology that increasingly taps basic human instincts?

    As inherently interesting and potentially effective as new technologies are, without doubt the answer to the question is why, yes, there could be a problem.

    As psychologists, neuroscientists and behavioral economists get better at what they do and gain greater sway over the design of patient compliance programs, there will be wonderful improvements in health at times — and worrisome dangers of undue manipulation at other times.

    Distinguishing the one situation from the other will not always be easy. Evidence-based medicine may be of little help because the state of the evidence will often be so imperfect. The pharma industry and other interests will at times have undue influence over physician advisory panels. In areas such as mental health, diagnoses and treatment options can be subject to dispute. The issue in these cases won’t be whether patients can be persuaded to comply. The issue will be compliance with what.

    And these are just the merits of the matter. The politics of the matter may travel a track of their own. As technologists, providers and public health officials get creative with nudges, choice architectures, and persuasive technologies, watch for pushback from both the left and the right, among those who see Big Brother and the Nanny State extending too far into the domains of personal choice and informed consent.

    Before we ever reach such a point, it would be wonderful to see, from a champion or two of these intriguing approaches, an intensified effort to develop a new ethics of compliance to go along with the new technologies of compliance.

    • February 23, 2011 4:57 pm

Thanks, Mike, for your articulate insights. I agree that before we invoke images of Blade Runner and our earth overrun with Replicants, it would be interesting to get a glimpse of this future and see if it can indeed ease some of the burden we now place on health care workers. Tim Bickmore’s work seems to be promising in this regard.

  4. Catherine Yanda permalink
    February 28, 2011 5:42 pm

I agree that leveraging technology for better outcomes may become a bright spot in healthcare delivery if it is designed and managed as a direct connection between medical providers and patients. This includes doing a SWOT analysis on the effectiveness of the HMO case manager, and possibly dismantling the system as technology progresses. The cost of maintaining gatekeepers and intermediaries, put in place primarily by practice managers and the insurance industry to regulate cost by limiting access, made sense on paper, but not when the system fails to demonstrate meaningful improvement in outcomes and quality of care.

    One potential replacement for some of this layer is to enlist and connect family members responsible for care using technology. Although they are classed as “informal” caregivers, leveraging the observations, and skill-set of family member / caregivers into the electronic “medical home” should not be overlooked. The emotional connection and direct communication in this case would be tempered with a desire for good outcome and advanced learning. It may also have the effect of being able to identify and offer training / future career development to a self-selected group of people who can in return for their selfless investment expect to transition into a healthcare occupation at some point. Never underestimate the power of collaborating with family caregivers when it comes to technology.

    • February 28, 2011 8:23 pm

I applaud the idea that we might use family members as part of a ‘provider substitution’ strategy. The more creative solutions such as this we can implement, the better.

  5. February 28, 2011 5:54 pm

    I believe you are talking about the potential for large scale automation of health messaging but isn’t receptivity dependent on individuals and circumstances?

    In my Social Media and Health course at Tufts School of Medicine we looked at the phenomenon of health eCards for Valentine’s Day. Some were focused on donations (if you “like” Macy’s they sent a Valentine’s eCard and donated $1 to Go Red for Women) and others on health messages (in essence: I love you, quit smoking). While repeated exposure to health messages can lead to behavior changes, my students uniformly agreed that there has to be an appropriateness to their positioning and Valentine’s Day cards, electronic or paper, were not the right place except in rare circumstances. I think your “hope of delivering compelling, motivational health-related messaging to individuals” needs to similarly be tailored to individuals and circumstances to increase receptivity.

    • February 28, 2011 8:25 pm

I completely agree, Lisa, that customization is key to engagement and behavior change. Thanks for offering your wisdom by way of a comment.

  6. February 28, 2011 7:08 pm

    It is inevitable (so long as we have a stable economy) that technology will play an increasing role in healthcare. The demand is there, so the technology will follow.

    One of the worst facets of healthcare (generally) is its near-total ambivalence to behaviors and their intrinsic and social drivers. The market is rewarding creative games, wellness tools, social networking apps, devices, incentive systems, autoresponders etc… Their efficacy is measurable.

    Technology is a facilitator. It is a means, not an end. So robotic healers will be healthful if people design them to be so, and if they prove to be so. Like pharma products, they will be sold/lobbied hard, and some will inevitably prove harmful — but markets and non-market factors (regulation) will make them more healthful than we humans can be without their speed, scale, silliness and other aid.

  7. Graham Creasey permalink
    February 28, 2011 10:56 pm

    No need to “deceive us into believing we’re interacting with a person”. (In fact that would probably be counter-productive when people realized they had been deceived, which is not a healthy personal relationship.) We interact with both persons and impersonal systems as needed – nobody minds using an ATM instead of a person to get cash – so use whichever works. Even machines are intermediaries programmed by people – just don’t confuse them and deceive people.

    • March 1, 2011 9:04 pm

Deceive may be the wrong word. I borrowed that one from Sherry Turkle. What I mean to say is that if you gave people a choice of meeting with a person or an ATM (in the late 50’s and early 60’s), I am pretty sure they would have voted for a person. Now we prefer the ATM. In healthcare, we have to get folks to the point where they are willing to accept that they can feel cared for by a technology. That is what I meant.

  8. March 1, 2011 8:27 am

    Many, many people cannot get the help they need from health professionals or caregivers or family members, and this is not going to change – if anything, the trends are toward diminishing access to care, per Joe’s note. The status quo involves many (most?) of our isolated elders being “cared for” by their TV sets, and poor health behavior driving the majority of our health care costs, in large part because getting to a trainer or health counselor or doctor is too inconvenient, too expensive, or stigmatizing. Many of those who do get help from others often receive indifferent, incompetent (25% of inpatients receive unnecessary or harmful treatments), neglectful or even abusive “care” (i.e., elder abuse).

    So, we can either try to design our way out of this situation, or just whine about it. When asked to compare your idealistic image of an ever-present, compassionate human caregiver with the toy examples of companion robots Turkle describes in her book, of course we would all want the former for our grandmothers. But this elder haven just doesn’t (and won’t) exist for the vast majority of us (can you spend 24×7 attending to your grandmother’s every need?). Turkle says that long-term interactions with these toys, to the exclusion of interactions with other humans, can be mentally unhealthy and lead to depression. Sure. But this is a design problem. How do we build companion robots that maintain (or actually improve?) mental health, perceived social support, and social network support, and do it with such grace that we don’t feel guilty and embarrassed about leaving grandma alone with them? Of course, one of the most important functions of these robots should be promoting the establishment and maintenance of human relationships for our elders. The alternative is not a compassionate caregiver, it’s a TV set.

    The case for the use of persuasive technology in behavioral medicine is much easier and far more compelling. Most doctors whom I’ve spoken with would not hesitate to use persuasion or deception to get their patients to adhere to a therapeutic regimen that was essential for their health. We debated this a little at an AI in Eldercare workshop a few years ago (attended by Turkle and several ethicists). As long as (1) the patient is free to decide whether to use the technology or not; (2) they are free to decide whether to follow the system’s recommendations or not; (3) the role of the system is primarily to provide information to enable the patient to make informed choices; and (4) appropriate amounts of persuasion are used by the system only in cases in which the user is clearly making an unhealthy decision, then there are no ethical issues. Could the use of these technologies lead to isolation and depression? Of course, if they are poorly designed. Properly designed, these technologies can not only help people stay on their diets, exercise, and take their medications, but actually enhance their mental health and promote vibrant social lives.


    • March 1, 2011 9:06 pm

      Thanks so much, Tim, for the lengthy, valuable commentary

  9. Bryce Williams permalink
    March 1, 2011 2:34 pm


Great thinking. From my point of view there is much to be gained by augmenting, not replacing, personal interaction with virtual interfaces. Aligning an individual’s technology preferences with our natural human traits that either support or detract from meeting our health goals seems like a win-win. For those of you who are runners, you may already have seen or used the Nike+ platform that merges wireless tracking, social media and self-designed avatars (minis) together. Sure, my mini’s not real, but I still appreciate the encouragement and/or smack talk based on how diligent I’ve been with my running of late. If you haven’t already done so, check out Jane McGonigal’s book “Reality Is Broken” for a great take on game theory and its potential positive application to wellbeing.

    • March 1, 2011 9:08 pm

Nice to hear from you, Bryce. Thanks for the tip. I’ll add it to my reading list on this topic.

  10. March 1, 2011 5:08 pm

    A crucial component of healthcare reform is to transform our care delivery system to improve quality and control costs. And a significant part of this push is technology.

    The power of automation, through technology that addresses population health management, can enable a highly personal and customized interaction. The technology helps identify what needs to be done for which patient, allowing care managers to improve upon their treatment plan and boost quality of care. Technology doesn’t replace human interaction in healthcare, but we need to automate what can be done and extend the reach of the provider / care team outside the office.

    Simply identifying patients who have gaps in care is not enough. Unless an organization has a means to effectively reach out to patients with gaps in care, those patients will at best delay treatment, and at worst not seek treatment at all.

Here at Phytel, we have shown through real customer case studies that technology-driven, automated population health programs enable providers to expand the reach of their services beyond the standard office visit. Such a program is also a primary tool in empowering patients to stay on their care plans and prevent unnecessary illnesses, or illnesses that build upon a current one. As such, it is an important tool for managing the health of a population.

    Stratifying patient populations and using technology to make the best and most efficient use of our resources can enhance our care delivery models and improve the health of patients.

    Joe, in answer to your questions:
    Can we spread our limited provider resources over more patient demand by using these technologies? Will we feel cared for and have better outcomes?

    I say, “Yes.”

  11. March 2, 2011 8:00 am

    Please note that I am not in the “medical world”.
    There have been a lot of comments posted so far that are very interesting and they certainly pose thoughts for future applications.

    There will be many applications that will be deployed in the near future – whether they are robots interfacing with people, software applications on PCs or on the internet, or devices that people will use to monitor their personal symptoms, wellness programs, etc. Someday this data will be sent via satellite to databases where it then will be stored and analyzed when necessary.

    My thoughts are NOT related to the interaction of people and robots. I will let the experts and the users determine this… My thoughts concern how to best use all the data that is generated.

Applications will be needed to either aid or replace doctors in certain situations (such as in rural areas, or in under-developed countries). Even now, certain pathology-related applications require doctors to read and interpret complicated test results in order to determine a diagnosis or to specify additional tests.

    Besides having fewer medical providers available, there is a need for more and more complex, non-linear data analysis. Often there is too much data for humans to handle now; and it is only going to get worse as more and more Baby Boomers (and others) will need diagnostic and prognostic help.

    On a similar note there is the subject of patient safety. Hospitals should use applications now to reduce their safety issues and their insurance costs. An automated second opinion could significantly reduce health care costs and provide a better level of service in the US.

    This type of technology could be used in robots, devices, and software applications. It would allow subject matter experts (practitioners) to define their reasoning and create solutions that can handle complex, interrelated (non-linear) symptoms.

    This would be a policy-based system, rather than a rule-based system. The decisions and actions that are controlled by the policies would be 100% explainable and auditable. Policies could be validated by academia or the government, and perhaps avoid some decisions that might be profit motivated. Auditability is a necessary attribute for Medical Informatics.
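The explainability the commenter describes can be made concrete. As a minimal sketch (the policy names and thresholds below are hypothetical illustrations, not clinical guidance), a policy-based system might pair every recommendation with a record of exactly which policies produced it, so each decision is auditable after the fact:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# A "policy" here is a named, human-readable condition plus a recommendation.
# Unlike an opaque rule engine, every decision records exactly which policies
# fired, so the outcome is fully explainable and auditable.

@dataclass
class Policy:
    name: str
    applies: Callable[[dict], bool]   # condition over patient data
    recommendation: str

@dataclass
class Decision:
    recommendations: List[str]
    audit_trail: List[str] = field(default_factory=list)  # names of policies that fired

def evaluate(policies: List[Policy], patient: dict) -> Decision:
    decision = Decision(recommendations=[])
    for p in policies:
        if p.applies(patient):
            decision.recommendations.append(p.recommendation)
            decision.audit_trail.append(p.name)
    return decision

# Hypothetical policies for illustration only -- thresholds are not clinical guidance.
policies = [
    Policy("hypertension-followup",
           lambda pt: pt.get("systolic_bp", 0) >= 140,
           "Schedule blood pressure follow-up"),
    Policy("a1c-recheck",
           lambda pt: pt.get("hba1c", 0) >= 7.0,
           "Recheck HbA1c in 3 months"),
]

result = evaluate(policies, {"systolic_bp": 150, "hba1c": 6.1})
print(result.recommendations)  # ['Schedule blood pressure follow-up']
print(result.audit_trail)      # ['hypertension-followup']
```

Because the audit trail names the policy behind each recommendation, an external reviewer (academic or governmental, as suggested above) could validate the policy list itself rather than reverse-engineer individual decisions.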

    The medical industry may resist, however. Most often we encounter practitioners who resist the technology because they are afraid they will be replaced by this technology. {Perhaps this subject can be addressed in another blog!}

    • March 2, 2011 6:17 pm

Very thoughtful. Thanks for commenting. Your ‘non-medical’ status only allows you to apply a more open mind to these complex problems.

  12. March 12, 2011 10:08 pm

With the advent of new medical technology called Molecular Diagnostics, we have a new double-edged sword for the new world order. While DNA and RNA analysis will prove useful in reducing medical costs and taking the guesswork out of prescribing drugs, there is the danger of DNA profiles being used in the nefarious new world order conspiracy for domination and control.


  1. Automated Care: Thermostat of Health or Ponzi Scheme? « The cHealth Blog
