Watson vs. Humans

IBM’s Watson computer, which recently made an impressive showing on Jeopardy!, will next be tasked with helping clinicians in the healthcare environment. In a deal with Nuance Communications, Inc., Watson’s Deep Question Answering, Natural Language Processing, and Machine Learning capabilities will be combined with Nuance’s speech recognition and Clinical Language Understanding to address the increasingly complex demands of healthcare diagnostics. The IBM press release has more details (and lots of capitalized words) on the deal.

One of the hotter topics surrounding Watson’s dominant performance over the three-day demo was its (his?) spectacular failure in Final Jeopardy!. The category: US Cities; the answer: “Its largest airport was named for a World War II hero; its second largest, for a World War II battle.” Watson answered “What is Toronto?” IBM’s explanation suggests that Watson relies on statistical analysis for its reasoning, and that it downgrades the importance of the category description because, in this game, categories can be vague or downright tricky. But you and I would immediately recognize “US Cities” as a constrained set in which the answer must reside. Watson was confused further, IBM explained, because there are several “Torontos” in the US.

Uh-oh! If you look at the map they provide showing the various US cities named Toronto, it is clear that poor Watson must depend on what others tell him is true. As of this writing, the map (click on “What is Toronto?”) labels “Toronto, MO” as being in Illinois, and “Toronto, IL” as being in Missouri. I thought Watson was impressive, and I think IBM has made great strides. As Watson demonstrated, machines can make mistakes. As the map at IBM’s site demonstrates, humans make mistakes too. We just have to remember that, at least for now, the only knowledge computers can learn is what we offer them…

[Update February 21, 2011: Alas, IBM appears to have removed the map above from its website. Cached versions don’t link to the map anymore either…]

© 2011 Rod Piechowski, Inc. Consulting


Embedded Knowledge: Scanner Opportunity

It seems several patients who received CT brain perfusion scans at some US hospitals were exposed to an apparent overdose of radiation. The New York Times followed up on the story, which revealed a couple of interesting points. While the scanners have built-in features that allow lower doses of radiation, in some circumstances the scanners actually increase the dose. Oddly, the machines in question do not warn clinicians when the amount of radiation exceeds a safe threshold. Here, technology is used as a valued clinical tool, but it remains isolated from important facts we know about its use. This lack of “self-awareness” on the part of the technology reinforces the role of people, culture, and process in the data life cycle. For example:

  • With information technology, scanners (and other pieces of equipment) could record, or make available to the electronic health record, the machine identity and the dose administered, so that the event can be linked to the patient for later analysis. This is as much a patient safety issue as it is one of quality reporting. It should be a simple matter to generate a list of all patients who have been treated using a particular piece of equipment. Part of the problem here is the creation of medical device identifiers and standardized protocols for communicating among devices. This work is underway, but still not completely implemented;
  • Scanners could be made aware of current guidelines for radiation dosages, and they should not operate above those levels unless deliberately overridden by the clinician. We already use information technology to check for drug interactions, and this is the same principle;
  • Guidelines should be standardized in structure and application, to ensure that there is no variance from vendor to vendor;
  • Physicians must actively participate in developing the data life cycle (from data to knowledge). If they are responsible for entering data, they have every right to expect that it is contributing to the development of better knowledge systems, which should continue to improve with each vendor update;
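The dose-guard idea in the second bullet can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual implementation: the threshold value, field names, and functions here are all invented for the example. The point is simply that the same pattern we already use for drug-interaction checks applies directly: check the order against a guideline, require a deliberate override to exceed it, and record device, dose, and outcome so the event can later be linked back to the patient.

```python
from dataclasses import dataclass

# Illustrative guideline ceiling for a scan series, in mGy.
# A real value would come from published clinical guidelines.
GUIDELINE_MAX_DOSE_MGY = 500.0

@dataclass
class ScanOrder:
    patient_id: str
    device_id: str
    dose_mgy: float
    clinician_override: bool = False

def authorize_scan(order: ScanOrder, audit_log: list) -> bool:
    """Allow the scan only if the dose is within the guideline,
    or if a clinician has deliberately overridden the limit."""
    within_limit = order.dose_mgy <= GUIDELINE_MAX_DOSE_MGY
    allowed = within_limit or order.clinician_override
    # Record machine identity, dose, and outcome so the event can be
    # linked to the patient for safety and quality reporting later.
    audit_log.append({
        "patient_id": order.patient_id,
        "device_id": order.device_id,
        "dose_mgy": order.dose_mgy,
        "within_limit": within_limit,
        "allowed": allowed,
    })
    return allowed
```

Nothing here is technically difficult; the hard part, as the bullets above suggest, is agreeing on the identifiers, the guideline values, and the protocols that would let every vendor’s equipment participate.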

Technology performs as it is programmed to do. We cannot blame it for shortcomings that are the result of muddy processes. Figuring out how to do this properly involves cooperation among vendors, clinicians, the government, and many others. This is an example of knowing better (we know what constitutes a radiation overdose) but not yet having the processes in place to embed that knowledge into our clinical tools and the information systems that we will increasingly use to connect them. We must create processes that allow us to more fully trust the technology now, so that we can come to rely upon it, and build upon its role in the future.

The full NYT story is here.

-Rod Piechowski

Copyright © 2010, Rod Piechowski, Inc., Consulting


The Data Life Cycle

This year and most of next will likely be spent figuring out how to get EHR systems installed in hospitals and physician offices. Most of the organizations making this transition are reacting to the federal government’s incentives, and to the penalties for failing to adopt. But there is a much bigger, long-range picture that should provide a useful framework for healthcare leaders. The meaningful use rules require providers to report quality measures to various agencies for analysis. Some of this data will be used to determine whether the system is being used at all, in order to manage incentive payments, especially in the beginning. Other data, some not even conceived of yet, can be used to improve what we know about medicine and to develop best prevention and treatment practices. This diagram shows one way to look at the process.

From "data" to "knowledge"

Today, at least in the US, we’re in the first quadrant in the lower left corner: we are just entering the first round of data collection on a national level. In the next quadrant, data will be aggregated and studied for various purposes. Early on, the type of data collected will help determine compliance with meaningful use and other quality reporting initiatives. In quadrants 1 and 2, it’s still data, or at best “information.” Quadrants 3 and 4 represent the process of learning from that information and then finding ways to transfer that knowledge into support systems that can improve point-of-care quality and value for both the patient and the healthcare system.
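The step from quadrant 1 to quadrant 2 can be made concrete with a toy example. Everything below is invented for illustration (the measure, the field names, the records); the point is just the shape of the move: raw per-patient data in, an aggregated quality-measure rate out.

```python
# Quadrant 1: raw data collected per patient. These records and
# fields are hypothetical, chosen only to illustrate aggregation.
records = [
    {"patient_id": "a", "diabetic": True,  "hba1c_tested": True},
    {"patient_id": "b", "diabetic": True,  "hba1c_tested": False},
    {"patient_id": "c", "diabetic": False, "hba1c_tested": False},
]

def measure_rate(records):
    """Quadrant 2: aggregate the data into information --
    here, the share of diabetic patients with an HbA1c test on record."""
    denominator = [r for r in records if r["diabetic"]]
    numerator = [r for r in denominator if r["hba1c_tested"]]
    return len(numerator) / len(denominator) if denominator else None
```

Quadrants 3 and 4 are where the hard questions live: a rate like this only becomes knowledge when someone interprets it, and only becomes a support system when that interpretation is embedded back into the tools clinicians use.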

The interesting questions that must be addressed in order to accomplish this are at the heart of “The Art of Medicine and Technology,” and what we want to explore on this site. Some questions arise at each point along the cycle:

Quadrant 1:

  • What kind of data must we collect at this point in order to add value in Quadrant 4?
  • Are we collecting it now?
  • Will it require re-engineering systems?

Quadrant 2:

  • Who does this work?
  • Who has access to the data?
  • How do we ensure that the analysis is unbiased and adds to the art of medicine?
  • Are there any ethical issues involved in doing mass aggregation and analysis of health data?

Quadrant 3:

  • What do we want to learn?
  • Can technology help us uncover unanticipated discoveries?

Quadrant 4:

  • How will this knowledge translate to support systems?
  • Can we trust what we do?
  • Is it ethical to transfer knowledge “in progress” to computerized systems?
  • How many cycles are required before physicians move from trust to reliance on technology?

As always, your comments are welcomed!

-Rod Piechowski

Copyright © 2010, Rod Piechowski, Inc., Consulting


Clinical Support Beyond the Interface

If one of the stated benefits of information technology is to support the physician’s decision-making process, how will the physician’s role evolve as we increasingly come to rely on technology for support? This question assumes that we will continue to use technology to support, not replace, the physician (or nurses and other clinicians, for that matter). Until we have managed to capture and represent all current medical knowledge through technology, the physician is likely to remain the primary observer; technology cannot yet take histories or connect the important dots within a complex personal narrative. Meanwhile, there will be new challenges that must be understood and managed; for example:

  • We must go deeper than the clinician-machine interface, and examine whether the data we currently collect is a capable foundation for a future of sophisticated analysis and support;
  • The concept of “interface” must evolve beyond the presentation of information and related input/response mechanisms to include clinical technology’s apparent cognitive process, so that it more closely complements clinical decision making;
  • Clinicians must take leadership roles in analyzing the current interface and describing what works and what doesn’t work;

Obviously, the art of medicine will increasingly include the ability to accurately describe the patient narrative to a technical support system in a format that can be processed not only for immediate patient benefit, but also collected and analyzed to build medical knowledge and, eventually, better quality of care for all. Physicians, nurses, and other clinicians should be at the center of this movement.

Some questions:

  1. If you are a provider, what do current systems get right?
  2. What do they miss?
  3. In what ways do current EHR systems compromise your ability to provide quality care?

-Rod Piechowski

Copyright © 2010, Rod Piechowski, Inc., Consulting
