Embedded Knowledge: Scanner Opportunity

Several patients who received CT brain perfusion scans at some US hospitals appear to have been exposed to radiation overdoses. The New York Times followed up on the story, which revealed a couple of interesting points. While the scanners have built-in features that allow lower doses of radiation, in some circumstances the machines actually increase the dose. Oddly, the machines in question do not warn clinicians when the amount of radiation exceeds a safe threshold. Here, technology is used as a valued clinical tool, but it remains isolated from important facts we already know about its use. This lack of “self-awareness” on the part of the technology reinforces the role of people, culture, and process in the data life cycle. For example:

  • With information technology, scanners (and other pieces of equipment) could record, or make available to the electronic health record, the machine identity and the dose administered, so that the event can be linked to the patient for later analysis. This is as much a patient safety issue as it is one of quality reporting. It should be a simple matter to generate a list of all patients who have been treated using a particular piece of equipment. Part of the problem lies in the creation of medical device identifiers and standardized protocols for communicating among devices. This work is underway, but still not completely implemented;
  • Scanners could be made aware of current guidelines for radiation dosages, and they should not operate above those levels unless deliberately overridden by the clinician. We already use information technology to check for drug interactions, and this is the same principle (a rough sketch of both of these ideas follows this list);
  • Guidelines should be standardized in structure and application, to ensure that there is no variance from vendor to vendor;
  • Physicians must actively participate in developing the data life cycle (from data to knowledge). If they are responsible for entering data, they have every right to expect that it is contributing to the development of better knowledge systems, which should continue to improve with each vendor update.
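
To make the first two points concrete, here is a minimal sketch in Python of what a dose-event record and a guideline check might look like. The field names, device and patient identifiers, and the dose limit are all invented for illustration; real limits come from physicists, regulators, and vendors, not from a constant in code.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical guideline limit, for illustration only.
MAX_PERFUSION_DOSE_MGY = 500.0

@dataclass
class DoseEvent:
    """One administered dose, linked to both the device and the patient."""
    device_id: str        # unique device identifier
    patient_id: str       # link to the patient's record in the EHR
    dose_mgy: float       # dose delivered, in milligray
    recorded_at: datetime

def dose_permitted(event: DoseEvent, override_by: Optional[str] = None) -> bool:
    """Allow the scan only if the dose is within guidelines, or if a named
    clinician deliberately overrides the warning (and the override is visible)."""
    if event.dose_mgy <= MAX_PERFUSION_DOSE_MGY:
        return True
    if override_by is not None:
        print(f"WARNING: {event.dose_mgy} mGy exceeds guideline "
              f"{MAX_PERFUSION_DOSE_MGY} mGy; overridden by {override_by}")
        return True
    print(f"BLOCKED: {event.dose_mgy} mGy exceeds guideline {MAX_PERFUSION_DOSE_MGY} mGy")
    return False

events = [
    DoseEvent("scanner-07", "patient-1234", 620.0, datetime.now(timezone.utc)),
    DoseEvent("scanner-07", "patient-5678", 310.0, datetime.now(timezone.utc)),
]

dose_permitted(events[0])                          # blocked
dose_permitted(events[0], override_by="dr-smith")  # allowed, but flagged

# Once events are recorded in structured, linked form, listing every patient
# treated on a particular machine becomes a one-line query:
treated_on_07 = [e.patient_id for e in events if e.device_id == "scanner-07"]
```

The point is not the code itself, but that once dose events are captured in a structured, linked form, a question like “which patients were scanned on this machine?” becomes a trivial query rather than an investigation.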

Technology performs as it is programmed to do. We cannot blame it for shortcomings that are the result of muddy processes. Figuring out how to do this properly involves cooperation among vendors, clinicians, the government, and many others. This is an example of knowing better (we know what constitutes a radiation overdose) but not yet having the processes in place to embed that knowledge into our clinical tools and the information systems that we will increasingly use to connect them. We must create processes that allow us to more fully trust the technology now, so that we can come to rely upon it, and build upon its role in the future.

The full NYT story is here.

-Rod Piechowski

Copyright © 2010, Rod Piechowski, Inc., Consulting

The Data Life Cycle

This year and most of next will likely be spent figuring out how to get EHR systems installed in hospitals and physician offices. Most of the organizations making this transition will be doing so in reaction to the federal government’s incentives for adoption and penalties for failing to adopt. But there is a much bigger, long-range picture that should provide a useful framework for healthcare leaders. The meaningful use rules require providers to report quality measures to various agencies for analysis. Some of this data, especially in the beginning, will simply be used to determine whether the systems are being used at all, in order to manage incentive payments. Other data, some not even conceived of yet, can be used to improve what we know about medicine and to develop best practices for prevention and treatment. This diagram shows one way to look at the process.

From "data" to "knowledge"

Today, at least in the US, we’re in the first quadrant, in the lower left corner. Here, we are just entering the first round of data collection on a national level. In the next quadrant, data will be aggregated and studied for various purposes. Early on, the type of data collected will help determine compliance with meaningful use and other quality reporting initiatives. In quadrants 1 and 2, it’s still data, or at best “information”; quadrants 3 and 4 represent the process of learning from that information and then finding ways to transfer that knowledge into support systems that can improve point-of-care quality and value for both the patient and the healthcare system.
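
As a concrete, if toy, illustration of the move from quadrant 1 to quadrant 2, here is a short Python sketch in which individual quality-measure submissions are pooled into summary rates. The provider names, measure names, and counts are all invented; everything downstream in quadrants 3 and 4 depends on this kind of aggregation being done consistently.

```python
from collections import defaultdict

# Hypothetical quadrant-1 submissions: one record per provider per measure.
submissions = [
    {"provider": "Hospital A", "measure": "med_reconciliation", "numerator": 180, "denominator": 200},
    {"provider": "Hospital B", "measure": "med_reconciliation", "numerator": 150, "denominator": 210},
    {"provider": "Hospital A", "measure": "smoking_cessation_advice", "numerator": 90, "denominator": 120},
]

# Quadrant-2 aggregation: pool the raw counts and compute a rate per measure.
totals = defaultdict(lambda: {"numerator": 0, "denominator": 0})
for s in submissions:
    totals[s["measure"]]["numerator"] += s["numerator"]
    totals[s["measure"]]["denominator"] += s["denominator"]

for measure, t in totals.items():
    rate = t["numerator"] / t["denominator"]
    print(f"{measure}: {rate:.1%} across {t['denominator']} eligible patients")
```

The hard parts, of course, are not the arithmetic but deciding which measures matter, collecting them consistently, and agreeing on who does the pooling, which is exactly what the questions below are about.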

The interesting questions that must be addressed in order to accomplish this are at the heart of “The Art of Medicine and Technology,” and they are what we want to explore on this site. Some questions arise at each point along the cycle:

Quadrant 1:

  • What kind of data must we collect at this point in order to add value in Quadrant 4?
  • Are we collecting it now?
  • Will it require re-engineering systems?

Quadrant 2:

  • Who does this work?
  • Who has access to the data?
  • How do we ensure that the analysis is unbiased and adds to the art of medicine?
  • Are there any ethical issues involved in doing mass aggregation and analysis of health data?

Quadrant 3:

  • What do we want to learn?
  • Can technology help us uncover unanticipated discoveries?

Quadrant 4:

  • How will this knowledge translate to support systems?
  • Can we trust what we do?
  • Is it ethical to transfer knowledge “in progress” to computerized systems?
  • How many cycles are required before physicians move from trust to reliance on technology?

As always, your comments are welcome!

-Rod Piechowski

Copyright © 2010, Rod Piechowski, Inc., Consulting

Some Thoughts on Privacy

Imagine that the following is true:

  1. There is no social stigma attached to any disease or health condition. You have what you have, and no one cares but you and your physician.
  2. You are not denied coverage, care, or a job because you have any particular disease or health condition.

How would that change things?

For one thing, there would be much less incentive to know things about each other in order to benefit from that knowledge. Our efforts to ensure privacy would focus on personal respect and professional behavior. Today, however, we make and enforce rules and regulations (HIPAA, breach notification, etc.) and we create technology (encryption, disclosure tracking and reporting, etc.) to reduce the risks. Despite huge expenditures to reduce that risk, we’re learning that it’s not so easy. In fact, the HITECH legislation passed last year in the US makes an interesting assumption: by including breach notification requirements, legislators assume that breaches of protected health information (PHI) will continue. And how can they not?
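
Disclosure tracking, at least, is conceptually simple even when real implementations are not: every time PHI leaves the organization, the event is written to an accounting log that can later be reported back to the patient. A minimal sketch, with hypothetical file and field names:

```python
import json
from datetime import datetime, timezone

DISCLOSURE_LOG = "disclosures.jsonl"  # hypothetical append-only accounting log

def record_disclosure(patient_id: str, recipient: str, purpose: str) -> None:
    """Append one disclosure event so it can be reported to the patient later."""
    entry = {
        "patient_id": patient_id,
        "recipient": recipient,
        "purpose": purpose,
        "disclosed_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(DISCLOSURE_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")

def disclosures_for(patient_id: str) -> list:
    """Everything disclosed about one patient, for an accounting-of-disclosures request."""
    results = []
    with open(DISCLOSURE_LOG) as log:
        for line in log:
            entry = json.loads(line)
            if entry["patient_id"] == patient_id:
                results.append(entry)
    return results

record_disclosure("patient-1234", "Acme Billing Services", "payment")
print(disclosures_for("patient-1234"))
```

The code is the easy part; as the examples below show, the failures tend to happen in the processes wrapped around pieces like this.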

Just last week there were two stories of health information falling outside the control of providers. Wellpoint’s website glitch revealed information on thousands of customers to anyone who knew how to modify the URL; meanwhile, FedEx apparently lost a package destined for Lincoln Medical and Mental Health Center in New York. The package contained CDs filled with PHI from one of the hospital’s vendors. In the Wellpoint case, it is believed that a third-party vendor responsible for system upgrades and security did not make all of the necessary changes during an upgrade. In both instances, the breach occurred not because of a failure of technology, but because of a failure in the way it was integrated into the organizational information flow and related processes. The reality is that people make mistakes. As systems evolve and become ever more complex, the quality control process is more likely to include lots of crossed fingers.
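
The Wellpoint flaw is a textbook example of trusting a record identifier in a URL without checking that the requester is entitled to that record. The sketch below is not Wellpoint’s system; the names and data are entirely hypothetical, and it only illustrates the general class of server-side check whose absence makes URL tampering profitable.

```python
# Entirely hypothetical data: which member records each logged-in user may view.
AUTHORIZED_RECORDS = {
    "user-alice": {"rec-1001"},
    "user-bob": {"rec-1002", "rec-1003"},
}

MEMBER_RECORDS = {
    "rec-1001": {"name": "Alice", "plan": "PPO"},
    "rec-1002": {"name": "Bob", "plan": "HMO"},
}

def get_member_record(requesting_user: str, record_id: str) -> dict:
    """Serve a record only if this user is actually entitled to see it.

    Without this server-side check, anyone who can guess or edit the
    record_id in a URL can read someone else's data."""
    if record_id not in AUTHORIZED_RECORDS.get(requesting_user, set()):
        raise PermissionError(f"{requesting_user} may not view {record_id}")
    return MEMBER_RECORDS[record_id]

print(get_member_record("user-alice", "rec-1001"))   # allowed
try:
    get_member_record("user-alice", "rec-1002")      # someone else's record
except PermissionError as err:
    print(err)
```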

Again, this is part of the larger question of just how we manage the process of coming first to trust, and then to rely upon, our technology solutions. As of today, there are 107 posted notifications of breaches involving 500 or more individuals on the US Department of Health and Human Services (HHS) website, and that is just between late September 2009 and today. This is going to be a huge challenge. It will be interesting to see whether we continue to settle for breach notification and free credit monitoring, or whether we solve this another way.

While I don’t expect that the scenario imagined above could come about without massive cultural changes, it is a different way to look at the problem. And even if we could change numbers 1 and 2 above, there is a third piece that would require change as well: as long as identity theft is accepted as a cost of doing business, it remains a profitable enterprise. And that means someone will be looking for our data.

-Rod Piechowski

Copyright © 2010, Rod Piechowski, Inc., Consulting

From Trust to Reliance

Yesterday I noted how deeply the concept of trust is intertwined with everything we do in healthcare. It creates a web of interdependent individuals, organizations, and systems, each of which must trust that the others are effectively carrying out their roles in the overall delivery of healthcare. Now the Leapfrog Group has issued a report summarizing the results of its computerized provider order entry (CPOE) evaluation tool. The data, gathered between 2008 and 2010, will be of interest to anyone wondering whether CPOE should be included in the first stage of meaningful use. According to the report, over 200 hospitals tested their CPOE systems to see if they caught common medication errors, including those that could be fatal. On average, the systems missed about half of the routine erroneous orders and a third of the potentially fatal ones.
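
By way of illustration only, here is the kind of check a CPOE system is expected to perform on a new order, sketched in Python with an invented interaction table and a made-up dose limit; the Leapfrog results suggest that many deployed systems fail tests not much more demanding than this.

```python
# Invented reference data for illustration; real systems rely on large,
# curated drug-knowledge databases, not hard-coded tables.
MAX_DAILY_DOSE_MG = {"warfarin": 10}
INTERACTIONS = {frozenset({"warfarin", "aspirin"}): "increased bleeding risk"}

def check_order(drug: str, daily_dose_mg: float, active_meds: list) -> list:
    """Return a list of alerts for a proposed medication order."""
    alerts = []
    limit = MAX_DAILY_DOSE_MG.get(drug)
    if limit is not None and daily_dose_mg > limit:
        alerts.append(f"{drug}: {daily_dose_mg} mg/day exceeds maximum of {limit} mg/day")
    for med in active_meds:
        reason = INTERACTIONS.get(frozenset({drug, med}))
        if reason:
            alerts.append(f"{drug} + {med}: {reason}")
    return alerts

print(check_order("warfarin", 15, ["aspirin"]))
# ['warfarin: 15 mg/day exceeds maximum of 10 mg/day',
#  'warfarin + aspirin: increased bleeding risk']
```

In broad terms, an evaluation like Leapfrog’s feeds known-problematic test orders into the hospital’s own CPOE configuration and counts how many come back without an alert.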

Leapfrog recommends testing and monitoring for all technology adoption, and more transparent sharing of best practices. “Competition is healthy, but in the case of IT adoption, collaboration is far better,” the group notes. The group further recommends broader implementation of CPOE, which, when set up correctly, can reduce medication errors. Fair enough. And now some questions:

  • How do we determine the best practice? Are there “flavors” of best practice for different demographics?
  • How can that best practice be represented and supported by our technologies in an efficient way?
  • How can we keep it current?
  • How do we begin moving from “having CPOE is good” to “having good CPOE is better”?
  • How can we integrate and begin to depend upon technology that is still evolving while maintaining quality in a high-risk, high-stakes environment?
  • Is it ethical to use technology to support patient encounters when we know the technology is still in its infancy?

The last two questions are the most interesting, and probably the most challenging. On one hand, we are trying to create technical support systems that can reduce risk and alleviate some of the burden by transferring some of our knowledge responsibilities to the technology. On the other hand, we add to the risk and the burden because we must ensure that this technology mentoring process does not result in lower-quality care, or in deaths. In short, how do we know when it’s okay to begin trusting our systems? And coming to trust our technology is not the same as coming to rely on it. Trust is a forerunner to reliance, and that holds true for the patient–physician relationship too.

-Rod Piechowski

Copyright © 2010, Rod Piechowski, Inc., Consulting
