Friday, July 18, 2014

The Learning Healthcare System

The ONC has proposed a ten-year vision[1] for interoperability in healthcare information technology that divides this time into three periods. Years 1-3 are devoted to achieving technical interoperability & sharing of healthcare information, while years 4-6 focus on using this shared information to improve quality & lower cost. Years 7-10 are labeled the “learning health system” & described as “Individuals, care providers, public health (officials) and researchers contribute information and learn from information shared across the health IT ecosystem, with rapid advancement in methods for deriving meaning from data without sharing PHI.”[2] What “learn” might mean in this context is an interesting question… First, let’s look at what the ONC appears to mean by it, & then we’ll look more broadly.

The ONC lists a number of characteristics of the healthcare system during this timeframe (2021-2024) in no particular order:
  1. Enhanced healthcare information contribution & sharing across clinical (provider & patient), public health & research areas
  2. More functional technical tools available to apply to this data; search & visualization are examples
  3. General availability of “patient-centered” outcomes research results
  4. Continuous learning through predictive & retrospective analysis of aggregated data
  5. Availability of patient-specific clinical decision support taking into account the patient’s genetic profile, clinical history, local public health trends & relevant socio-cultural trends (social determinants)
  6. Improved public health surveillance integrated with point-of-care decision support

This is actually quite a good list, but much of it is either already available or will be available in the near future (18-24 months). Let’s review where (I think) we are with this list, & then explore some of the possibilities for a learning healthcare system in the ONC’s 10-year timeframe. The issue is, as William Gibson famously observed, that “The future is already here, it’s just not evenly distributed.”[3]
  •  Point 1 – Enhanced information contribution & sharing across different healthcare contexts is what the ONC’s years 1-3 are about. A high level of data interoperability will allow contribution of information from a variety of sources, including patients, for a variety of purposes. Interoperability is a frustrating issue today, as many vendors can’t effectively share data across their own product lines. Hopefully this will change in the next three years. We have achieved high levels of data interoperability in other industries, & notwithstanding that many people working in healthcare believe its data to be substantially more complex & sensitive to error than data in, say… banking or aerospace design, I think that with a pragmatic approach (not just to standards & certification, but also to vendor architecture, API development & in-the-field data sharing) we can achieve appropriate levels of interoperability in this timeframe.

  •  Point 2 – We have many advanced functional tools available today; we’re just not using them. This is often because HIT vendors are loath to integrate their systems (practice management, EHR, lab reporting etc.) with external tools, preferring to develop tools themselves. This doesn’t always work, for several reasons: the vendor may not have the necessary skill &/or resources to develop such tools, the vendor’s business model may not include such development, or the vendor may have allocated this development to partners who have their own agenda & business model(s), among many other factors. There is another, much larger issue: most of these HIT systems are architected on an enterprise model that is not as scalable or flexible as contemporary designs. As HIT products migrate to contemporary infrastructure (Hadoop, NoSQL etc.), interoperability & integration will become possible at larger & larger scale.

  •  Point 3 – The American Health Information Management Association provides a good introduction to the variety of healthcare research data already available[4]. In addition, the Agency for Healthcare Research & Quality (AHRQ, HHS) & the Healthcare Information & Management Systems Society (HIMSS) both make a good deal of data available. The Centers for Medicare & Medicaid Services has also recently made a substantial amount of claims & provider payment data available[5]. This trend will continue, especially as large healthcare organizations begin making public the results of analyses of ultra-large data sets (see immediately below).

  •  Points 4-5 – These points are linked, especially at the point-of-care. Continuous learning, in this context, is the ability to develop new knowledge & strategies for using that knowledge based on an understanding of current & previous results & information. Many systems currently perform retrospective (& in some cases predictive) analysis of large amounts of healthcare data to determine patterns in both clinical & operational areas for healthcare organizations (Point 4). When this type of analysis is done based on specific patient characteristics at the point-of-care, diagnosis & treatment planning can be based on the empirical data, & learning is brought forward with each analysis (Point 5; a code sketch of this pattern follows this list). Examples include:

o   Mayo Clinic - AWARE “bedside consulting” system (5M patient records over 15 years)
o   Beth Israel Deaconess Medical Center (Boston) – Clinical Query system (2.2M patient records)
o   Kaiser Permanente – Natural language query system (9.1M patient records over 10 years)
o   Partners Healthcare (MA) – Queriable Patient Inference Dossier (QPID)
o   IBM/Wellpoint – “Dr. Watson”, a deep-understanding system applied to healthcare information (cancer diagnosis)
  •  Point 6 – Systems today use analysis of regional to hyperlocal trends in disease patterns to characterize the public health context of specific locations. These analytic results can be combined with point-of-care recommendation systems to improve diagnosis & treatment. An example would be Google Flu Trends, although there are many apps, such as Healthify[6], that provide hyperlocal services recommendations based on EHR encounter information.
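Returning to Points 4-5: none of the systems listed above publishes its internals, but the basic pattern they share can be sketched. Below is a minimal Python sketch of similarity-based point-of-care recommendation; all names, features & records in it are hypothetical, & production systems of course draw on millions of records with far richer feature sets.

```python
# A minimal sketch of similarity-based point-of-care recommendation.
# All names, features & records here are hypothetical illustrations.
from collections import Counter
from math import sqrt

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recommend(patient, records, k=100):
    """Rank treatments by good outcomes among the k most similar patients."""
    nearest = sorted(records, key=lambda r: distance(patient, r["features"]))[:k]
    outcomes = Counter(r["treatment"] for r in nearest if r["outcome"] == "improved")
    return outcomes.most_common()

# Hypothetical usage: features might encode age, labs, diagnoses &c.
history = [
    {"features": [67, 7.9, 1], "treatment": "drug_a", "outcome": "improved"},
    {"features": [64, 8.1, 1], "treatment": "drug_b", "outcome": "unchanged"},
    {"features": [71, 7.6, 1], "treatment": "drug_a", "outcome": "improved"},
]
print(recommend([65, 8.0, 1], history, k=3))   # -> [('drug_a', 2)]
```

Note what the sketch does not contain: no domain-specific medical rules appear anywhere. The clinical knowledge lives entirely in the accumulated records (more on this in the aside below).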

To summarize: current & near-future HIT systems can provide appropriate levels of data interoperability. New architectures & tools are already making HIT, & the analysis of HIT data, much more scalable & performant. Large amounts of research data are already available, even to patients & consumers. Ultra-large-scale pattern matching in healthcare data sets can provide the basis for continuous learning by both systems & their human users, & this learning is already being applied to point-of-care recommendation systems that draw from millions of patient records. Finally, current & near-future HIT systems are reporting large amounts of public health data, which is being analyzed to provide better understanding of large-scale health phenomena & will eventually be integrated with point-of-care recommendation systems.

OK – so what isn’t being done? & what could be done? One major thing: the more information you include in these analyses, the better the results are, so a broader range of inputs should be included. Such information streams as public social media, data on social determinants, even online & conventional shopping data can be important in understanding a person’s health profile. A recent story in Bloomberg Businessweek[7] described the use of credit card purchasing data to supplement providers’ information about patient behavior – are you actually picking up your prescriptions, buying a lot of junk food, shopping at Big & Tall, etc.? Marketers use this kind of data routinely in other industries, so why not in healthcare[8]? There is an almost infinite number of information sources that could be used productively, once the sociocultural issues are understood & ameliorated.
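A hedged sketch of the Bloomberg-style idea, assuming consented access to purchase data; the data shapes & the matching rule are entirely invented (real record linkage is far messier, & the privacy questions come first):

```python
# Hypothetical sketch: flag prescriptions with no matching pharmacy
# purchase within a window. Data shapes & matching rule are invented.

def possible_nonadherence(prescriptions, purchases, window_days=30):
    """Return drugs with no pharmacy purchase in the window after prescribing."""
    flags = []
    for rx in prescriptions:
        filled = any(p["category"] == "pharmacy"
                     and rx["drug"].lower() in p["description"].lower()
                     and 0 <= (p["day"] - rx["day"]) <= window_days
                     for p in purchases)
        if not filled:
            flags.append(rx["drug"])
    return flags

rx = [{"drug": "Lisinopril", "day": 100}]
buys = [{"category": "grocery", "description": "snack foods", "day": 105}]
print(possible_nonadherence(rx, buys))   # -> ['Lisinopril']
```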

There are also new kinds of analysis being developed. An example would be work at Oxford University[9], where an algorithm analyzes ordinary photographs & can predict genetic anomalies & diseases. Hundreds of such new uses of information are being developed & will be available (& more evenly distributed) in the near future.

But what about learning, I hear you say… A recent issue of Health Affairs was devoted to the theme of “big data”. One of the articles reviewed work on a learning health system, talked about impediments & made some predictions[10]. This work used the following definition of a rapid learning healthcare system: “a health system that learns as quickly as possible about the best treatment for each patient—and delivers it. This kind of system draws on a much faster knowledge production process: from discovery science, to new therapies and clinical science that can inform personalized medical care, to better-informed physicians and patients.” This idea of a rapid learning health system was first proposed in 2007[11], & the Rapid Learning Project & others have done a good deal of work since, mostly workshops & policy papers. As we have seen, however, this vision of deep analytics applied at the point-of-care to the diagnosis & treatment of individual patients is already in place in a number of settings. This is a lot, but a learning healthcare system has to be more than this.

As already stated, learning can be thought of as “the ability to develop new knowledge & strategies for using that knowledge based on an understanding of current & previous results & information”. This ability is continuous & ongoing. The implication for a healthcare system is that whenever an actor (provider, patient, caregiver &c.) is using part of the system, the system is monitoring the user’s context (usage) & anticipating what information & analysis may be relevant. The system may then give the user the opportunity to request this information, which can include diagnosis & treatment suggestions, data on treatment, analysis of alternatives, public health implications, information & recommendations on amelioration of social determinants & many other possibilities. In order to do this, the system would have to have access to a great many data sources, as well as deep understanding, hypothesis-testing & recommendation capability (in the Dr. Watson mode) & an interface that allowed substantial interaction with the user in a manner that was non-threatening & productive. In addition, the system would serve as an information source & liaison for public health & social systems, as well as for healthcare systems at other organizations (that the user might be associated with). It might communicate with the user through a variety of devices & in a context (app or portal) that the user was used to. We’re obviously not there yet.
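We can, however, sketch the shape of the interaction loop such a system might run. Everything below is hypothetical: the class names & the notion of a “context” stand in for deep-understanding, hypothesis-testing & recommendation machinery that does not yet exist in integrated form.

```python
# A hypothetical sketch of the interaction loop described above: the
# system watches the user's context, anticipates relevant analyses &
# offers them, running only what the user actually requests.

class Analyzer:
    """One capability: decide whether it's relevant, & run on demand."""
    def __init__(self, name, is_relevant, run):
        self.name, self.is_relevant, self.run = name, is_relevant, run

class LearningAssistant:
    def __init__(self, analyzers):
        self.analyzers = analyzers

    def anticipate(self, context):
        """List the analyses that may be relevant to the current context."""
        return [a for a in self.analyzers if a.is_relevant(context)]

    def interact(self, context, accepted_by_user):
        """Offer, never interrupt: run only what the user accepts."""
        return {a.name: a.run(context)
                for a in self.anticipate(context)
                if accepted_by_user(a.name)}

# Hypothetical usage: while a provider reviews an elevated A1c, the
# system quietly queues diabetes-relevant analyses for them to request.
assistant = LearningAssistant([
    Analyzer("treatment_alternatives", lambda ctx: "a1c" in ctx,
             lambda ctx: "comparative outcomes for similar patients..."),
    Analyzer("local_flu_activity", lambda ctx: "fever" in ctx,
             lambda ctx: "hyperlocal surveillance summary..."),
])
print(assistant.interact({"a1c": 8.0}, accepted_by_user=lambda name: True))
```

The design point is the offer-don’t-interrupt discipline: the hard problems (what counts as relevant, how to test hypotheses against many data sources) are exactly the ones still unsolved.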

Can we get there? I believe that we can, but we have to focus. The we, here, is not only the producers of software & systems, but providers, patients, caregivers & healthcare organizations (if corporations can be people, so can healthcare organizations)[12].  Here is my (partial) list of what’s important:
  •  Facilitate real interoperability for healthcare systems – The development & adoption of standards does not automatically convey interoperability[13]. A lot of really hard work has to be done to ensure that even standard documents (like C-CDA) can be assimilated by multiple systems & that the data, once imported, make sense (see the sketch following this list). This could easily take more than the three years the ONC has allowed.
  •  Develop learning in the healthcare context – Learning is not just analyzing ultra-large information “lakes” to do pattern matching & make diagnosis & treatment recommendations. It is creating new knowledge & new strategies for developing & using knowledge. In this sense, it is more like IBM’s Watson, which attempts a semantic understanding of material & then forms & tests hypotheses to answer questions about that material, than it is like most of the point-of-care recommendation systems currently in use or under development. These systems do some form of pattern matching against an initial set of data about a patient, & if their information source is large enough, may discern patterns that can be translated into recommendations with a very high “probability” of relevance (if not correctness, based on the analyzed data).
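To make the first bullet concrete: even “just” reading a standard C-CDA document takes real work. The sketch below pulls medication entries out of a C-CDA using Python’s standard XML library. The HL7 namespace & the LOINC code for the medication section come from the standard itself; everything a real importer has to add (unit normalization, code-system mapping, narrative-reference resolution) is exactly where assimilation breaks down.

```python
# A sketch of extracting medications from a C-CDA document using only
# the standard library. The namespace & section code are per the HL7
# C-CDA standard; the hard part (normalizing what comes back) is omitted.
import xml.etree.ElementTree as ET

NS = {"hl7": "urn:hl7-org:v3"}
MED_SECTION = "10160-0"   # LOINC code for the medications section

def medications(path):
    meds = []
    for section in ET.parse(path).getroot().iter("{urn:hl7-org:v3}section"):
        code = section.find("hl7:code", NS)
        if code is None or code.get("code") != MED_SECTION:
            continue
        for sa in section.iter("{urn:hl7-org:v3}substanceAdministration"):
            material = sa.find(".//hl7:manufacturedMaterial/hl7:code", NS)
            dose = sa.find("hl7:doseQuantity", NS)
            meds.append({
                "name": material.get("displayName") if material is not None else None,
                "dose": dose.get("value") if dose is not None else None,
                "unit": dose.get("unit") if dose is not None else None,
            })
    return meds

# Every field above is a place where real-world exchange goes wrong:
# wrong names, wrong amounts, wrong or non-standard units.
```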

An aside is relevant here. The current point-of-care systems we are talking about (the pattern-matching systems just described) are not conventional rule-based systems. They do not have domain-specific heuristics about cancer diagnosis & therapy (as an example). The heuristics that they have are about semantic normalization, general pattern matching, visualization etc. They operate by taking input on a patient’s condition & comparing that to (potentially) millions of patient records to determine what the most effective diagnoses & treatment plans have been for those specific inputs. Earlier “expert” systems operated quite differently, taking the input on patient condition & executing a set (sometimes tens of thousands) of domain-specific rules. These systems often had relatively high percentages of effectiveness: Mycin[14], an expert system (with approximately 600 rules) that made recommendations for treatment of bacterial infections, developed at Stanford University in the 1970s, had an effectiveness of 69%, which was higher than that of the medical experts surveyed. Current point-of-care systems have an effectiveness of (close to) 100% relative to their information base. This is a quite different kind of effectiveness than that of a rule-based system (& discussion of the causes of this difference is beyond the scope of this current blog).
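For contrast with the similarity sketch earlier, here is the rule-based style in miniature. The rules & certainty factors below are invented for illustration & carry no clinical meaning; Mycin’s actual machinery (backward chaining, certainty-factor combination) was considerably richer.

```python
# A miniature of the Mycin-style approach: hand-built, domain-specific
# if-then rules, each with a certainty factor (CF). All rules here are
# invented for illustration & carry no clinical meaning.

RULES = [
    # (required findings, conclusion, certainty factor)
    ({"gram_negative", "rod_shaped", "anaerobic"}, "bacteroides", 0.6),
    ({"gram_positive", "chains"}, "streptococcus", 0.7),
]

def conclude(findings):
    """Fire every rule whose conditions all hold; rank conclusions by CF."""
    fired = [(conclusion, cf) for conditions, conclusion, cf in RULES
             if conditions <= findings]        # subset test
    return sorted(fired, key=lambda x: -x[1])

print(conclude({"gram_positive", "chains", "aerobic"}))
# -> [('streptococcus', 0.7)]
```

In the rule-based style the knowledge lives in rules hand-built by experts; in the pattern-matching style it lives in the records. That difference is the root of the two quite different kinds of “effectiveness” just discussed.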

Learning in healthcare systems won’t come about by itself. It will have to be facilitated by government-public-private partnerships & specifically funded. Real prototypes & production systems will have to be subsidized & deployed for testing & feedback. A project similar to the one that produced the NwHIN (originally NHIN) needs to be planned & quickly started, so that working groups can begin describing the functionality of healthcare learning & companies can be selected to begin prototyping. Standards will be less important initially in this effort than they were in NHIN development; innovation will matter more. The companies selected should not just include the usual suspects (IBM, Google, Microsoft, etc.), although they are important, but should also include some smaller organizations with different ideas that may (or may not) be layered on the infrastructure provided by their larger brethren.

A learning healthcare system is a great goal, but it won’t happen without a lot of support (funding) & leadership. Let’s start now.



[1] Connecting Health and Care for the Nation: A 10-Year Vision to Achieve an Interoperable Health IT Infrastructure. ONC. June 2014. http://healthit.gov/sites/default/files/ONC10yearInteroperabilityConceptPaper.pdf, accessed 25 June 2014.
[2] ONC. 2014. P.8
[3] William Gibson, interview in The Economist, 4 December 2003.
[4] http://library.ahima.org/xpedio/groups/public/documents/ahima/bok1_050345.hcsp?dDocName=bok1_050345
[5] http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Provider-Charge-Data/
[6] https://www.healthify.us/en
[7] http://www.businessweek.com/articles/2014-07-03/hospitals-are-mining-patients-credit-card-data-to-predict-who-will-get-sick?utm_content=buffer7874a&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
[8] Privacy issues are the first reason to think twice about it, but we have already ceded our privacy when Amazon or Google makes purchasing suggestions for us.
[9] Ferry, Q. et al. 2014. Diagnostically relevant facial gestalt information from ordinary photos. eLife 3:e02020. http://elifesciences.org/content/3/e02020
[10] Etheredge, L.M. 2014. Rapid Learning: A Breakthrough Agenda. Health Affairs 33(7):1155-1162. July 2014.
[11] Etheredge, L.M. 2007. A rapid-learning health system. Health Affairs (Millwood) 26(2):w107-18. DOI: 10.1377/hlthaff.26.2.w107.
[12] The doctrine of corporations as people has been established in the U.S. as early as 1819 (Dartmouth College v. Woodward, 17 U.S. 518 (1819)) & as recently as Burwell v. Hobby Lobby (573 U.S. ___ (2014)).
[13] As I stated in my last post, during the development of interoperability standards for CORBA, a rep from one of the other vendors (I was representing the Digital Equipment Corporation) told me he would be compliant if my system sent his system a message & his system sent my system back an error message!
[14] http://en.wikipedia.org/wiki/Mycin

Thursday, July 3, 2014

The ONC Interoperability Vision: An Opinion

The Office of the National Coordinator for Health Information Technology recently issued a “10-year vision” paper on interoperability in the HIT infrastructure[1]. The 10 years are broken up into three different time periods:
  •  3-year agenda: Send, receive, find & use health information to improve health care quality

This set of goals is the primary interoperability functionality proposed by the ONC & is focused on the development of “an interoperability roadmap” as articulated in the HHS Principles and Strategy for Accelerating Health Information Exchange[2]. This second document emphasizes several tactics for accelerating the use of HIE, including: use of DIRECT & development of appropriate Stage 2 & Stage 3 Meaningful Use criteria, developing certification for HIE interoperability & a focus on security & privacy. This is all very well, but there are a number of issues I’d like to point out. First, the emphasis appears to be on providing “interoperability” through HIE. The fact is that HIEs are having a hard time developing sustainable business models (regulatory compliance for interoperability is probably not a sustainable model, especially with no Federal money available for it), & most of them are having trouble exchanging anything but the simplest data. Interoperability is usually provided through standardization of APIs (or other exchange mechanisms) across application boundaries. How does HIE-based interoperability work where healthcare organizations do not participate in an HIE & their EHRs are not interoperable? It seems that a broader approach may be necessary.

Second, my personal experience with this approach to interoperability spans about 25 years of system development & includes participation in many standards efforts. Perhaps my most telling experience of defining standards-based interoperability came when I was Digital Equipment’s representative to the OMG effort to define interoperability among (CORBA-based) object systems. The representative from Sun Microsystems, with whom I had been arguing for about 3 months, finally proposed that interoperability would be served if my system sent his system a message & his system sent my system back an error message. This is where I feel we currently are with healthcare information interoperability.

As an example, a paper recently published (3 July 2014) in the Journal of the American Medical Informatics Association[3] looked at the interoperability of Stage 2 Meaningful Use certified EHRs as the ability to exchange C-CDA documents (a Stage 2 requirement). In 91 cases, a total of 615 mistakes were found, many of which would have affected the quality of care. These included incorrect medication names, incorrect dosage amounts, incorrect dosage units & incomplete references to narrative text, among others.
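The D’Amore findings suggest the kind of post-import sanity checking every receiving system needs. A minimal sketch, with an invented unit whitelist & invented record shapes (real validation would test against UCUM & the full C-CDA templates):

```python
# A sketch of post-import checks for the error categories found in the
# SMART C-CDA study. The unit list & record shapes are illustrative
# placeholders, not real UCUM or C-CDA validation.

KNOWN_UNITS = {"mg", "mL", "ug", "tablet"}   # tiny illustrative subset

def check_medication(med, narrative_ids):
    """Return human-readable problems with one imported medication entry."""
    problems = []
    if not med.get("name"):
        problems.append("missing or unmapped medication name")
    dose = str(med.get("dose") or "")
    if not dose.replace(".", "", 1).isdigit():
        problems.append(f"non-numeric dose: {dose!r}")
    if med.get("unit") not in KNOWN_UNITS:
        problems.append(f"unrecognized dose unit: {med.get('unit')!r}")
    if med.get("text_ref") and med["text_ref"] not in narrative_ids:
        problems.append("dangling reference to narrative text")
    return problems

print(check_medication(
    {"name": "metoprolol", "dose": "50", "unit": "mgg", "text_ref": "#med1"},
    narrative_ids={"#med2"}))
# -> flags the bad unit & the dangling narrative reference
```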

Finally, what about improving health care quality? The ONC’s document states: “we will work with federal and state entities to advance payment, policy, and programmatic levers that encourage use of this information in a manner that supports care delivery reform, improves quality, and lowers costs.” This seems appropriately ambiguous & difficult given the current policy & political environment.
  •  6-year agenda: Use information to improve health care quality & lower cost

The ONC’s description of this time period describes a large variety of data aggregations becoming available for individual providers & healthcare organizations to use for analytics to improve quality & lower cost. I think that this is part of the solution for these goals, an important part, but there are uses of this data beyond developing new quality measurements & payment models, as important as those might be. This past April, I wrote a post entitled Re-engineering Healthcare: The View from Other Industries (4 April 2014, http://posttechnical.blogspot.com/2014/04/reengineering-healthcare-view-from.html). In that post I emphasized that industries such as Auto Manufacturing, Aerospace & Information had used multisource data aggregations as part of a re-engineering effort that focused on the redesign of workflows & work processes & the more efficient & effective use of R&D resources. This work has yet to be done in any realistic way in healthcare (other than the workflow redesign necessary for EHR use, & it’s not clear how efficient & effective that has been with respect to outcome improvement or cost reduction), & it will have to be done if quality & cost goals are to be met.
  •  10-year agenda: The learning health system

The ONC states that: “The evolution of standards, policies, and data infrastructure over the next 10 years will enable more standardized data collection, sharing, and aggregation for patient-centered outcomes research. Continuous learning and improvement will be feasible through analysis of aggregated data from a variety of sources.” This is a topic for another post. I spent about ten years working on issues of advanced reasoning & problem solving at Stanford & The Digital Equipment Corporation[4] & I’ll write (real soon now) on what I think a learning healthcare system would be like.

The ONC’s tactics for achieving this vision consist of five building blocks:
  1. Definition of core technical standards & functions
  2. Certification to support adoption & optimization of HIT products & services
  3. Privacy & security protections for health information
  4. Developing a supportive business, clinical, cultural & regulatory environment
  5. Rules of engagement & governance of HIE


Of these tactics, the first, second & fifth are (IMHO) complicated but feasible to achieve. The question is (as evidenced by the D’Amore paper & lots of HIE data), will the development of standards, certification & ROE for HIE actually improve interoperability? The answer is yes, over time; it’s the time element that’s the problem. 2-3 years of development & acclimatization would be annoying but acceptable (maybe even optimistic). 5-8 years would mean that this approach had not been successful.

Tactic 3’s privacy & security concerns are appropriate & inevitable. Balancing these concerns with the need for shared healthcare data for treatment, operational & research purposes has proven difficult, & I don’t expect it will get easier as these regulations evolve in the next 2-5 years. One thing about this area is that people’s expectations (& their resistance to various commercial & government tactics) are being set by current practice, including the NSA surveillance efforts, security & privacy concerns with social media like Facebook, & new interaction models on the web such as Snapchat. People are very aware of these issues, but developing healthcare systems that function appropriately in these areas will take time (3-5 years).

Finally, Tactic 4 – working toward supportive environments… Listing all of the issues involved in this tactic would take pages & pages. Suffice it to say that this will never be fully aligned with the ONC’s goals (again IMHO), but over time the edges will get chipped off so that interoperability may be possible.

Time seems to be the common theme here. I expect it will take 3-5 years of real effort to get to the point where these tactics bear fruit. The real question: in this political & cultural environment, do we have 3-5 years to evolve to an interoperable, more cost-efficient & clinically effective healthcare system?

Next – What could a “learning healthcare system” look like? & a post on the tension between privacy & usage in healthcare systems.




[1] Connecting Health and Care for the Nation: A 10-Year Vision to Achieve an Interoperable Health IT Infrastructure. ONC. June 2014. http://healthit.gov/sites/default/files/ONC10yearInteroperabilityConceptPaper.pdf, accessed 25 June 2014.
[2] Principles and Strategy for Accelerating Health Information Exchange (HIE). HHS. August 2013.
[3] D’Amore, J.D. et al. 2014. Are Meaningful Use Stage 2 certified EHRs ready for interoperability? Findings from the SMART C-CDA Collaborative. JAMIA. Published online: 0:1-9. doi:10.1136/amiajnl-2014-002883. Accessed 25 July 2014.
[4] cf. Hartzband, D.J., L. Holly, and F.J. Maryanski. 1987. The provision of induction in data-model systems: I. Analogy. International Journal of Approximate Reasoning (IJAR) 1(1):1-17. &
Hartzband, D.J. 1987a. The provision of inductive problem solving and (some) analogic learning in model-based systems. Group for Artificial Intelligence and Learning (GRAIL), Knowledge Systems Laboratory. Stanford University. Stanford, CA, USA. 6/87.