Earlier this summer, I drove down to the Southeast Linux Fest in Spartanburg, South Carolina. One of the talks that stood out to me was given by Heather Holl, bioinformaticist and Slackware Linux team member. She talked about the open source tools she uses most in her work in equine genomics. I was especially impressed at how she used standard, open source Linux command-line tools to get her job done.
Up at 5AM: The 5AM Solutions Blog
Tags: Python, R, open source, genomics tools
A national patient ID is one of the more taboo subjects in United States law. As one writer for the College of American Pathologists reports, “In some federal government offices, all one has to do to stop a conversation cold is mention a national patient identifier. That’s how adamantly Congress, in a 1998 bill, outlawed any plans for, consideration of, and even research on a national system of assigning patient ID numbers.”
In fact, no government money can be spent even investigating the possibility of a national patient identifier, effectively making the law difficult to reverse without a large push from private industry.
If it were created, a national patient identifier would provide a unique number to identify an individual in health systems across the United States. It could be used to pull or push health data for a patient across the country on-demand without complication or fuss.
Without a national patient identifier, organizations face the challenge of mapping patients in their internal systems to patients in systems outside their operating networks. Typically, this involves sending patient names, addresses, phone numbers, Social Security numbers, and other identifying data across secure channels to verify that the systems really are talking about the same person. Mistakes and match refusals are common. Spelling errors and formatting inconsistencies introduced during data entry can prevent systems from recognizing two records as the same person (“Bob” vs. “Robert,” or “Ft. Worth” vs. “Fort Worth”). Policy differences can also prevent successful matching. Some health vendors will, by policy, outright refuse to make a patient correlation without being sent the patient’s Social Security number. Other vendors will refuse to ever send a Social Security number over the Internet, even on a secure channel. With such conflicting policies in place, these systems will never be able to communicate.
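To make the matching problem concrete, here is a minimal sketch of normalization-based record matching of the kind described above. The nickname and abbreviation tables, field names, and matching rules are purely illustrative assumptions for this post, not any vendor's actual implementation; real master patient index software uses far richer reference data and probabilistic scoring.

```python
# Illustrative sketch of normalization-based patient matching.
# The lookup tables below are tiny examples, not real reference sets.

NICKNAMES = {"bob": "robert", "bill": "william", "liz": "elizabeth"}
ABBREVIATIONS = {"ft.": "fort", "st.": "saint", "mt.": "mount"}

def normalize_name(name: str) -> str:
    """Lowercase, trim, and expand a known nickname to its formal form."""
    n = name.strip().lower()
    return NICKNAMES.get(n, n)

def normalize_city(city: str) -> str:
    """Lowercase each word and expand known abbreviations."""
    words = city.strip().lower().split()
    return " ".join(ABBREVIATIONS.get(w, w) for w in words)

def records_match(a: dict, b: dict) -> bool:
    """Match on normalized first name, plus exact last name, DOB, and city."""
    return (
        normalize_name(a["first"]) == normalize_name(b["first"])
        and a["last"].strip().lower() == b["last"].strip().lower()
        and a["dob"] == b["dob"]
        and normalize_city(a["city"]) == normalize_city(b["city"])
    )

rec1 = {"first": "Bob", "last": "Smith", "dob": "1970-01-01", "city": "Ft. Worth"}
rec2 = {"first": "Robert", "last": "Smith", "dob": "1970-01-01", "city": "Fort Worth"}
print(records_match(rec1, rec2))  # True: raw strings differ, normalized forms agree
```

Even this toy version shows why the approach is fragile: every variation not covered by the lookup tables ("Rob," "Ft Worth" without the period) silently becomes a non-match, which is exactly the failure mode a single national identifier would eliminate.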
A national patient identifier cuts through all of this red tape. It replaces complex methods of patient matching that may or may not be successful, and reduces the likelihood of confusing people with similar information for one another. Overall, both the quantity and quality of information that can be gathered under a unique identifier is greater (and cheaper!) than a haphazard industry approach trying many different solutions.
Why is it important to enable systems to “match” patients? Imagine you’re lying unconscious in an emergency room, and the doctor can’t find the record that indicates your penicillin allergy. While the idea of an ER physician conducting an instant worldwide search for your records is still largely theoretical, realizing that theory is much, much harder without a simple way to identify patients.
The current government policy feels contradictory. On one hand, government initiatives such as the Nationwide Health Information Network (and implementation in CONNECT) suggest that moving health information into the electronic space, to be readily accessible by consumers of health data, is a national priority. On the other hand, policies preventing investigation into a national patient identifier stifle the ease of rolling out such systems nationwide and limit their accuracy.
Privacy is a major concern with a national patient identifier; however, neither the concept of a uniquely identifying number tied to a person nor the risk of a security breach is any different under a national patient identifier than under the systems currently in place. Learning someone’s national patient identifier would reveal little more than the current approach of using names, Social Security numbers, and addresses. Once someone has this information and somehow gains access to the health network, it makes no difference whether they hold an NPI number or the separate bits of identifying data. Moreover, the concept of a unique identifier already exists nationally from birth in the US, in the form of the Social Security number, and at the state level from the point of ID issuance. It is a little-known fact that many states derive driver’s license numbers from an individual’s full name and date of birth. Maryland is one of those states, and from that number alone, anybody can tell who you are. The real security lies in protecting the network and reducing fraud.
While implementing a national patient identifier would be far from easy, it should at least be among the possibilities considered at the national level. It is in the interest of private industry to speak up and express its concerns, since current policy forbids the government from taking any action internally.
Tags: EHR, electronic health records, national patient identifier
The summer CONNECT code-a-thon wrapped up today. This two-day event brings together health IT leaders to share feedback and write code for CONNECT, the open-source software solution that supports health information exchange. CONNECT implements the Nationwide Health Information Network (NwHIN) standards to ensure the compatibility of secure health information exchange throughout the country. The CONNECT solution is used by state and regional health information exchange organizations; by private companies, practices, hospitals, and collaboratives; and in federal agency health information exchange initiatives, including those by the Department of Defense, Centers for Disease Control and Prevention, Department of Veterans Affairs, Social Security Administration, and Centers for Medicare & Medicaid Services.
As members of the CONNECT development team, we were happy to meet the community of CONNECT users and have a chance to hear first-hand how they use CONNECT, what improvements and new features they’d like to see, and how health information exchange fits in with their organizations’ missions. The CONNECT community is passionate and committed – more than 180 people attended this code-a-thon. Here are a couple of the topics the community discussed:
- Understanding how organizations use CONNECT. Given that CONNECT enables standards-based, secure exchange of health information, the code-a-thon included participants who use CONNECT to support their own missions. Dr. Louis Rubenson from the National Disaster Medical System (NDMS) participated in a panel of federal partners who shared how they use CONNECT today and what they'd like to be able to use CONNECT for in the future. NDMS assists in the state and local response to disasters. They use CONNECT to enable patient information to follow a patient throughout the disaster response, from emergency treatment centers to evacuation. Dr. Rubenson identified a need for a slimmed-down version of CONNECT, able to operate efficiently over phone lines in a resource-constrained environment, and encouraged the community to make it happen. In another code-a-thon session, Calvin Beebe from the Mayo Clinic Southeast Minnesota Beacon Community, which helps improve outcomes for adults with diabetes and children with asthma, shared how his organization uses CONNECT. Like many of the participants, he expressed a desire to see an increased focus on performance and scalability as CONNECT continues to mature. This was discussed in several other sessions, in fact....
- Performance and scalability. This topic drove one of the more active discussion groups during the code-a-thon, as many of the participants have started deploying CONNECT to their production servers. We see this as great news in the evolution of the product and the Nationwide Health Information Network (NwHIN) – so many exchanges are occurring that there’s an increased focus on performance and scalability. During one of the breakout sessions, several ideas were floated to help CONNECT scale to handle the expected onslaught of requests as the NwHIN continues to grow. The easiest and probably most obvious answer is to improve the hardware the gateway is installed on. CONNECT uses a significant amount of computing resources to handle the security checks and message orchestration, and it should be deployed on fairly strong hardware; like any application, its performance can be improved by beefing up the hardware beyond the minimum requirements. An out-of-the-box proposal was to front a CONNECT server with an SSL accelerator: that box would handle the processor-intensive SSL transactions, while the layers beneath the accelerator could be switched to communicate among components over unsecured web services. Another approach discussed in detail was simply load balancing the workload across multiple instances of CONNECT, using a software or hardware switch in front of the server farm. Aside from optimizing resource allocation, this has the side benefit of increasing the system’s reliability. For a more software-oriented solution, the group agreed that the biggest performance improvement is to upgrade to the latest CONNECT release – recent refactoring improved the software’s performance. ONC and the FHA have asked the CONNECT team to focus on improving both scalability and performance, so the next scheduled releases will carry additional improvements.
- Open source strategy. CONNECT is aiming to become one of the first truly open-source, community-driven products initially developed by the government. This is no small task, as the government is typically big on control and not so fond of unexpected changes. The good folks at Red Hat gave a fantastic talk about a vision for CONNECT's future that sparked discussion on community rights vs. control over a product. Especially when the product is in the health IT space, knowing that good people are working on the code is important. However, engaging the community in an open source product means allowing contributions in any form. How do you resolve these differences? We talked at length about the distinctions, and consensus began to circle around the idea of having a group of vetted, core contributors (beyond the contracted development team) with permissions to review and commit code, leaving all other registered users with the ability to submit issues, contribute code, attach files to issues, and comment where they feel they can contribute. The final path to a truly open-source model has yet to be finalized, but the discussion is open for community contributions!
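The load-balancing idea from the performance discussion can be sketched in a few lines. This is a toy round-robin dispatcher only, meant to illustrate how requests would be spread across multiple gateway instances; the hostnames are made up, and a real deployment would use a hardware switch or a dedicated proxy in front of the server farm, not application code like this.

```python
# Toy round-robin dispatcher: spread incoming requests across
# several gateway instances (hostnames here are hypothetical).
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, backends):
        self._backends = cycle(backends)  # endless rotation over the instances

    def next_backend(self):
        """Return the instance that should handle the next request."""
        return next(self._backends)

gateways = ["gateway-1:8080", "gateway-2:8080", "gateway-3:8080"]
lb = RoundRobinBalancer(gateways)

# Six requests are spread evenly: each instance is assigned twice.
assigned = [lb.next_backend() for _ in range(6)]
print(assigned)
```

Beyond spreading load, this arrangement buys reliability: if one instance goes down, a real balancer can health-check it out of the rotation while the others keep serving, which is the side benefit noted in the session.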
The Johns Hopkins University Applied Physics Laboratory was a fabulous setting for the dynamic event, and we were inspired and challenged by those that we met. Please comment here if you have thoughts or other insights about the code-a-thon, CONNECT, and health information exchange in general.
(For deeper information, see the Federal Health Architecture, which manages CONNECT in concert with the U.S. Department of Health and Human Services' Office of the National Coordinator for Health IT (ONC). Find out more about CONNECT and download source at www.connectopensource.org - and find CONNECT on Twitter using @connect_project and the code-a-thon hashtag (#CONNECTcodeathon). Find out about 5AM's work on CONNECT, our work on the Nationwide Health Information Network, and our health IT team, which also contributed to this blog post: Brian Humphrey, Mike Hunter, Arthur Kong, Zach Melnick, James Rachlin, and Andrew Sy.)
Recently here on this blog, we discussed the issue of the growing racial divide in personalized medicine – and how it has the potential to leave non-whites behind as we move into a promising new age in medicine.
Since then, on a positive note, 23andMe has launched an initiative called Roots Into the Future, giving away its service for free to 10,000 self-identified African-Americans. It’s a step in the right direction, though with over 100,000 people already in 23andMe’s database, that’s still only about 10% of the total. Of course, there are already African-Americans in their customer base, but the fact that they feel the need to offer this initiative is indicative of the industry-wide divide and the general need for more diverse DNA in the study pool.
This week, the Washington Post published an article that adds yet another wrinkle. Federal patent examiners at the US Patent and Trademark Office have been rejecting patents for molecular diagnostics (a mainstay of personalized medicine) that have not been validated across a variety of racial and ethnic groups. This is controversial partly because “race” and “ethnicity” are not hard genetic concepts – a better breakdown would be along the lines of haplogroups, but even there we run into the sticky problem of ancestry admixture and cultural identification. For instance, Latinos, a very large and diverse group, can come from a range of admixed ancestries spanning European, African, and Native American haplogroups. The label “Latino” is not, in and of itself, particularly meaningful or specific when it comes to genetic markers. While the USPTO’s intentions are good (to make sure we don’t assume that tests derived from genetic information heavily skewed toward one population are valid for everyone), there is clearly more work to be done before we can claim to judge these tests fairly. And on the flip side, we’re going to need more initiatives like 23andMe’s to make sure the database we’re working from isn’t skewed in the first place.
It is also important to note that getting non-whites to participate in DNA testing is not always as simple as it sounds, especially outside the developed world. The US government’s use of DNA spying in the Osama bin Laden case and some famously bungled efforts to work with the DNA of native populations have sown mistrust and bad blood that will be difficult to clear. But if we’re to bring the promise of personalized therapy to everyone, we will need to find ways to move past these differences and establish ethical guidelines that balance openness to constructive science with respect for heritage and tradition. We'll also need to get the regulators up to speed on what does and doesn't constitute race and ethnicity in a genomic world.