Up at 5AM: The 5AM Solutions Blog

Three good reasons why health information exchange is worth the trouble

Posted on Thu, Feb 24, 2011 @ 01:21 PM


Health information exchange, or HIE, has been in the news a lot this week, with many players in health IT gathering at the annual HIMSS (Healthcare Information and Management Systems Society) conference, where sessions formal and informal covered the numerous benefits and complications related to the exchange of health information.

 


    1. Because starting simple is still starting. And, it’s pretty simple. Deep in the bowels of a Continuity of Care Document (CCD), we can run into serious interoperability pain. One system uses SNOMED, and another uses a home-grown code system and syntax. Another system puts the name of the immunization, rather than the product name, into the Free Text Product Name field of the Immunization Module (this is actually happening among our clients; see the sketch after this list). The more we get into actual machine-to-machine exchange, the more we uncover vagueness in the specs and implementation realities that are at odds. But before we get to the really hard stuff, let’s start simply. A waterfall approach doesn’t work in software development, and it sure won’t work in HIE. The Nationwide Health Information Network, which 5AM works to support, has really sophisticated and bleeding-edge exchanges going on every day. Many of the partners are dealing with the really hard exchange and integration details. But they rightly took their time getting there. The first step was getting information from one organization to the next in a “good enough” format, so that a stylesheet could render it on a clinician’s screen. That is the first step – can a clinician get the information and make sense of it? We can, and will, have to tackle the hard stuff as we progress. But get started – let grey matter do its work; just get the information out to the person who needs it. (See: NHIN Direct, the White House’s Aneesh Chopra on building blocks, Google et al.)


    2. Because meaningful use benefits will “help” fund the cost – and MU is right on target. 5AM has supported the Office of the National Coordinator for Health IT (ONC) for several years, and we witnessed the careful work the government put into shaping the MU regulations and strategy. Despite its critics, meaningful use certainly gets us on the path (see above). Why should anyone oppose e-prescriptions, allergy lists, or providing patients with an electronic medical record (see more)? The cynic in me thinks that everyone from Dell to Pizza Hut is suddenly creating a health IT business unit just so they can exploit providers for the incentive money that MU will open up. While that may be true, everyone will benefit from the meaningful use of EHRs. Full EMRs and solutions loaded with bells and whistles aren’t affordable or practical for many groups (especially individual practices and small hospitals). This is where the nimble players can make a real difference: by providing straightforward solutions that enable meaningful use and let the information – and incentive dollars – flow to the people who need it.


    3. Don’t take my word for it. Please. Check out the various presentations from HIMSS this week, especially Mark Anderson’s compelling presentation on how he set up a health information exchange in Texas. He started simple (see #1), and kept track of the benefits, which include:

        • 73% reduction in unnecessary tests;

        • 87% faster delivery of lab results;

        • 80% reduction in paper exchanges between the participating hospitals and doctors’ offices;

        • 78% reduction in medical errors;

        • Immense reduction in ER visits and costs;


      …and the list goes on and on. Truly. Check out his slides here.
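To make the interoperability pain in reason #1 concrete, here is a minimal sketch, in Python, of the kind of sanity check a receiving system might run over an incoming CCD: walk the coded entries and flag any whose code system isn’t a terminology you know how to map. The SNOMED CT and CVX OIDs below are standard HL7 identifiers, but the expected-terminology list, the file name, and the function are our own illustration, not part of any CCD specification.

    import xml.etree.ElementTree as ET

    # CDA/CCD documents use the HL7 v3 XML namespace.
    HL7 = "{urn:hl7-org:v3}"

    # Terminologies this (hypothetical) receiver knows how to map.
    # OIDs: SNOMED CT, and CVX ("vaccines administered").
    EXPECTED = {
        "2.16.840.1.113883.6.96": "SNOMED CT",
        "2.16.840.1.113883.12.292": "CVX",
    }

    def flag_unmapped_codes(ccd_path):
        """Yield (code, codeSystem, displayName) for every coded entry
        whose codeSystem OID is not in the expected set."""
        for elem in ET.parse(ccd_path).iter(HL7 + "code"):
            system = elem.get("codeSystem")
            if system and system not in EXPECTED:
                yield elem.get("code"), system, elem.get("displayName")

    if __name__ == "__main__":
        for code, system, name in flag_unmapped_codes("ccd_sample.xml"):
            print(f"unmapped code {code!r} ({name}) from code system {system}")

A check like this won’t fix a home-grown code system, but it tells you, before a clinician ever sees the document, exactly where the mapping work lies.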


Please suggest more reasons. I’ll tick up the number of “good reasons” as you add comments with more. (Since it’s obvious I’m a true believer that information should and can flow, it’s unlikely I’ll write a blog citing the three good reasons why HIE is not worth the trouble, but I welcome your arguments from that perspective too!)


Can a comic book explain health reform and health IT?

Posted on Mon, Feb 21, 2011 @ 01:20 PM

Imagine our surprise when we read a couple of weeks ago that Jonathan Gruber, the MIT economics professor who helped compose President Obama’s health care bill, is writing a comic book to describe health care reform. It makes a lot of sense to use a simple format to explain something complex. (Info here.)

 

But we were especially surprised because we had a comic book of our own in the works. Today, we’re happy to launch The Glassbox 5 comic series, featuring our first hero, the Data Wrangler. He’s a cowboy who can help make sense of all the 0101ATGC data that floods the minds and systems of those in science and medicine. Data “wrangling” is part of what 5AM does for our clients, as we help them find – and share – the information amidst the mighty flow of data.

Now, we’re not claiming that the work 5AM does – using technology to unite people and health information – is anywhere near as complex as the 2,400-page health care reform bill. We find that our clients are successful because we keep it simple from the get-go – by listening to what their needs are, working together to determine the right solution, then doing it and sharing it quickly and continuously to ensure it’s right. (That last bit is what we call the Glassbox development process – see here for details.) Each of our clients has a different and unique problem, but with big ears for listening, and a consistent process for delivery and feedback, we feel we're pretty good at getting it right (100% of our software is in use today).

Check out the Data Wrangler and his fellow Glassbox 5 comic heroes, and share your thoughts. Are you drowning in data? Let us know what you do to keep from going under. As HIMSS heats up today, consider whether a comic book can work for this kind of stuff, or whether we should stay in reality and leave the comics to the Escapists...?

(I'll get off the marketing soapbox now. Kudos and thanks to our great artists, Rich Ellis and Lee Moyer - check them out.)

 


The Bubble, Genetics Exceptionalism and How Genomics Needs to Get Out More

Posted on Thu, Feb 17, 2011 @ 01:19 PM

Last Friday I attended a symposium at the NIH in Bethesda, Maryland (USA) that commemorated the 10-year anniversary of the draft publication of the human genome. It was a great event from start to finish, with a strong lineup of speakers (and it is now available online). I'll summarize some of them at the end of this post (and be sure to check out the Twitter traffic, too), but the last two talks were the ones that hit me the hardest.
The first was Amy Harmon from the New York Times. She started off with 10 minutes of self-deprecation about how this was her first PowerPoint presentation and how she was the only speaker on the agenda with no advanced academic degree (this was not quite true; at least one of the personal genomics panel members was a layperson, too). She has been writing articles about people who've been directly affected by genetic tests, including a woman who had a prophylactic mastectomy because of a positive BRCA1 test result and another woman who had a positive genetic test for Huntington's disease, for which there is no cure. Harmon's main plea was that she didn't think scientists spent enough time communicating their results to the general public. She gave examples of not being able to easily get in touch with scientists because they were overly focused on their research and on publishing in scientific journals.
She also gave a somewhat scathing take on pay-to-read scientific journals, including showing this article which, somewhat comically, requires a subscription to read. She said it was frustrating that taxpayers could not read the fruits of their scientific funding without paying again for the privilege. Now I take her example article as pretty indefensible (given that it was an article about making data available that can itself only be read with a paid subscription), but to be fair to the NIH and other funding agencies, PubMed Central is set up as a free repository for government-funded research publications, although I'm pretty sure not all NIH-funded research is available there yet. That aside, her statements about the lack of communication between scientists and the public did give me a little pause.
Her talk set the stage nicely for the last talk of the day, by Maynard Olson. He began by talking about how, during the Human Genome Project, the genome came to be seen as a concrete thing to focus those efforts on, as opposed to the rest of biology, which is more complex and nuanced. But now, due in part to the ability to look at many, many genomes using new technologies, he claimed that the genome is looking more and more like the rest of biology. This makes sense given what we now know about the complexity of human DNA variation and the myriad things encoded in our DNA other than protein-coding genes, and given the prevailing view that genome data is only part of a much more complicated picture that includes transcripts, proteins and metabolites, among others. He referred to breaking out of a 'bubble' of simplicity to begin to incorporate all this information into a whole.
But there's even more to it than that. He put up a slide with an article title that referred to 'Genetic Exceptionalism'. Now I have to admit I'd never heard this phrase before, or at least had not internalized what it meant. It refers to the view that genetic data is somehow fundamentally different from any other kind of health and medical data. For instance, consider whether your genotype data really differs in kind from your blood pressure, your family history of disease, your diet, or how much time you spent in the sun as a child. There are plenty of opinions on this concept and I don't want to spend too much time on it here. My first reaction is that one big difference between genetic data and most other kinds of medical and health data is that genetics can be used to uniquely identify individuals. But in most other ways it really is no different, and you could argue that with enough medical record and personal data you could easily identify people, too. I think his point was that the field of genomics has been suffering from this attitude, looking inward at genomic data rather than outward to integrate it with other biomedical data. Clearly there are efforts to do this, but they are really just getting started.
Companies like 23andMe are clearly being exceptionalist in that they only report genetic data. To understand how silly this is, think about how they give you a lifetime relative risk for lung cancer without even knowing whether you are a smoker. In fact, the NHGRI itself is similarly exceptionalist; why not make genetic research part of each disease-focused institute? This clearly is happening to some extent, especially in cancer, but I do wonder how long the NHGRI can realistically be seen as a viable independent entity.
He closed with a call for a 'new message' to move the field forward – a more integrated message that puts genome data in its rightful place in the larger collection of biomedical data. But he also circled back to Amy Harmon's point that this message needs to be directed not just at scientists and physicians, but at the general public.
I think this is critical, too. In an earlier post I talked about my 23andMe results. Since then I have mentioned it to 3 different physicians and none of them had ever heard of 23andMe. I was at a Biotechnology Meetup last week with about 20 people from the local biotech community, and nobody there, except a colleague of mine, was a 23andMe customer. I think it's very easy for me, since I've been in this field for over 15 years, to forget that the vast majority of the public doesn't really know very much about genetics, and what they do know might well be skewed or downright incorrect.
Harmon and Olson were making the point that this is a serious issue and that outreach from the scientific community is an obvious way to improve the situation. I think the risk is that patients, doctors, lawmakers, venture capitalists and regulators, among others, with incomplete knowledge of genomics will overwhelm an educated and well-intentioned scientific community. If that happens, effective laws and funding could suffer.
My small part in this is to offer myself as a speaker or educator to anyone who wants to know more about how genomics is being used in medicine. I'd be happy to talk to 3rd graders, seniors, or anyone else in between. I can show people my 23andMe results, talk about what a genome-wide association study is, and explain the structure of DNA. I'll be happy to go anywhere, although if it's outside of the Washington, DC (USA) area, you'll have to be realistic about scheduling and feasibility. I can be reached at wfitzhugh@5amsolutions.com.
There was also a lot more interesting stuff earlier in the day. Eric Green, the director of the National Human Genome Research Institute (NHGRI), and NIH Director Francis Collins started things off with talks about the past 10 years and the future of genomic research. Eric Lander followed with his usual passionate and articulate genomics summary talk, looking backward and to the future. Other speakers included Sean Eddy, Rick Lifton, and a panel discussion of personal genomics that included James Watson and Misha Angrist.
Since I worked on the draft of the human genome, it was a great time to remember all that hard work and the payoff. I do think the role of Celera in lighting a fire under the public Human Genome Project could have been acknowledged, at least in passing, but that is a minor quibble.
The entire symposium was videocast on the web and, as I said, is now available as a recording. On a self-centered note, 5AM Solutions' SNPTips tool got a shout-out from Sharon Terry of the Genetic Alliance (238:30 in the video), and I asked the panel a question about where personal genomic data will be stored in the future (256:00). The answer was: lots of places. With lots more genome data soon to be generated, this clearly needs to be addressed. But that's another blog post.

I'm Done... Except For the Tests

Posted on Thu, Feb 10, 2011 @ 01:19 PM

I never want to hear these words in relation to a coding task. When someone says this, my internal translation is "I have no idea how complete my task is, and can't show what is and is not working for the time I have already spent." Tests are how you know you are done, not the other way around. All too often, these words are followed later by an explanation of why getting to "done" took longer than anticipated. As Kent Beck said, "Any program feature without an automated test simply doesn't exist."

Engineers talk in those terms because (to them) work has been done, problems have been overcome, and progress has been made. Everyone agrees that tests are needed. This isn't about the definition of "done." It's an honest self-assessment, meant to answer where a particular bit of code stands. From my perspective, though, that statement demonstrates a problem with the engineer's personal software process (PSP). Good PSP dictates breaking tasks down into discrete, sequential subtasks. An engineer should be able to say with very high precision which problem they are currently working on. "I'm done except for the tests" indicates that the task breakdown isn't happening, or that there's a disconnect between how the engineer broke the task down mentally and the actual work process. Consequences include reduced reliability of the self-assessment and increased schedule risk.

Getting rid of "done except for the tests" requires changes from two sides. Engineers need to reduce their increment of reportable progress and report at this new, more granular level. Test-driven development's (TDD's) red/green/refactor loop, among its many other benefits, forces a good breakdown into discrete subtasks; more frequent commits with good CI checks are another excellent strategy. Project managers or CSMs should actively encourage measurable progress against an engineer's breakdown of a task. The special case of being done except for the tests should be dealt with aggressively, because it signals a breakdown of personal software process and adds risk to delivery.
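To make the red/green rhythm concrete, here's a minimal sketch using Python's built-in unittest. The slugify function and its one requirement are invented for illustration – the point is the size of the increment, not the feature.

    import unittest

    def slugify(title):
        """Green step: just enough code to pass the test below."""
        return "-".join(title.lower().split())

    class SlugifyTest(unittest.TestCase):
        # Red step: this test is written first and run to fail,
        # which proves the subtask is well defined and checkable.
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slugify("Done Except For the Tests"),
                             "done-except-for-the-tests")

    if __name__ == "__main__":
        unittest.main()

Each red/green pass (plus the refactor that follows) is a complete, reportable increment of progress – there is never a moment when the code is "done except for the tests."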

Next Generation Sequencing Gets Agile: Notes From AGBT

Posted on Wed, Feb 09, 2011 @ 01:18 PM

I had the pleasure of attending my first Advances in Genome Biology and Technology (AGBT) meeting last week on Marco Island, Florida. This blog post’s title is how I, as a bioinformatician, would sum up the meeting, although other candidate titles might be “the sequencing platform I bought last week is already yesterday’s donuts”, “to tweet or not to tweet” or “I narrowly escaped the snow storm and now my colleagues back home hate me”.

So what do I mean by Agile next-generation sequencing (NGS)? First, note the big “A” – that’s intentional, as I’m referring to the Agile software development process. This is something near and dear to our hearts here at 5AM, and if you haven’t heard of it we've got some great material on our web site. In a nutshell, Agile is a lightweight development method favoring frequent deliveries to the customer and quick response to change, driven by motivated and often small teams of stakeholders. Although Agile is a software development methodology, it shouldn’t be much of a stretch to think of the development of molecular biology workflows in the same way. My view of Agile science is a small team of scientists and technicians forming a hypothesis, conducting an experiment, and using the outcomes to direct the next experiment in a transparent and nimble fashion.

Until recently, NGS has not been Agile. If you look at the last few years of sequencing platform development, the main driver bringing us sequencing data faster and cheaper has been the ability to massively multiplex sequencing reactions. The sequencing process in technologies like SOLiD or Illumina (Solexa) actually runs slowly – sometimes taking two weeks to finish – but since millions of sequencing reactions occur at once, the data throughput of these instruments is immense. The library preparation and clean-up processes are also lengthy, ranging from days to weeks. Add in the fact that a single run on one of these very expensive instruments will cost you several thousand dollars, and you end up with long iterations preceded by lengthy planning and followed by even lengthier analyses.

At last year’s AGBT, Pacific Biosciences and Ion Torrent both introduced us to technologies that could transform NGS application development into an Agile process. At that time, though, the instruments were better at generating press releases than sequence data in a customer’s lab. With multiple installations of PacBio RS and Ion Torrent PGMs now in the field, this year’s AGBT marked the first time real data was presented from customers of these “Third Generation” sequencers. Short run times (now hours rather than days), easy prep and cheap reagent costs all make Agile science a possibility. Adding to the agility theme, Illumina announced a competing instrument, the MiSeq, and posters and talks were peppered with new, faster sample preparation systems.

So what can we expect from Agile NGS? For software, Agile development promises shorter time to working software, adaptability to changing requirements and sustained development at a constant pace. With a more agile sequencing platform, I believe we can expect much the same: shorter time to scientific results, the ability to adapt experiments to changing realities in the lab and to new findings, and smoother workflows for settings outside of production genome centers. Agile NGS is much better suited for developing diagnostics and new assay technologies, and it fits more naturally with the process of scientific discovery. It will also place pressure on the informatics/analysis side to maintain the same agility, lest it become a bottleneck. That’s a challenge we’re looking forward to here at 5AM, and we'd love to talk to others who are facing the same issues.

Here's one way to securely exchange health information

Posted on Thu, Feb 03, 2011 @ 01:18 PM

Yesterday, one of our clients, the Office of the National Coordinator for Health IT (ONC), announced the pilot launch of its newest project, the Direct Project (http://wiki.directproject.org/). The project represents an innovative, simple approach to enabling the exchange of health information.

"This is an important milestone in our journey to achieve secure health information exchange, and it means that health care providers large and small will have an early option for electronic exchange of information supporting their most basic and frequently-needed uses," said Dr. David Blumenthal, the National Coordinator for Health Information Technology. 

The Direct Project brought together public and private organizations to collaborate on defining some specifications to enable health information exchange. ONC’s open-armed approach allowed state and federal agencies, and companies and healthcare providers large and small, to actually work together for, well, the greater good.  Each participant had a stake in the game, and was motivated to see it realized. All of us in health IT know the pain of being able to exchange information on a one-to-one rather than one-to-many basis. We regularly suffer the pain of mapping data from one system to another. As more groups engage in HIE, the multiple-mapping problem is only going to get worse, so we’re all motivated to come to a “good enough” common agreement about how it should be done.
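A quick back-of-the-envelope calculation shows why that multiple-mapping problem gets worse so fast. With point-to-point exchange, every system needs a custom mapping to every other system; with a “good enough” common format, each system needs just one mapping in and one out:

    # n participating systems: pairwise interfaces vs. a common format.
    def pairwise_mappings(n):
        return n * (n - 1)   # one custom mapping per ordered pair

    def common_format_mappings(n):
        return 2 * n         # each system maps to and from the shared format

    for n in (5, 20, 100):
        print(n, pairwise_mappings(n), common_format_mappings(n))
    # 5 systems:   20 vs. 10
    # 20 systems:  380 vs. 40
    # 100 systems: 9900 vs. 200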

"This is a new approach to public sector leadership, and it works," said Aneesh Chopra, the United States Chief Technology Officer.  "Instead of depending on a traditional top-down approach, stakeholders worked together to develop an open, standardized platform that dramatically lowers costs and barriers to secure health information exchange. The Direct Project is a great example of how government can work as a convener to catalyze new ideas and business models through collaboration."

The two initial pilot programs really will test out whether the Direct specifications will work for information exchange...
  • Minnesota’s Hennepin County Medical Center (HCMC) is using Direct to send immunization records to the Minnesota Department of Health (MDH).
  • The Rhode Island Quality Institute (RIQI) will use Direct in two ways. First, it will use Direct to send patient information from provider to provider. Second, it will use Direct to securely feed clinical information, with patient consent, from practice-based EHRs to the statewide HIE, currentcare. (I have to admit I particularly like this one, as it provides a way to connect Direct information with the Nationwide Health Information Network, which 5AM supports.)
And there are several other pilot projects in the works, in Tennessee, New York, Connecticut, Oklahoma, California, and South Texas. Cool stuff.

What is the Direct Project, though? The program’s own language is visionary:

The Direct Project was launched in March 2010 as a part of the Nationwide Health Information Network, to specify a simple, secure, scalable, standards-based way for participants to send authenticated, encrypted health information directly to known, trusted recipients over the Internet in support of Stage 1 Meaningful Use requirements.  Participants include EHR and PHR vendors, medical organizations, systems integrators, integrated delivery networks, federal organizations, state and regional health information organizations, organizations that provide health information exchange capabilities, and health information technology consultants.

Information transfers supported by Direct Project specifications address core needs, including standardized exchange of laboratory results; physician-to-physician transfers of summary patient records; transmission of data from physicians to hospitals for patient admission; transmission of hospital discharge data back to physicians; and transmission of information to public health agencies.  In addition to representing most-needed information transfers for clinicians and hospitals, these information exchange capabilities will also support providers in meeting "meaningful use" objectives established last year by HHS, and will thus support providers in qualifying for Medicare and Medicaid incentive payments in their use of electronic health records.

Lofty goals. But the mechanism to enable this health information exchange really couldn’t be simpler: SMTP, S/MIME, and X.509. The way I like to explain it (and I’m happy to be refuted by those more closely involved with the Direct Project) is that it’s a secure electronic fax (yes, that is an oxymoron). You have the address of the person or entity who should receive the info, and you send it to them. I know there are proponents of more sophisticated, presumably more “secure” ways to send info from one person to another, but this group of distinguished and invested collaborators was able to devise a solution by invoking the “keep it simple, stupid” methodology. I can’t fault them for that. Their resulting specification/tech stack is simple, secure, scalable, and standards-based – just what they set out for it to be.
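For the curious, here’s a rough sketch, in Python, of the “secure electronic fax” shape of a Direct-style message: an ordinary email whose payload is S/MIME-protected before it ever touches SMTP. The addresses, server name, and smime_wrap placeholder are hypothetical illustrations – a real Direct implementation delegates the signing and encryption to a proper S/MIME security agent, typically run by a HISP.

    import smtplib
    from email.message import EmailMessage

    def smime_wrap(payload):
        """Placeholder for the security step: a real implementation would
        S/MIME-sign the payload with the sender's key and encrypt it to
        the recipient's X.509 certificate. Returned unchanged here so
        the sketch stays self-contained."""
        return payload

    # Hypothetical Direct-style addresses.
    msg = EmailMessage()
    msg["From"] = "dr.jones@direct.example-clinic.org"
    msg["To"] = "referrals@direct.example-hospital.org"
    msg["Subject"] = "Summary of care record"
    msg.set_content("CCD attached.")

    with open("ccd_sample.xml", "rb") as f:
        msg.add_attachment(smime_wrap(f.read()),
                           maintype="application", subtype="xml",
                           filename="ccd_sample.xml")

    # Ordinary SMTP (here with TLS) carries the already-protected payload.
    with smtplib.SMTP("smtp.example-hisp.org", 587) as smtp:
        smtp.starttls()
        smtp.send_message(msg)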

Will the Direct Project change the world by opening up a way to securely exchange health information as easily as sending a fax or an email? Let’s let the pilot projects prove that out. I’m amazed that the “get them all in a room and see what happens” collaboration model actually yielded the simplest workable solution. We can only fret about how hard these issues are for so long. It’s time to do something already – I’m glad to see that happening.

More info on ONC can be found here: http://www.healthit.gov/. Info on the Direct Project can be found here: http://wiki.directproject.org/. And info on ONC’s latest collaboration project, which seeks "volunteers to collaborate on interoperability challenges critical to meeting Meaningful Use objectives for 2011" by TOMORROW, is here. (I’ll write more on that one later).
