Machine Learning Used to Predict Lab Results


At academic medical centers, start-ups and established technology companies, the application of machine learning models to healthcare data is an active and growing enterprise.

Historically, much of the focus has been on image interpretation (e.g., radiological studies, pathology slides or retinal images), but machine learning can also be applied to large datasets of structured, non-image data from the electronic health record.

Led by Dr. Jonathan Chen, Assistant Professor of Medicine and Biomedical Informatics at Stanford, and Dr. Song Xu, a former post-doctoral researcher at Stanford, a collaborative, multi-disciplinary group of researchers from UCSF, the University of Michigan and Stanford recently completed a study predicting the likelihood of a lab abnormality before the lab is ordered, across a wide range of common tests.

The full results are now published in JAMA Network Open.

I had the opportunity to e-interview Dr. Nader Najafi, Associate Professor of Medicine at UCSF, who was the lead researcher from UCSF.

Q: Why does this study matter? Is there a problem with lab over-utilization, and what are the negative effects?

A: Lab over-utilization is a significant issue. While no single test among the top 20 labs ordered in the hospital is especially expensive, the sheer volume of ordering makes the aggregate cost substantial, and reducing that cost matters for hospital budgeting. In addition, hospital care often means multiple blood draws each day, and the pain of repeated needle sticks is a real burden on the patient experience that warrants alleviation.

Q: What surprised you about the results of this study?

A: While we had theorized it would happen, it was still a bit surprising that the accuracy of the model trained at Stanford noticeably dropped when tested at the other medical centers. This is a good reminder that machine learning models learn not only patterns in patients and diseases but also the patterns of clinical care that come through in the data on which they train. A Stanford-trained algorithm will logically perform best for Stanford patients.

Q: What do you think this study means for clinical practice?

A: Most clinical decision support in the EHR today is low-complexity and rule-based. The future of decision support is high-dimensional, patient-personalized data fed into complex models trained with machine learning algorithms. This will yield more accurate recommendations that can aid physicians. For example, near the end of my day on a hospital service, I order labs for my patient panel for the next morning. This process takes me a good deal of time because I am careful to think through which tests I truly need and which are unlikely to tell me anything I don't already know. The algorithm highlighted in this study by Xu et al. is a good example of a tool that could help me make those decisions by letting me know where the high-yield information is likely to be.
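To make the idea concrete, here is a minimal sketch of how a tool like this might rank tomorrow's lab orders by expected yield. This is not the study's actual model: the features (last result abnormal, days since last test), the coefficients, and the 0.2 threshold are all invented for illustration.

```python
import math

# Hypothetical logistic-model coefficients for predicting the probability
# that a repeat lab will come back abnormal. Illustrative values only,
# not taken from the Xu et al. study.
WEIGHTS = {"last_result_abnormal": 2.1, "days_since_last_test": -0.4, "bias": -1.0}

def prob_abnormal(last_result_abnormal, days_since_last_test):
    """Logistic model: p = sigmoid(w . x + b)."""
    z = (WEIGHTS["last_result_abnormal"] * last_result_abnormal
         + WEIGHTS["days_since_last_test"] * days_since_last_test
         + WEIGHTS["bias"])
    return 1.0 / (1.0 + math.exp(-z))

def triage_orders(panel, threshold=0.2):
    """Keep only orders whose predicted probability of an abnormal
    (i.e., informative) result exceeds the threshold."""
    return [name for name, feats in panel.items()
            if prob_abnormal(*feats) >= threshold]

panel = {
    "CBC": (1, 1),  # last result abnormal, drawn yesterday -> likely high-yield
    "BMP": (0, 1),  # last result normal, drawn yesterday -> likely low-yield
}
print(triage_orders(panel))  # ['CBC']
```

In practice the physician would still make the final call; the model simply surfaces which orders are likely to be informative.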

Q: What are the implications of this study for patients?

A: Hopefully fewer blood draws! Nobody enjoys a needle poke, so I sympathize with my patients who have to get them frequently. I am looking for ways to reduce that burden while ensuring that patients still receive high-quality care.

Q: What do you think the future/next steps will be?

A: Implementing a machine learning algorithm for physicians to use at the point of care is not as simple as loading the model coefficients into an EHR module and pressing "go". It is complicated: it involves feeding in the right input variables, balancing the risks and benefits of different decision thresholds, identifying the ideal user interface for interacting with the model's output and recommendations, and conveying the right information to the right physician at the right time. This next step is challenging and crucial.
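The threshold-balancing step mentioned above can be sketched with a toy sweep: lower thresholds skip fewer draws but miss fewer abnormal results, and vice versa. The scores and labels below are fabricated for illustration; a real evaluation would use held-out clinical data.

```python
# Hypothetical model scores (predicted probability of an abnormal result)
# and ground-truth labels (1 = result was truly abnormal). Invented data.
scores = [0.05, 0.10, 0.30, 0.55, 0.80, 0.95]
labels = [0,    0,    0,    1,    1,    1]

def tradeoff(threshold):
    """For a given threshold, report the fraction of draws avoided
    (orders with score below threshold are skipped) and the fraction
    of truly abnormal results that would be missed as a consequence."""
    skipped = [label for score, label in zip(scores, labels) if score < threshold]
    draws_avoided = len(skipped) / len(scores)
    missed_abnormal = sum(skipped) / sum(labels)
    return draws_avoided, missed_abnormal

for t in (0.1, 0.3, 0.6):
    avoided, missed = tradeoff(t)
    print(f"threshold={t:.1f}: avoid {avoided:.0%} of draws, miss {missed:.0%} of abnormals")
# threshold=0.6 would avoid 67% of draws but miss 33% of abnormal results
```

Choosing the operating point is a clinical decision, not just a statistical one, which is part of why deployment is harder than model training.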

Q: What was it like working on a data science project across multiple medical centers?

A: Very rewarding! It was great working with such a talented Stanford team. Song helped me understand and configure the data pipeline and Jonathan gave me great tips when the data processing code was slowing my computer to a crawl. It was also great to be a part of validating a machine learning algorithm at multiple sites, something that I believe should be routinely performed for healthcare predictive analytics.