Soapbox

Moving Mountains (of Data): How Clinical AI Is Empowering Primary Care

By Ronen Lavi

Any primary care physician will tell you that the data challenge is real. If you believe the hype, access to more patient data than ever before means that today’s clinicians have a full view of their patients’ health at their fingertips. But the reality often resembles searching for needles in a giant haystack of irrelevant information.

To complicate things further, much patient data is scattered across scanned documents, free-form notes and other unstructured sources. Identifying the relevant data points and piecing them together is a forensic effort that demands time and expertise.

But physicians have enough on their plates already. Their burnout rates are breaking records, with administrative factors such as payer requirements playing an important role. Now they are also expected to manually comb through patient data from electronic health records (EHRs), health information exchanges (HIEs), wearable devices and many other disparate sources.

Enter artificial intelligence (AI). At least in theory, the latest AI tools can apply machine learning and natural language processing (NLP) to navigate and process copious amounts of patient data, synthesize information, identify crucial patterns and present a comprehensive view of a patient’s health.

Here too, the hype can be misleading. While cutting-edge AI holds tremendous potential to revolutionize healthcare, it is not a panacea that will instantly solve the challenges facing physicians. Specialized disciplines such as primary care pose their own complexities for AI solutions. And above all, clinicians’ unique responsibility for their patients’ health and well-being imposes a special demand on AI in the healthcare domain: the need to earn clinicians’ trust.

Given these limitations, how can we create AI solutions that will tackle the challenges clinicians face, and empower them to deliver the best care that they can?

From Data to Insight

The challenge of making sense of large amounts of fragmented, chaotic data so that physicians can spend less time combing through records and more time with their patients is one that I wake up to in the morning and go to bed with at night.

If we want to help physicians scale the Mount Everest of primary care data—or better yet, shrink it into something far more manageable—we must first understand what that entails. It isn’t just a matter of taking general large language models (LLMs) and letting them loose on primary care records. Those models are great at generating text. However, providing insights and supporting clinical decisions based on up-to-date data requires entirely different models. Even AI models trained on general medical literature will have difficulty making sense of the nuances specific to primary care, which is full of unique jargon, abbreviations and other idiosyncrasies. As always, the proverbial devil is in the details. Any AI solution worth its salt must be fluent in the specific idioms of the field.

Tackling primary care documentation also requires dealing with a set of phenomena that many people find surprising. Here are some of our findings after years of developing AI that can help clinicians make sense of primary care documents:

  • 30% of documents are not relevant to the provider for the clinical encounter
  • 60% of documents are improperly identified (e.g., mistitled or disorganized)
  • 25% of documents carry wrong dates or no dates at all

The problem isn’t just the messy data. There can be real clinical ramifications when data is taken at face value without being verified. For example, one common issue we’ve observed is that documents in the EHR will often show the upload date in the “date” field rather than the relevant clinical date. But what if a colonoscopy record uploaded in 2023 describes a procedure done in 2012? In order to be truly indispensable, AI tools need to go beyond the metadata, dive deep into the documentation and surface the full and accurate details.
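To make the date problem concrete, here is a deliberately naive sketch, in Python, of how a system might cross-check the EHR metadata date against the dates actually mentioned in the document body. It is purely illustrative and not the author’s production approach: the function names and the one-year tolerance are hypothetical, and a real system would rely on clinical NLP rather than a regex.

```python
import re
from datetime import datetime

# MM/DD/YYYY dates mentioned anywhere in the document body.
DATE_PATTERN = re.compile(r"\b(\d{1,2})/(\d{1,2})/(\d{4})\b")

def dates_in_text(text):
    """Return every plausible MM/DD/YYYY date found in free text."""
    found = []
    for month, day, year in DATE_PATTERN.findall(text):
        try:
            found.append(datetime(int(year), int(month), int(day)))
        except ValueError:
            pass  # skip impossible dates such as 13/45/2023
    return found

def flag_date_mismatch(metadata_date, text, tolerance_days=365):
    """Flag documents whose body describes events far earlier than the
    EHR "date" field, e.g. a 2012 colonoscopy uploaded in 2023."""
    mentioned = dates_in_text(text)
    if not mentioned:
        return False  # nothing to compare; route to human or model review
    return (metadata_date - min(mentioned)).days > tolerance_days

# The colonoscopy example from above: uploaded in 2023, performed in 2012.
note = "Colonoscopy performed on 06/14/2012. No polyps identified."
print(flag_date_mismatch(datetime(2023, 3, 1), note))  # True -> surface for review
```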

To overcome these obstacles, we’ve developed hundreds of tailor-made, domain-specific AI models, informed by the nuanced understanding and experience of our on-staff physicians, that read primary care documents with a deep understanding of their clinical contexts. These models work in tandem to classify and label documents correctly and extract and structure the relevant clinical information, providing clinicians with patient summaries and actionable clinical insights.
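To illustrate the general shape of such a pipeline (not the actual models, which are proprietary; the stage names and toy rules below are hypothetical), a classify-extract-summarize flow might look something like this:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    doc_type: str = "unknown"                      # set by the classifier stage
    findings: list = field(default_factory=list)   # set by the extraction stage

def classify(doc):
    """Stage 1: label the document type (lab report, referral, imaging...)."""
    doc.doc_type = "lab_report" if "hba1c" in doc.text.lower() else "other"
    return doc

def extract(doc):
    """Stage 2: pull structured clinical facts out of the free text."""
    if doc.doc_type == "lab_report":
        doc.findings.append({"observation": "HbA1c result", "source": doc.text})
    return doc

def summarize(docs):
    """Stage 3: roll the extracted findings up into an encounter-ready view."""
    return [finding for doc in docs for finding in doc.findings]

docs = [Document("HbA1c 7.9% drawn 01/12/2024"), Document("Patient requests refill.")]
print(summarize([extract(classify(d)) for d in docs]))
```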

If depth is one advantage of AI with excellent domain expertise, breadth is another. We’ve found that going beyond the documents immediately available in the EHR, to include documents from HIEs, claims and payer data, and other sources, can help uncover even more suspected diagnoses. This process of coalescing multi-source data into a single coherent source of truth, called data reconciliation, results in a far more robust resource for physicians, and better outcomes for their patients.
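As a rough illustration of data reconciliation, the sketch below merges condition mentions from several hypothetical sources into one deduplicated view keyed by patient and diagnosis code. Field names such as attested_by are invented for this example, and real reconciliation must also resolve conflicts, coding systems and patient matching.

```python
def reconcile(*sources):
    """Merge condition records keyed by (patient_id, icd10), remembering
    every source that attested to the condition for later review."""
    merged = {}
    for source_name, records in sources:
        for record in records:
            key = (record["patient_id"], record["icd10"])
            entry = merged.setdefault(key, {**record, "attested_by": set()})
            entry["attested_by"].add(source_name)
    return merged

ehr    = [{"patient_id": "p1", "icd10": "E11.9"}]   # type 2 diabetes
claims = [{"patient_id": "p1", "icd10": "E11.9"},
          {"patient_id": "p1", "icd10": "I10"}]     # essential hypertension
hie    = [{"patient_id": "p1", "icd10": "I10"}]

merged = reconcile(("ehr", ehr), ("claims", claims), ("hie", hie))
for (patient, code), entry in merged.items():
    print(patient, code, sorted(entry["attested_by"]))
```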

The potential of innovative, expert-level AI solutions to transform disparate data points into helpful insights is nothing less than a game-changer. But to truly deliver on that promise, and help reduce the burden on clinicians, their appreciation of AI’s brilliance is far less important than their being able to trust it.

Thinking Outside the (Black) Box

AI can look like magic, even to the select few who truly understand what’s going on under the hood. Given just a short prompt, AI tools can create stunning imagery, compose pop songs or pass the New York Bar Exam. But it is math, not magic, that powers these algorithms, and what may seem wondrous in many fields can be quite sobering when it comes to health care.

AI’s ability to “hallucinate”—to make things up—is what grants it a veneer of creativity. But as many high school students have discovered to their dismay, those hallucinations can lead to unreliable results. And for physicians who treat patients, the risk is far graver than flunking a homework assignment.

For AI to make a meaningful contribution to health care, its output must be transparent and utterly demystified. It is true that many doctors can make intuitive decisions in the blink of an eye, but those are informed by years of learning and experience. In the sacred realm of physicians’ interactions with their patients, AI tools cannot function as a decision-making black box. They should make suggestions, backed up by clinical evidence. For example, when an AI tool makes a suggestion such as a potential diagnosis or required screening, it should link back to the relevant clinical documentation for the physician to review, augmenting and empowering the physician’s judgment, rather than engaging in the folly of trying to replace her.
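One way to picture this evidence-first principle is a suggestion object that simply cannot be surfaced without pointers back to the chart. The sketch below is illustrative only, with hypothetical type and field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Evidence:
    document_id: str   # which chart document supports the suggestion
    excerpt: str       # the exact passage the physician can review

@dataclass(frozen=True)
class Suggestion:
    description: str   # e.g. a suspected diagnosis or a due screening
    evidence: tuple    # one or more Evidence items; never empty

def present(suggestion):
    """Render a suggestion only if it can show its work."""
    if not suggestion.evidence:
        raise ValueError("refusing to surface an unsupported suggestion")
    refs = "; ".join(f'{e.document_id}: "{e.excerpt}"' for e in suggestion.evidence)
    return f"{suggestion.description} (see {refs})"

s = Suggestion(
    description="Consider colorectal cancer screening",
    evidence=(Evidence("doc-412", "last colonoscopy 06/14/2012"),),
)
print(present(s))
```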

We see this first-hand every day: the successful solutions are the ones that dive deep to deliver valuable clinical insights that might otherwise have been overlooked, and go the extra mile to earn physicians’ trust. Only AI that is fluent in the clinical language, copes with the specific challenges endemic to the field and acts as a responsible assistant rather than a black box will succeed in reducing both missed diagnoses and the burden on physicians.
