How to Improve EHR Interoperability: Steps & Standards

Improving EHR interoperability requires work across multiple layers: adopting modern data exchange standards, normalizing clinical terminology, connecting to national exchange networks, and aligning organizational policies. No single fix solves it. The challenge is both technical and cultural, and the most effective strategies tackle both simultaneously.

Understanding the Four Levels of Interoperability

Before choosing where to invest, it helps to know which layer of interoperability you’re actually trying to fix. HIMSS defines four levels, and most organizations have gaps at more than one.

  • Foundational (Level 1): One system can receive data from another. This is basic connectivity, mostly an IT infrastructure problem.
  • Structural (Level 2): The format of the data is preserved so the receiving system can parse it correctly. Think of this as agreeing on the envelope and packaging.
  • Semantic (Level 3): The meaning and context of the data stay intact. A lab result coded one way in your system means the same thing when it arrives in another.
  • Organizational (Level 4): Governance, legal agreements, consent policies, and integrated workflows that allow institutions to actually trust and use each other’s data.

Most health systems have solved Level 1. The persistent problems live at Levels 2 through 4, where data arrives but can’t be understood, or where policy and trust gaps prevent exchange from happening at all.

Adopt FHIR as Your Exchange Standard

FHIR (Fast Healthcare Interoperability Resources), published by HL7, is now the dominant standard for health data exchange. It uses a resource-based API model, meaning clinical concepts like patients, allergies, procedures, and care plans each get their own standardized data structure. Systems communicate over standard web technologies (REST APIs) using familiar formats like JSON and XML.
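
To see what the resource model looks like in practice, here is a minimal sketch of two FHIR REST calls using Python's requests library. The base URL, patient ID, and token are placeholders, and the sketch assumes a FHIR R4 server; endpoint discovery and authorization are covered in the sections that follow.

```python
import requests

# Placeholder values: substitute your server's FHIR R4 base URL and a
# valid access token for your environment.
FHIR_BASE = "https://ehr.example.com/fhir/R4"
TOKEN = "REPLACE_WITH_ACCESS_TOKEN"
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/fhir+json",
}

# Every clinical concept is a resource with a standard REST address:
# GET [base]/[resourceType]/[id] returns one resource as JSON.
resp = requests.get(f"{FHIR_BASE}/Patient/12345", headers=HEADERS)
resp.raise_for_status()
patient = resp.json()

# The same pattern covers any resource type: a search for this
# patient's allergies uses the standard search-parameter syntax.
allergies = requests.get(
    f"{FHIR_BASE}/AllergyIntolerance",
    params={"patient": patient["id"]},
    headers=HEADERS,
).json()

print(patient["resourceType"], patient["id"])
# Search results come back wrapped in a Bundle resource.
print(allergies["resourceType"], allergies.get("total"))
```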

FHIR is organized into layers of increasing clinical complexity. The base layer handles data types and extensions. Above that sits the exchange layer with REST API and search functionality. Higher layers define resources for administration (patients, practitioners, organizations, devices), clinical data (allergies, problems, procedures, care plans, risk assessments), and clinical reasoning (care guidelines, quality measures). This layered design means you can start with basic data exchange and build toward more sophisticated clinical workflows over time.

If your organization is still relying heavily on older HL7 v2 message feeds or custom interfaces, migrating toward FHIR-based APIs is the single highest-impact technical step you can take. It doesn’t eliminate integration work, but it standardizes the surface area so every new connection doesn’t require a bespoke build.
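
For teams still sitting on v2 feeds, much of the migration work looks like the simplified sketch below: translating fields from a pipe-delimited v2 segment into the corresponding FHIR resource. The segment and mapping are illustrative only, not a complete ORU translation.

```python
# A simplified illustration of v2-to-FHIR translation: one OBX result
# segment becomes one FHIR Observation. Real feeds need a full parser
# (message structure, escape sequences, repetitions); this shows only
# the shape of the mapping work.
obx = "OBX|1|NM|718-7^Hemoglobin^LN||13.5|g/dL|12-16|N|||F"
fields = obx.split("|")

code, display, system = fields[3].split("^")  # OBX-3: what was measured
value, unit = fields[5], fields[6]            # OBX-5/6: result and units

observation = {
    "resourceType": "Observation",
    "status": "final",                        # OBX-11 "F" = final result
    "code": {
        "coding": [{
            # "LN" in v2 denotes LOINC; FHIR uses the canonical URI.
            "system": "http://loinc.org",
            "code": code,
            "display": display,
        }]
    },
    "valueQuantity": {"value": float(value), "unit": unit},
}
```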

Use SMART on FHIR for Third-Party Apps

FHIR handles the data format. SMART on FHIR handles the application layer, giving third-party apps a secure, standardized way to plug into your EHR. The framework uses OAuth 2.0 authorization, meaning apps request specific permissions, users approve them, and the app gets a scoped access token to read or write FHIR resources.

The practical workflow has two launch modes. In a standalone launch, the user opens the app outside the EHR (from a phone or browser) and the app connects to the FHIR endpoint independently. In an EHR launch, the clinician opens the app from within their EHR session, and the system passes context automatically so the app already knows which patient record is active. Either way, the app discovers the EHR’s capabilities through a standard configuration endpoint, obtains an authorization code, exchanges it for an access token, and then makes FHIR API calls using that token.
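
As a concrete illustration, here is a condensed sketch of the standalone launch sequence in Python. The client ID, redirect URI, and endpoint are placeholder values, and a production app also needs PKCE, state validation, and a real callback handler, all omitted here for brevity.

```python
from urllib.parse import urlencode
import requests

FHIR_BASE = "https://ehr.example.com/fhir/R4"    # placeholder endpoint
CLIENT_ID = "my-registered-app"                   # placeholder registration
REDIRECT_URI = "https://app.example.com/callback" # placeholder callback

# Step 1: discover the server's OAuth endpoints from the standard
# SMART configuration document.
smart = requests.get(f"{FHIR_BASE}/.well-known/smart-configuration").json()

# Step 2: send the user to the authorization endpoint to approve scopes.
# (In a real app this is a browser redirect; PKCE and a state value are
# required in current SMART guidance and omitted here.)
auth_url = smart["authorization_endpoint"] + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "openid fhirUser launch/patient patient/*.read",
    "aud": FHIR_BASE,
})

# Step 3: after the user approves, the EHR redirects back with a code,
# which the app exchanges for a scoped access token.
code = "CODE_FROM_REDIRECT"  # captured by the app's callback handler
token = requests.post(smart["token_endpoint"], data={
    "grant_type": "authorization_code",
    "code": code,
    "redirect_uri": REDIRECT_URI,
    "client_id": CLIENT_ID,
}).json()

# Step 4: call the FHIR API with the token. In a patient-scoped launch,
# the token response also carries the active patient's ID as context.
resp = requests.get(
    f"{FHIR_BASE}/Patient/{token['patient']}",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
```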

This matters for interoperability because it means specialty apps, population health tools, patient-facing portals, and clinical decision support can all integrate with any SMART-enabled EHR through the same mechanism. You’re not locked into a single vendor’s app ecosystem.

Standardize Clinical Terminology

Structural interoperability (getting data to arrive in a readable format) is necessary but not sufficient. Semantic interoperability, where the receiving system actually understands what the data means, depends on consistent clinical terminology. This is where many organizations struggle most.

Two coding systems carry most of the weight. LOINC (Logical Observation Identifiers Names and Codes) standardizes lab tests, vital signs, and clinical observations. SNOMED CT standardizes clinical terms for diagnoses, procedures, problems, and interventions. In practice, mapping existing data to these systems requires deliberate effort. A recent pilot study mapping nursing flowsheet data found that 65.5% of source concepts could be successfully mapped to SNOMED CT and LOINC using current terminology coverage. That’s meaningful progress, but it also means about a third of clinical concepts still lack clean mappings.

Established guidelines prioritize LOINC for observation values and measurement codes, and SNOMED CT for clinical concepts like patient problems and nursing interventions. When both terminologies offer an equivalent target code, the same split applies: SNOMED for concepts, LOINC for values. Following these conventions keeps your data consistent with what other organizations expect to receive.
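
In code, these conventions typically show up as a maintained mapping table consulted wherever local data is translated outward. The local codes and system URI below are invented for illustration; the LOINC and SNOMED CT targets are real codes.

```python
# Hypothetical local-to-standard terminology map, of the kind a
# governance team maintains. The local codes are illustrative only.
LOCAL_TO_STANDARD = {
    # Observations and measurements map to LOINC.
    "LAB_HGB": {"system": "http://loinc.org", "code": "718-7",
                "display": "Hemoglobin [Mass/volume] in Blood"},
    # Clinical concepts (problems, interventions) map to SNOMED CT.
    "DX_HTN":  {"system": "http://snomed.info/sct", "code": "38341003",
                "display": "Hypertensive disorder"},
}

def to_codeable_concept(local_code: str) -> dict:
    """Translate a local code into a FHIR CodeableConcept, keeping the
    original code alongside the standard one for traceability."""
    standard = LOCAL_TO_STANDARD.get(local_code)
    if standard is None:
        # Unmapped concepts (roughly a third, per the pilot study cited
        # above) should be flagged for the governance team, not dropped.
        raise KeyError(f"No standard mapping for {local_code}")
    return {"coding": [standard,
                       {"system": "urn:example:local-codes",  # placeholder
                        "code": local_code}]}
```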

The practical step here is investing in terminology governance: a team or process that maps your local codes to standard terminologies, maintains those mappings as standards evolve, and ensures new data elements get coded correctly from the start. Common data models like OMOP integrate multiple standardized terminologies, making it possible to represent a broader range of healthcare data in a single consistent structure.
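
As a hedged sketch of what that integration looks like with OMOP's vocabulary tables (assuming a local SQLite load of the CDM vocabularies; the file name is a placeholder), a source code can be resolved to its standard concept through the 'Maps to' relationship:

```python
import sqlite3

# Table and column names follow the OMOP CDM vocabulary spec; the
# database file is a placeholder for your own vocabulary load.
conn = sqlite3.connect("omop_vocab.db")

def standard_concept_for(source_code: str, vocabulary: str):
    """Resolve a source code (e.g., an ICD-10-CM code) to its OMOP
    standard concept via the 'Maps to' relationship."""
    return conn.execute(
        """
        SELECT std.concept_id, std.concept_name, std.vocabulary_id
        FROM concept src
        JOIN concept_relationship cr
          ON cr.concept_id_1 = src.concept_id
         AND cr.relationship_id = 'Maps to'
        JOIN concept std
          ON std.concept_id = cr.concept_id_2
        WHERE src.concept_code = ?
          AND src.vocabulary_id = ?
        """,
        (source_code, vocabulary),
    ).fetchone()

# Example: an ICD-10-CM hypertension code resolves to a SNOMED standard
# concept, because OMOP designates SNOMED as standard for conditions.
print(standard_concept_for("I10", "ICD10CM"))
```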

Connect to National Exchange Networks

Individual point-to-point connections don't scale. National frameworks exist to solve this, and the most significant is TEFCA (Trusted Exchange Framework and Common Agreement), overseen by ONC.

TEFCA works through Qualified Health Information Networks, or QHINs. These are the central connection points in what’s designed as a “network of networks” for nationwide health information exchange. QHINs use TEFCA’s technical standards to share electronic health information across the country, handling patient identity resolution, authentication, and performance measurement. When your organization connects to a QHIN, you gain the ability to exchange data with any other participant across any other QHIN in the network.

Becoming a QHIN itself is a significant undertaking. It requires being a U.S. entity, completing an application and onboarding process, and signing the Common Agreement. The process typically takes about 12 months. But most health systems won’t become QHINs directly. Instead, you’ll participate through an existing QHIN, which dramatically lowers the barrier to entry while still giving you access to the national exchange infrastructure. If your organization isn’t already connected to a QHIN participant, evaluating your options here should be a near-term priority.

The Payoff: Reduced Duplicate Testing

Interoperability improvements produce measurable returns. One of the clearest is the reduction in duplicate testing. Research published in MIS Quarterly found that when providers shared health information electronically across organizations, duplicate radiology testing dropped substantially. Providers sharing radiology reports across different healthcare organizations saw a 33.5% greater reduction in duplication rates compared to laboratory tests exchanged the same way. Even across all provider types, information sharing was associated with a 13% reduction in duplicate test rates.

Radiology benefits more because imaging is expensive and the results are highly portable. When a clinician can pull up a recent CT scan from another facility, there's a clear reason to skip reordering it. Lab results, while also duplicated less often, have shorter clinical shelf lives, so some re-testing is appropriate. Still, the overall pattern is clear: when systems can actually exchange and display each other's data, redundant work drops.

Meet Evolving Regulatory Requirements

Interoperability isn’t just a technical choice anymore. ONC’s HTI-1 final rule sets concrete requirements for certified health IT, which supports care delivered by more than 96% of hospitals and 78% of office-based physicians in the U.S.

Two requirements stand out. First, the rule adopts USCDI Version 3 as the new baseline data standard within the ONC certification program, effective January 1, 2026. USCDI defines the minimum set of data classes and elements that certified systems must be able to exchange. Moving from v1 to v3 expands what must flow between systems, adding more clinical data elements and social determinants of health. EHR developers can adopt v3 ahead of the deadline, and organizations should be asking their vendors about timelines now.

Second, HTI-1 establishes first-of-its-kind transparency requirements for AI and predictive algorithms embedded in certified health IT. Developers must provide clinical users with a consistent baseline set of information about the algorithms they use, enabling assessment for fairness, validity, effectiveness, and safety. If your interoperability strategy includes clinical decision support tools or predictive models that pull data from multiple sources, these transparency requirements apply.

Tackle the Non-Technical Barriers

The hardest interoperability problems aren’t technical. Inconsistent use of healthcare terminology across institutions creates interpretation challenges even when data flows freely. Organizations may use different local terms for the same clinical concept, and standardizing that terminology is a sustained effort, not a one-time project.

Financial barriers are real. Conforming to new interoperability standards often requires system upgrades, new software, staff training, and ongoing maintenance. Some organizations have avoided adopting standards like FHIR specifically because of the implementation costs involved. Scalability is another concern: as the number of participants and data volume grow, maintaining system performance becomes increasingly expensive.

Framework dependency creates friction too. Many interoperability solutions are built around specific data models or EHR systems, requiring further customization when applied in different environments. The more heterogeneous your technology landscape, the more adaptation work each new standard requires.

Addressing these barriers means budgeting for interoperability as an ongoing operational cost rather than a one-time capital project. It means governance structures that keep terminology consistent over time. And it means choosing standards and networks with the broadest adoption, because every proprietary choice narrows the pool of systems you can exchange data with easily.

Patient Access as an Interoperability Layer

Patients themselves are becoming active nodes in the interoperability ecosystem. Patient portals that provide immediate access to lab results, radiology reports, pathology reports, and clinical notes let individuals carry their own health information between providers. Federal systems have demonstrated this approach: health information exchanges that link EHR data across public- and private-sector providers, with patient permission, give care teams visibility into care transitions and help them close gaps in care proactively.

For organizations looking to improve interoperability, enabling robust patient-mediated exchange is a practical step that doesn’t require waiting for institution-to-institution agreements. When patients can download, share, or authorize transfer of their records through standardized APIs, they bridge gaps that organizational interoperability hasn’t solved yet.
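
As one hedged sketch of patient-mediated export: FHIR defines a Patient/$everything operation that returns a patient's whole record as a Bundle. Not every server implements it, and the endpoint, ID, and token below are placeholders, but where it's available a patient-authorized app can pull a portable copy of the record in a few lines.

```python
import requests

FHIR_BASE = "https://ehr.example.com/fhir/R4"  # placeholder endpoint
PATIENT_TOKEN = "TOKEN_FROM_SMART_LAUNCH"      # patient-authorized token
PATIENT_ID = "12345"                           # placeholder patient ID

# The $everything operation returns the patient's compartment
# (demographics, results, notes, and so on) as a FHIR Bundle the
# patient can carry to, or share with, another provider.
bundle = requests.get(
    f"{FHIR_BASE}/Patient/{PATIENT_ID}/$everything",
    headers={"Authorization": f"Bearer {PATIENT_TOKEN}",
             "Accept": "application/fhir+json"},
).json()

for entry in bundle.get("entry", []):
    print(entry["resource"]["resourceType"])
```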