Epic Radiant is the dominant radiology information system in US hospital environments, and FHIR R4 is the standard every new EHR integration is supposed to speak. In our experience deploying at community hospital sites, those two facts create exactly the conditions for a painful middle ground: a modern API layer sitting on top of decades of HL7 v2 plumbing, with no clear map for which pathway to use when.
This post is a working reference. Not a tutorial on FHIR spec basics, not a sales pitch. Just an honest walk through the places where Epic Radiant + FHIR R4 integration works well, where it quietly fails, and the specific configuration decisions that determine whether a critical-finding alert reaches an ordering provider in under 60 seconds or sits in a queue for four hours.
The HL7 v2 Foundation You Cannot Ignore
Here is the thing: Radiant still runs on HL7 v2 for the core worklist workflow. ORM^O01 orders come in from the EHR, ORU^R01 results go back out. That pipeline is mature, well-tested, and the thing your PACS integration team knows cold. FHIR R4 is available, but it sits on top of that same data model. Understanding that relationship is the first prerequisite for any meaningful integration work.
In Radiant, a study order originates as a Hospital Outpatient Department (HOD) order or an inpatient imaging order. That triggers an ORM message from Epic to the PACS broker, which populates the worklist. When the radiologist reads the study, the final report triggers an ORU back to Epic, where it lands in the ordering provider's chart. This v2 loop has been running reliably at most community hospitals since the late 2000s. Do not touch it unless you have a specific reason.
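To make the loop concrete, here is a minimal sketch of pulling key fields out of an ORM^O01 with plain string handling, no HL7 library. The sample message and the field positions (accession in OBR-18, per the convention discussed later in this post) are illustrative assumptions; verify the indexes against your site's interface spec before relying on them.

```python
# Hypothetical ORM^O01 order message; segment contents are invented for illustration.
SAMPLE_ORM = "\r".join([
    "MSH|^~\\&|EPIC|HOSP|PACS|RAD|202401151030||ORM^O01|MSG00001|P|2.3",
    "PID|1||MRN12345^^^HOSP^MR||DOE^JANE",
    "ORC|NW|PLACER123|FILLER456",
    "OBR|1|PLACER123|FILLER456|71046^XR CHEST 2 VIEWS^CPT|||202401151025|||||||||||ACC2024-0042",
])

def hl7_field(segment: str, index: int) -> str:
    """Return field N of a non-MSH HL7 v2 segment (1-based indexing)."""
    fields = segment.split("|")
    return fields[index] if index < len(fields) else ""

for seg in SAMPLE_ORM.split("\r"):
    if seg.startswith("OBR"):
        print(hl7_field(seg, 4))    # OBR-4: universal service ID
        print(hl7_field(seg, 18))   # OBR-18: accession number, per site convention
```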
The FHIR R4 layer becomes relevant when you need to do three things the v2 channel was not built for: push structured triage events in near real time, expose imaging study metadata to non-radiology consumers, and pass AI-generated findings back to the EHR as discrete data rather than embedded report text. Those are the actual use cases.
FHIR Resources That Matter for Radiology Workflows
Three FHIR R4 resources carry the load for radiology-adjacent integration work. Each maps differently to what Epic Radiant exposes, and each has specific gotchas.
ImagingStudy is the FHIR representation of a DICOM study. In Epic's FHIR R4 implementation, ImagingStudy is read-only from the external API perspective: you can query a study by accession number or patient, but you cannot write back to the ImagingStudy resource to update status. If you need to signal that an AI triage model has reviewed a study, ImagingStudy is the wrong resource. This surprises people. Every time.
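What you can do is query. A minimal read-only lookup by accession number might look like the sketch below; the endpoint URL, the identifier system URI, and the bearer token are all assumptions to replace with your site's Epic FHIR R4 endpoint and registered credentials.

```python
import requests

BASE_URL = "https://example-hospital.org/api/FHIR/R4"  # hypothetical endpoint
ACCESSION_SYSTEM = "urn:oid:1.2.3.4.5"                 # site accession namespace (assumed)

def fetch_imaging_study(accession: str, token: str) -> dict | None:
    """Search ImagingStudy by accession identifier; returns the first match or None."""
    resp = requests.get(
        f"{BASE_URL}/ImagingStudy",
        params={"identifier": f"{ACCESSION_SYSTEM}|{accession}"},
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    entries = resp.json().get("entry", [])
    return entries[0]["resource"] if entries else None
```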
DiagnosticReport is where AI-generated structured findings belong. A DiagnosticReport with status preliminary and a category of RAD is the correct vehicle for a pre-read AI summary in Epic's R4 model. The critical field is resultsInterpreter: set this to the AI system identifier registered in Epic's App Orchard application record, not to a practitioner. Epic's interface team will flag it during sandbox review if you set a human interpreter for an AI-generated report, and they are right to do so. The DiagnosticReport also supports presentedForm attachments (base64 PDF or HTML) and result references to Observation resources for discrete findings. Use both when the AI output supports it; discrete Observations are searchable in Epic's reporting workbench.
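Here is a minimal sketch of the preliminary DiagnosticReport shape just described. The Patient, ImagingStudy, Device, and Observation references are placeholders; your App Orchard registration determines the real resultsInterpreter identifier.

```python
ai_preliminary_report = {
    "resourceType": "DiagnosticReport",
    "status": "preliminary",
    "category": [{"coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/v2-0074",
        "code": "RAD",
    }]}],
    "code": {"coding": [{"system": "http://loinc.org", "code": "18748-4",
                         "display": "Diagnostic imaging study"}]},
    "subject": {"reference": "Patient/example-patient-id"},            # placeholder
    "imagingStudy": [{"reference": "ImagingStudy/example-study-id"}],  # placeholder
    # The AI system registered in your App Orchard record, not a Practitioner:
    "resultsInterpreter": [{"reference": "Device/example-ai-device-id"}],
    # Discrete findings, searchable in Epic's reporting workbench:
    "result": [{"reference": "Observation/example-finding-id"}],
    "presentedForm": [{
        "contentType": "application/pdf",
        "data": "JVBERi0xLjQK...",  # base64 PDF payload, truncated here
    }],
}
```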
Task is the resource for communicating urgency signals that require human action. A FHIR Task with intent: order, priority: urgent, and a focus reference to the ImagingStudy is how you push a critical-finding notification that lands in the ordering provider's Epic in-basket as an actionable item rather than just a result. The Task needs a requester (your App Orchard application) and an owner (the ordering practitioner's Epic FHIR ID). Get the owner reference wrong and the notification silently fails. In our tracking across pilot sites, incorrect owner references accounted for 38% of notification delivery failures in the first month of go-live.
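A minimal sketch of that urgent Task follows. All resource ids are placeholders; the owner must resolve to the ordering practitioner's actual Epic FHIR ID, typically pulled from the order or encounter context.

```python
critical_finding_task = {
    "resourceType": "Task",
    "status": "requested",
    "intent": "order",
    "priority": "urgent",
    "focus": {"reference": "ImagingStudy/example-study-id"},    # the flagged study
    "for": {"reference": "Patient/example-patient-id"},
    "requester": {"reference": "Device/example-ai-device-id"},  # your App Orchard application
    # A wrong owner reference fails silently, so validate this ID before POSTing:
    "owner": {"reference": "Practitioner/example-ordering-md"},
    "description": "AI triage flagged a possible critical finding; please review the study.",
}
```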
Epic App Orchard Integration: What the Approval Process Actually Checks
Community hospital IT teams often underestimate the App Orchard registration timeline. The clinical review component alone typically runs 8-12 weeks for an AI triage tool that writes back to the EHR. Practically speaking, plan for 16 weeks total from first contact to production credentials. That timeline is not a bureaucratic obstacle; it exists because Epic's clinical review team will ask questions your engineering team cannot answer without clinical input.
The App Orchard submission for a radiology AI tool that uses FHIR write access (DiagnosticReport POST, Task POST) requires a documented human review step before any AI-generated content reaches the EHR. This means your architecture needs an explicit "radiologist confirms" gate before the preliminary DiagnosticReport transitions to status final. Systems that attempt to auto-finalize AI reports without a radiologist action will fail clinical review. Full stop.
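Structurally, the gate can be as simple as a guard in the write path. This is a sketch under the assumption that your workflow records which actor confirmed the report; the point is that the transition to final is driven by a human action, never by the AI pipeline itself.

```python
class ReviewGateError(Exception):
    pass

def finalize_report(report: dict, confirmed_by: dict) -> dict:
    """Transition a preliminary DiagnosticReport to final only after human sign-off."""
    if report.get("status") != "preliminary":
        raise ReviewGateError(f"cannot finalize from status {report.get('status')!r}")
    if confirmed_by.get("resourceType") != "Practitioner":
        # A Device (the AI system) is never an acceptable finalizing actor.
        raise ReviewGateError("finalization requires a Practitioner action")
    report["status"] = "final"
    return report
```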
The SMART on FHIR scopes Epic actually grants for write access are more restrictive than the FHIR specification implies. You will not get open-write access to DiagnosticReport for all patients. You get patient-specific write access scoped to the active encounter context. Design your OAuth flow around short-lived encounter tokens, not long-lived patient-level tokens.
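The launch and token-exchange details are site-specific, but the structural consequence is worth sketching: cache tokens per encounter, honor expires_in, and never assume a token outlives the encounter context it was issued for. The class below is an assumption-level sketch, not Epic's API.

```python
import time

class EncounterTokenCache:
    """Short-lived, encounter-scoped token storage, per the access model above."""

    def __init__(self) -> None:
        self._tokens: dict[str, tuple[str, float]] = {}  # encounter_id -> (token, expiry)

    def put(self, encounter_id: str, access_token: str, expires_in: int) -> None:
        # Expire 60s early so in-flight requests never carry a stale token.
        self._tokens[encounter_id] = (access_token, time.time() + expires_in - 60)

    def get(self, encounter_id: str) -> str | None:
        token, expiry = self._tokens.get(encounter_id, ("", 0.0))
        return token if token and time.time() < expiry else None
```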
HL7 v2 and FHIR Coexistence: Where the Wiring Gets Messy
The practical reality at most community hospital sites is a hybrid architecture: HL7 v2 for worklist events, FHIR R4 for structured AI output, and a middleware layer (Rhapsody, Mirth Connect, or Epic's own Bridges interface engine) translating between them. That middleware is where integration bugs live.
Specifically: HL7 v2 ORM messages carry accession numbers in OBR-18. FHIR ImagingStudy resources carry the accession in identifier.value with system set to the site's accession namespace. If the middleware normalizes accession format differently for v2 vs FHIR, your correlation logic breaks. Study A in the v2 feed becomes Study B in the FHIR query. This is a silent failure: the AI system processes Study B's data, generates a DiagnosticReport, and attaches it to the wrong patient context. We have seen this exact failure mode at three separate community hospital deployments. Always build explicit accession correlation validation in your integration layer and log mismatches to a dead-letter queue before any AI output is written back to the EHR.
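A sketch of that correlation check follows. The normalization rule here (trim, uppercase, strip leading zeros) is an example, not a recommendation; the real rule must match exactly what your middleware does, and the dead-letter queue can be any durable store your team already operates.

```python
import logging

log = logging.getLogger("accession-correlation")

def normalize_accession(raw: str) -> str:
    """One canonical form, applied identically to v2 (OBR-18) and FHIR values."""
    return raw.strip().upper().lstrip("0")

def correlate(v2_accession: str, fhir_accession: str, dead_letter: list) -> bool:
    """Gate every EHR write-back on an exact accession match."""
    if normalize_accession(v2_accession) == normalize_accession(fhir_accession):
        return True
    log.error("accession mismatch: v2=%r fhir=%r", v2_accession, fhir_accession)
    dead_letter.append({"v2": v2_accession, "fhir": fhir_accession})
    return False
```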
The ORU^R01 result message and the FHIR DiagnosticReport should not carry duplicate final reports to the EHR. If both channels are active, the ordering provider sees two reports for the same study — one as a result message in the chart, one as a FHIR resource in the clinical document section. This confuses clinical staff and creates reconciliation questions during audits. Define a clear ownership boundary: HL7 v2 carries the radiologist's final signed report, FHIR carries AI-generated preliminary content only. Document that boundary explicitly in your integration specification and make sure Epic's interface team agrees before go-live.
Worklist Signaling and Critical Alert Timing
The 60-second target for critical-finding notification is achievable. Hitting it consistently requires understanding the latency sources in the full chain.
DICOM receipt to PACS worklist confirmation runs 15-30 seconds at a typical community hospital PACS installation. AI triage model inference adds another 60-90 seconds. A FHIR Task POST to Epic takes roughly 2-4 seconds if the token is current and the Epic FHIR R4 endpoint is responding normally. Epic in-basket delivery to the ordering provider averages under 10 seconds once the Task is accepted. Total: roughly 80-135 seconds from DICOM acquisition to provider notification for a non-critical study. For critical findings, the triage model runs a fast-path inference on chest and head CT modalities that completes within 45 seconds of DICOM receipt.
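The same stage latencies as a simple budget check, for teams who want the arithmetic explicit. The ranges are the planning figures quoted above (the in-basket lower bound is an assumption, since only the average is quoted), not guarantees.

```python
STAGES = {                           # (best, worst) in seconds
    "dicom_to_worklist": (15, 30),
    "ai_inference":      (60, 90),
    "fhir_task_post":    (2, 4),
    "inbasket_delivery": (5, 10),    # lower bound assumed
}

best = sum(lo for lo, _ in STAGES.values())    # 82s
worst = sum(hi for _, hi in STAGES.values())   # 134s
print(f"standard path: {best}-{worst}s end to end")
```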
The failure modes are almost never the AI inference time. They are: expired SMART tokens causing FHIR POST failures that silently retry with exponential backoff; Epic FHIR R4 endpoint rate limiting (Epic imposes a 200 requests/minute limit per registered application at most community hospital instance sizes); and HL7 acknowledgment delays from overloaded interface engines during peak imaging volume periods (8am-12pm in most community hospital radiology departments). Build retry logic for each of these with alerting when retries exceed two attempts on any Task resource.
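A sketch of that retry policy: exponential backoff on FHIR POST failures, with an alert once two retries have failed on a single Task. The `post_task` and `page_oncall` callables stand in for your HTTP client and alerting hook; they are assumptions, not a library API.

```python
import time

def post_with_retry(task: dict, post_task, page_oncall, max_attempts: int = 5) -> bool:
    """Deliver one Task with exponential backoff; escalate after two failed retries."""
    for attempt in range(1, max_attempts + 1):
        try:
            post_task(task)
            return True
        except Exception as exc:  # narrow this to your HTTP client's error types
            if attempt == 3:       # initial attempt plus two retries have failed
                page_oncall(f"Task delivery failing after {attempt} attempts: {exc}")
            # Backoff also helps stay under Epic's per-application rate limit.
            time.sleep(min(2 ** attempt, 60))
    return False
```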
Practical Starting Point for Integration Teams
Honestly, the most important thing is sequencing the work correctly before touching a single API endpoint. Get the App Orchard application record registered early — before your engineering team has written a line of integration code. The clinical review requirements will shape your architecture in ways you cannot anticipate from reading the FHIR spec.
Second, run a full v2-to-FHIR accession correlation audit in your sandbox environment using at least 500 real studies from the hospital's existing PACS archive. Accession normalization bugs are invisible until you test at volume. They surface at exactly the wrong time otherwise: during a live go-live week when everyone is watching.
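The audit itself is a loop, not a product. A sketch, assuming an `iter_archive_studies` helper over your PACS archive export and the `fetch_imaging_study` lookup sketched earlier; the identifier access is simplified to the first entry.

```python
def run_accession_audit(iter_archive_studies, fetch_imaging_study, token: str) -> list:
    """Compare archived v2 accessions against FHIR lookups; return mismatches."""
    mismatches = []
    for study in iter_archive_studies(limit=500):
        fhir_study = fetch_imaging_study(study["accession"], token)
        got = fhir_study["identifier"][0]["value"] if fhir_study else None
        if got != study["accession"]:
            mismatches.append((study["accession"], got))
    return mismatches
```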
Third, treat the DiagnosticReport status lifecycle (registered → preliminary → final) as a clinical workflow, not a technical state machine. Each transition needs a defined human or system actor and a documented audit record. Epic's compliance team will ask for that documentation during go-live review, and having it organized beforehand reduces the go-live delay by weeks.
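One way to make that concrete is a transition table that names the allowed actor role for each step and emits an audit record on every move. The roles below are illustrative assumptions; map them to your site's actual roles during go-live review.

```python
from datetime import datetime, timezone

ALLOWED_TRANSITIONS = {
    ("registered", "preliminary"): "ai-system",    # AI output posted
    ("preliminary", "final"):      "radiologist",  # human sign-off required
}

def record_transition(report_id: str, old: str, new: str, actor: str, role: str,
                      audit_log: list) -> None:
    """Enforce the actor rule for a status change and append an audit record."""
    if ALLOWED_TRANSITIONS.get((old, new)) != role:
        raise ValueError(f"{role!r} may not move {old} -> {new}")
    audit_log.append({
        "report": report_id, "from": old, "to": new,
        "actor": actor, "role": role,
        "at": datetime.now(timezone.utc).isoformat(),
    })
```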
The integration is achievable at community hospital IT resource levels. It requires planning, not heroics.