If you have ever looked behind the scenes of a radiology department, you know the truth that nobody advertises. Every clinic runs on a fragile mix of schedulers, imaging systems, dictation tools and half-forgotten integrations that only work because someone once begged them to. Our customer set out to fix that reality. They are building a commercial radiology platform that finally brings scheduling, imaging, reporting and real-time collaboration into one clean product experience.

In our previous two posts we explored the product side of that vision. First we showed the unified clinical workflow that brings order to radiologists’ daily routines. Then we revealed the adaptive HL7 layer that survives the endless variations of how clinics format and customize their messages.
Now it is time to talk about the engine room. This post takes you inside the AWS backend that orchestrates microservices, handles HL7 messages through a secure pipeline, keeps worklists fresh for dozens of simultaneous users and stays calm even when exam volume jumps without warning.
If your team builds digital health products or manages clinical integrations, the next chapters give you a practical blueprint for designing an architecture that stays stable under pressure, adapts to new customers fast and keeps sensitive medical data fully protected.
A quick look at the product behind this AWS backend
The platform behind this architecture is not a simple viewer or another reporting tool. It is a cloud-native radiology command center that sits on top of the systems groups already use, including PACS for image storage, RIS for scheduling and exam management, dictation tools, reporting systems and other clinical applications that keep a radiology practice running. It brings all of them together in one clean cockpit for the team’s daily work. Radiologists see one intelligent worklist instead of dozens of windows, get clinical context in one place and move through their day without fighting the tools.

On the surface it looks like a polished web application where exams are prioritized, workloads are balanced across the group and quality workflows such as peer review and follow-up run in the background. Under the hood it depends on a constant stream of HL7 messages, live status updates and tight coordination between many moving parts.
The AWS architecture behind it is designed to make that possible, which is why we focus the rest of this post on how it is structured and how its pieces work together.
How we built an AWS architecture for healthcare
The easiest way to think about this architecture is as a set of clear layers that each solve a specific problem. Users at the top, application logic in the middle and data at the bottom, all wrapped in security and observability.

User access and static frontend
From the user’s point of view the platform is just a fast web app. Route 53 handles DNS resolution and CloudFront delivers the UI from edge locations close to each user. The UI itself is a static single-page application stored in S3 and served through CloudFront, so radiologists get quick load times even when they work from different locations.
The interesting part is how the team hides the usual rough edges of static apps. Two small functions help CloudFront keep navigation smooth. Lambda@Edge runs at the edge and handles lightweight checks before requests even reach the origin. A regular Lambda function works behind CloudFront and catches outdated or broken URLs. If someone refreshes a deep link or follows an old bookmark, they do not see a 404 page; they land back in a valid view of the application. In a busy radiology workflow, that kind of resilience matters more than any DNS detail.
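To make the deep-link fallback concrete, here is a minimal sketch written as a Python Lambda@Edge-style origin-request handler. The event shape follows CloudFront’s Lambda@Edge record format, but the routing heuristic (paths without a file extension are treated as app routes) is our assumption, not necessarily the team’s exact logic.

```python
# Sketch of an SPA deep-link fallback for a CloudFront origin-request
# trigger. Real files keep their URI and can still 404 honestly;
# extension-less paths are rewritten to the app shell so the SPA
# router can resolve them.

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    uri = request["uri"]

    # "/static/app.js" has a file extension and passes through untouched.
    # "/worklist/123" does not, so it is rewritten to /index.html.
    last_segment = uri.rsplit("/", 1)[-1]
    if "." not in last_segment:
        request["uri"] = "/index.html"
    return request
```

With this in place, S3 serves `index.html` for any refreshed deep link and the client-side router restores the view the user expected.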
Secure HL7 entry channel
Web users and clinical systems never share the same door. HL7 messages from hospital systems travel through a site-to-site VPN into the private side of the VPC. They never touch the public web tier.
This VPN path is dedicated to confidential clinical data. It delivers HL7 straight to the services that validate and normalize it, which means the platform can accept live exam updates from different sites without exposing endpoints on the internet.
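As a rough illustration of the first step of that validation and normalization, the sketch below splits a raw HL7 v2 message into segments and pulls out routing metadata. The field positions follow the HL7 v2 standard MSH layout; the returned keys and the sample message are illustrative assumptions, not the platform’s actual schema.

```python
# Minimal sketch of HL7 v2 intake: split a raw message into segments
# and extract the metadata needed to route it. Segments are separated
# by carriage returns; fields within a segment by "|".

def extract_routing_info(raw: str) -> dict:
    segments = [s for s in raw.replace("\n", "\r").split("\r") if s]
    msh = next(s.split("|") for s in segments if s.startswith("MSH"))
    # In MSH the field separator itself counts as MSH-1, so list
    # index 8 is MSH-9 (message type), index 9 is MSH-10 (control ID).
    return {
        "sending_app": msh[2],    # MSH-3
        "message_type": msh[8],   # MSH-9, e.g. "ORM^O01"
        "control_id": msh[9],     # MSH-10
    }
```

A real pipeline would go on to validate required segments and normalize each clinic’s field-level quirks, which is exactly what the adaptive HL7 layer from our previous post handles.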
Application routing with load balancing
Once traffic is inside the environment, an Application Load Balancer (ALB) becomes the main entry point for the backend. CloudFront forwards API calls to the ALB, HL7 driven processes call internal endpoints, and the ALB routes each request to the right target group.
Target groups track which Amazon Elastic Container Service (ECS) tasks are healthy at any moment. When a new version of a service is deployed, tasks are added and drained automatically, so the platform can roll out updates while radiologists keep reading. The routing logic stays simple, which makes the system easier to operate under real clinical load.
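To show how little logic that routing layer actually needs, here is what path-based listener rules boil down to, modeled as a plain lookup. The path patterns and service names are hypothetical; in production the ALB evaluates equivalent rules by priority and forwards each request to the matching ECS target group.

```python
# Sketch of ALB-style path-based routing: first matching rule wins,
# mirroring listener rule priority; unmatched paths fall through to a
# default target group.
from fnmatch import fnmatch

RULES = [
    ("/api/hl7/*", "hl7-processing"),
    ("/api/worklist/*", "worklist-service"),
    ("/api/auth/*", "auth-service"),
]
DEFAULT_TARGET = "frontend-service"

def resolve_target_group(path: str) -> str:
    for pattern, target in RULES:
        if fnmatch(path, pattern):
            return target
    return DEFAULT_TARGET
```

Keeping the rule set this small is a deliberate choice: when routing is trivial to reason about, deployments and incident debugging stay trivial too.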
Microservices and messaging inside Amazon ECS
The core of the platform runs as a set of containerized services in an ECS cluster inside private subnets. Each service owns a clear piece of behavior, for example:
- handling login and session checks
- building and updating worklists
- processing and parsing HL7 messages
- managing notifications and background tasks
These services talk to each other over the internal network and use NATS as a message broker for fast, event-based communication. When an HL7 message changes the status of an exam, that update flows through NATS to the services that care about worklists and UI updates. This keeps the system responsive without turning the codebase into one large monolith.
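The event flow above can be sketched with a tiny in-process bus. The subject names (such as `exam.status.updated`) and the matching rules (`*` for one token, `>` for the remainder) mirror NATS conventions, but this bus is purely illustrative; the real services use a NATS client over the internal network.

```python
# In-process sketch of subject-based publish/subscribe, NATS-style.
from collections import defaultdict

def subject_matches(pattern: str, subject: str) -> bool:
    p_tokens, s_tokens = pattern.split("."), subject.split(".")
    for i, p in enumerate(p_tokens):
        if p == ">":  # ">" matches the rest of the subject
            return True
        if i >= len(s_tokens) or (p != "*" and p != s_tokens[i]):
            return False
    return len(p_tokens) == len(s_tokens)

class Bus:
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, pattern, callback):
        self.subs[pattern].append(callback)

    def publish(self, subject, msg):
        for pattern, callbacks in self.subs.items():
            if subject_matches(pattern, subject):
                for cb in callbacks:
                    cb(msg)
```

When the HL7 service publishes to `exam.status.updated`, the worklist and notification subscribers both react, and the publisher never needs to know either of them exists.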
Because everything runs in ECS, the team can scale individual services independently and keep noisy workloads, such as HL7 bursts, from affecting user-facing endpoints.
Data storage with Amazon Aurora and RDS
Under the application layer sits the data tier. The main business and workflow data lives in an Aurora cluster. This is where the platform stores exams, their states, configuration and everything else that defines what radiologists see on screen.
Authentication and identity information is kept in a separate relational instance. Splitting these concerns makes access control cleaner and reduces the blast radius of any issue in one of the databases. Both engines are managed services, so failover and backups are handled by AWS instead of custom scripts.
Network security and outbound access
All application services live in private subnets and have no direct internet exposure. When they need outbound access, for example to pull container images from Amazon Elastic Container Registry (ECR) during deployment, they go through a NAT gateway. From the outside nothing can reach those services through that path, which is exactly what you want in a system that processes protected health information.
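In infrastructure-as-code terms, that one-way path comes down to a single default route. The fragment below is an illustrative CloudFormation snippet, not the team’s actual template; the resource names are placeholders.

```yaml
# Private subnets reach ECR and other AWS endpoints only through the
# NAT gateway. No inbound route exists, so nothing outside can
# initiate a connection to the services in these subnets.
PrivateDefaultRoute:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref PrivateRouteTable
    DestinationCidrBlock: 0.0.0.0/0
    NatGatewayId: !Ref NatGateway
```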
Subnets are spread across multiple availability zones. If one zone has trouble, ECS can continue to run tasks in the others and the ALB simply forwards traffic to the healthy targets.
Monitoring and operations with Amazon CloudWatch
Finally, the entire stack reports logs and metrics into CloudWatch. This gives operations and product teams one place to watch HL7 flows, API behavior and service health.
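Structured logging is what makes that single pane of glass useful in practice. The sketch below shows the kind of one-record-per-event JSON logging that CloudWatch Logs Insights can query by field; the event and field names are illustrative assumptions.

```python
# Sketch of structured JSON logging for CloudWatch: one parseable
# record per event, written to stdout where the ECS log driver
# ships it to CloudWatch Logs.
import json
import sys
import time

def log_event(event_type: str, **fields) -> dict:
    record = {"ts": round(time.time(), 3), "event": event_type, **fields}
    sys.stdout.write(json.dumps(record) + "\n")
    return record
```

With records in this shape, an operator can filter on `event` and `control_id` to follow a single HL7 message from ingestion to worklist update across every service it touched.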
What this architecture delivers for the business
This AWS foundation gives the platform exactly what a commercial healthcare product needs. It stays available even during zone failures or deployments, so radiologists never lose their worklists. It scales cleanly as new clinics join or message volumes spike, which means the company can grow without redesigning its backend.
Sensitive protected health information (PHI) stays secure thanks to private networking, isolated services and a dedicated VPN path for HL7. And because the system handles each clinic’s unique HL7 flavor without special engineering work, onboarding new customers becomes a predictable process.
In short, this architecture turns the product from a single site solution into a repeatable, secure and market ready platform.
Let’s talk about your healthcare architecture
If you are working on your own digital health product or preparing to integrate clinical systems, we are always happy to compare notes. Our team at ABCloudz builds platforms that handle complex data flows, survive messy integrations and stay reliable when real clinicians depend on them.
If you want to design an architecture that scales cleanly, protects sensitive data and keeps your team focused on product features instead of infrastructure fires, reach out to us. We would love to hear what you are building next.