If you ask clinicians what slows down post-acute care, they usually blame paperwork or a portal that refuses to load at the worst possible moment. If you ask the engineers, they point at a maze of VPNs, legacy APIs, and integrations held together by hope and good intentions.

One of our customers set out to fix that. Their team is building a post-acute care platform that helps hospitals hand off patients to home health agencies without the usual chaos. Doctors sign orders digitally, agencies see real updates, and nobody has to chase a missing PDF for two days.

It feels simple on the surface, but behind the scenes the platform has to deal with a harder reality. It processes sensitive documents, ingests HL7 from multiple partners, runs event-based workflows, and stays compliant with HIPAA and SOC 2 while clinicians use it in real time.

This is where the AWS story gets interesting. In this post we will walk through the backend architecture that keeps this product running: how CloudWatch continuously collects signals that feed autoscaling decisions for ECS workloads, how containerized services and Lambda functions share responsibilities, how we rely on RDS and Redis for durability and speed, and how security tools like WAF, KMS, CloudTrail and Secrets Manager keep auditors relaxed instead of worried.

If you are building your own digital health product, planning a post-acute care platform, or just trying to design an AWS backend that will not fall over the first time traffic spikes, this post is for you.

What this platform actually does

Before we jump into the AWS layer cake, here is the short version of what the platform actually does in the day-to-day chaos of post-acute care. Hospitals use it to hand off patients to home health agencies, doctors sign orders digitally, and care teams get the information they need without chasing email threads or outdated forms.

Under the hood the platform coordinates a lot more than documents. It ingests HL7 messages from different partners, runs event-driven workflows that notify agencies the moment something changes, tracks the real-time status updates clinicians rely on, supports secure communication across teams, and enforces access rules that have to behave correctly under HIPAA. It also processes clinical documents through a secure signing flow and makes the final results available through a protected API.

All of this creates real pressure on the backend. It needs private networking for clinical data, fast HL7 ingestion, durable storage for patient and workflow records, caching for speed, container workloads that grow when traffic jumps, and monitoring that shows what is happening instead of adding noise.

In short, the platform needed an AWS foundation that feels calm under load, scales without drama and keeps sensitive data safe at all times. The next chapter breaks down how we designed exactly that.

The AWS backend that keeps everything moving

When you open the architecture behind this post-acute care platform, you do not find a single giant service doing everything. What you find instead is a calm, layered system where each part has one job and does it well. Here is how the whole thing fits together.


User access and the public edge

Everything starts the moment a clinician loads the web app. Route 53 points them to CloudFront, which serves the static UI straight from S3. This combination gives the frontend the sort of speed you want in healthcare, where nobody has time for a spinning loader.

A WAF sits in front of CloudFront and blocks the usual parade of bots, scanners and strange requests that every healthcare API attracts the moment it goes live. It is a quiet hero in this story, removing noise before it reaches anything sensitive.

The VPC and routing layer

Once traffic passes the edge, it lands inside a VPC that looks clean and predictable. Public subnets handle the load balancers. Private subnets handle everything that touches PHI (Protected Health Information).

Two Application Load Balancers route requests to frontend and backend services running in Elastic Container Service (ECS). This separation keeps deployments smooth and helps the team roll out updates without interrupting clinicians in the middle of their workflow.

When ECS tasks need to pull new container images from Elastic Container Registry (ECR), they go through a NAT gateway. Outbound only, nothing inbound. Simple security best practice, enforced by design.

Containers, Lambdas and event-driven workflows

The real magic happens in the compute layer. ECS with Fargate runs the frontend and backend services along with internal components that coordinate workflows, handle business logic and process incoming events. Fargate makes scaling feel automatic because the team never touches servers or thinks about capacity.
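In practice the scaling itself is configured through Application Auto Scaling rather than hand-written, but the proportional math behind target tracking is worth seeing. The sketch below is illustrative only (function name, thresholds and limits are assumptions, not the platform's actual policy): it shows how an average CPU metric maps to a desired Fargate task count.

```python
import math

def desired_task_count(current_tasks: int, avg_cpu_pct: float,
                       target_cpu_pct: float = 60.0,
                       min_tasks: int = 2, max_tasks: int = 20) -> int:
    """Approximate the arithmetic behind target-tracking scaling:
    scale the fleet proportionally to metric / target."""
    if current_tasks == 0:
        return min_tasks
    raw = current_tasks * (avg_cpu_pct / target_cpu_pct)
    # Scale out eagerly (round up), scale in conservatively (round down),
    # mirroring target tracking's bias toward availability.
    next_count = math.ceil(raw) if raw > current_tasks else math.floor(raw)
    return max(min_tasks, min(max_tasks, next_count))
```

The asymmetric rounding is the interesting part: adding a task too early is cheap, while removing one too early can hurt clinicians mid-workflow.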

Next to it lives a small fleet of Lambda functions that take care of tasks better suited for functions than containers. Some process clinical documents, some validate HL7 payloads, and some monitor logs and metrics to drive the CloudWatch alarms that power availability checks and autoscaling decisions. These Lambdas act like helpers around the main services, picking up tasks that should not slow down the core application.
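The post does not show the validation Lambdas themselves, so here is a minimal, hypothetical sketch of the kind of structural checks an HL7 v2 ingestion function might run before a message is queued. Real HL7 validation is considerably stricter; this only illustrates the fail-fast idea.

```python
def validate_hl7(message: str) -> list[str]:
    """Lightweight structural checks on a pipe-delimited HL7 v2 message.
    Returns a list of problems; an empty list means 'safe to queue'."""
    errors = []
    # HL7 v2 separates segments with carriage returns.
    segments = [s for s in message.replace("\r\n", "\r").split("\r") if s]
    if not segments or not segments[0].startswith("MSH|"):
        errors.append("message must start with an MSH segment")
        return errors
    if len(segments[0].split("|")) < 12:
        errors.append("MSH segment is missing required fields")
    for i, seg in enumerate(segments):
        # Every segment opens with a 3-letter type followed by the field separator.
        if len(seg) < 4 or seg[3] != "|":
            errors.append(f"segment {i} lacks a 3-letter type followed by '|'")
    return errors
```

Running cheap checks like these before the queue keeps malformed partner traffic from ever reaching the core workflow services.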

Simple Queue Service (SQS) and EventBridge complete the loop. HL7 messages arriving from partners flow through the private VPN and can land in a queue for safe, asynchronous processing. EventBridge coordinates events between services without forcing tight coupling. The result is a system that reacts to changes instantly but stays stable under load spikes.
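One consequence of this design is worth spelling out: standard SQS queues deliver messages at least once, so consumers must tolerate duplicates. A minimal sketch of that idempotency pattern, with an in-memory set standing in for whatever durable dedupe store a real service would use (the class and its names are illustrative, not the platform's code):

```python
import hashlib

class IdempotentProcessor:
    """Consumer-side dedupe: fingerprint each message body and act
    only on fingerprints we have not seen before."""

    def __init__(self):
        self._seen = set()   # in production: a durable store, not process memory
        self.processed = []

    def handle(self, body: str) -> bool:
        key = hashlib.sha256(body.encode()).hexdigest()
        if key in self._seen:
            return False     # duplicate delivery, safely ignored
        self._seen.add(key)
        self.processed.append(body)
        return True
```

With handlers written this way, a redelivered HL7 message or retried event updates the system once, no matter how many times it arrives.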

Data storage and caching

The data layer stays intentionally minimal. An encrypted RDS instance in a Multi-AZ setup holds patient and workflow records. A Multi-AZ Redis cluster provides fast lookups for data the application needs constantly.
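The post does not specify the caching strategy, but a common pattern for Redis in front of RDS is cache-aside: read from the cache, fall back to the database on a miss, then populate the cache with a TTL. A sketch under that assumption, with a plain dict standing in for Redis and an injected function standing in for the RDS query:

```python
import time

class CacheAside:
    """Cache-aside read path: cache hit returns immediately; a miss
    queries the database and writes the result back with a TTL."""

    def __init__(self, db_lookup, ttl_seconds: float = 300.0):
        self._db_lookup = db_lookup   # hypothetical stand-in for an RDS query
        self._ttl = ttl_seconds
        self._cache = {}              # stand-in for Redis
        self.db_hits = 0

    def get(self, key):
        entry = self._cache.get(key)
        if entry is not None and entry[1] > time.time():
            return entry[0]                    # cache hit
        value = self._db_lookup(key)           # miss: go to the database
        self.db_hits += 1
        self._cache[key] = (value, time.time() + self._ttl)
        return value
```

The TTL is the safety valve: even if an invalidation is missed, stale data ages out on its own.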

Both are wrapped in security groups and backed by Key Management Service (KMS) encryption. It is not flashy, but it is the kind of configuration that survives audits, traffic bursts and unexpected growth without rewriting anything.

Secure clinical integrations

Partners connect through a site-to-site VPN that delivers confidential HL7 messages directly into the private side of the VPC. There is no public HL7 endpoint. There is no shared door between web traffic and clinical systems. This closed tunnel is one of the reasons the platform clears HIPAA checks with fewer headaches.

Observability, governance and secrets

Everything that happens in this environment leaves a trail. CloudWatch collects logs, metrics and signals that inform scaling decisions or alert the team when something needs attention. CloudTrail records every sensitive action in the account for SOC 2 and HIPAA reviews.
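One common way to feed CloudWatch metrics without extra API calls is the Embedded Metric Format, where a service simply prints a structured JSON log line and CloudWatch extracts the metric from the log stream. Whether this platform uses EMF is an assumption on our part; the sketch below just shows the shape of such a record.

```python
import json
import time

def emf_record(service: str, metric: str, value: float,
               unit: str = "Milliseconds") -> str:
    """Build a CloudWatch Embedded Metric Format log line: the _aws
    block tells CloudWatch which top-level keys are metrics."""
    return json.dumps({
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": service,
                "Dimensions": [["Service"]],
                "Metrics": [{"Name": metric, "Unit": unit}],
            }],
        },
        "Service": service,   # dimension value referenced above
        metric: value,        # the metric datum itself
    })
```

Printed to stdout from an ECS task or Lambda, a line like this becomes both a searchable log entry and a plottable, alarmable metric.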

Secrets Manager stores credentials for databases, signing workflows and external integrations. Systems Manager keeps configuration consistent across the whole platform. This is the part of the architecture nobody celebrates but everyone depends on.
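A detail worth sketching here: services should not call Secrets Manager on every request, but they also cannot cache credentials forever or rotation breaks them. A short TTL cache balances the two. This is a hypothetical sketch, with an injected fetcher standing in for the actual Secrets Manager call:

```python
import time

class SecretCache:
    """Cache a fetched secret briefly, so rotated credentials are
    picked up automatically without a fetch on every request."""

    def __init__(self, fetch, ttl_seconds: float = 60.0):
        self._fetch = fetch       # stand-in for a Secrets Manager lookup
        self._ttl = ttl_seconds
        self._value = None
        self._expires = 0.0
        self.fetch_count = 0

    def get(self):
        now = time.time()
        if self._value is None or now >= self._expires:
            self._value = self._fetch()   # refresh from the source of truth
            self.fetch_count += 1
            self._expires = now + self._ttl
        return self._value
```

A TTL of a minute or so means a rotated database password propagates to every task within that window, with no redeploys and no per-request latency cost.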

How the layers work together

Viewed as one system, the architecture is a set of calm, independent layers: a fast and protected edge, a predictable routing core, elastic container workloads, flexible Lambda helpers, secure data storage and tightly controlled networking for clinical integrations.

The end result is simple for users and reliable for engineering teams. The platform stays fast for clinicians, safe for patient data and stable even on days when every hospital decides to discharge half their patients before lunch.

What this AWS foundation delivers for the business

This backend gives the customer a stable base they can actually run a post-acute operation on.

  • Consistent workflows for partner clinics. HL7, signing and document flows behave the same way regardless of who the partner is, which turns onboarding into a repeatable process instead of a custom project.
  • Lower operational risk. Clear boundaries between services, private networking and predictable autoscaling reduce the number of issues that normally surface during high-volume days.
  • Transparent system behavior. With CloudWatch, CloudTrail and structured event flows, the team sees what is happening inside the platform without digging through logs or guessing why something slowed down.
  • A backend that does not need to be rebuilt. The structure supports new partners, new message volumes and new workflow variations without redesigning the core system.

In short, the architecture replaces a fragile handoff process with a platform the customer can grow, support and trust in real clinical use.

Let’s talk about your healthcare architecture

If you are working on a digital health product or trying to design an AWS backend that stays reliable under real clinical load, our team at ABCloudz is always happy to compare notes. We build systems that survive messy integrations, keep sensitive data protected and stay fast when people on the clinical side depend on them.

If you are planning your next steps and want to discuss your architecture or integration strategy, reach out to us. Tell us about your goals and we will help you turn them into a solid technical plan.

Ready to start the conversation?