
# System overview

A high-level view of the Clinloop runtime — the dataflow from a facility posting a shift to a worker accepting it on mobile.

```mermaid
flowchart LR
  subgraph Client
    F[Facility portal]
    A[Agency portal]
    AD[Admin dashboard]
    M[Mobile app]
  end
  subgraph Edge
    CF[Cloudflare]
  end
  subgraph AWS["AWS · ca-central-1"]
    ALB[ALB]
    API[NestJS API · ECS Fargate]
    PG[(Postgres · RDS Multi-AZ)]
    R[(Redis · ElastiCache)]
    S3[(S3 · documents)]
    SES[(SES · email)]
  end
  subgraph External
    TW[Twilio SMS]
    CR[Certn background checks]
    EXP[Expo Push]
  end
  F & A & AD & M --> CF --> ALB --> API
  API <--> PG
  API <--> R
  API --> S3
  API --> SES
  API --> TW
  API --> CR
  API --> EXP
```

## Highlights

- **Two-tier booking** — the preferred agency pool gets a head start, then the marketplace opens. Race conditions are handled by a Redis distributed lock in `BookingService`.
- **Background work** — verification, push fan-out, and webhook intake all flow through BullMQ queues backed by ElastiCache Redis.
- **Audit trail** — every PHI read/write or auth event writes to `audit_events`. The table is RLS-restricted to append-only at the DB level. See Audit / Overview.
- **Data residency** — every AWS resource lives in ca-central-1 to satisfy Ontario's PHIPA requirements.
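The booking lock follows the standard Redis pattern (SET-if-not-exists with a TTL, compare-and-delete on release). The sketch below illustrates those semantics with an in-memory `Map` standing in for Redis so it runs self-contained; the real `BookingService` is not shown here, and all names (`ShiftLock`, key format, TTL) are illustrative assumptions.

```typescript
// Minimal sketch of the shift-booking lock. Against real Redis this is
// `SET lock:shift:<id> <token> NX PX <ttl>` to acquire, and an atomic
// compare-and-delete (usually a Lua script) to release. An in-memory Map
// stands in for Redis here so the semantics are visible end to end.
import { randomUUID } from "crypto";

type Store = Map<string, { value: string; expiresAt: number }>;

class ShiftLock {
  constructor(private store: Store = new Map()) {}

  // Returns a unique token on success, or null if the lock is held.
  acquire(shiftId: string, ttlMs = 5000): string | null {
    const key = `lock:shift:${shiftId}`;
    const now = Date.now();
    const existing = this.store.get(key);
    if (existing && existing.expiresAt > now) return null; // already held
    const token = randomUUID();
    this.store.set(key, { value: token, expiresAt: now + ttlMs });
    return token;
  }

  // Release only if we still hold the lock: the token check prevents a
  // slow worker from deleting a lock that has since expired and been
  // re-acquired by someone else.
  release(shiftId: string, token: string): boolean {
    const key = `lock:shift:${shiftId}`;
    const existing = this.store.get(key);
    if (!existing || existing.value !== token) return false;
    this.store.delete(key);
    return true;
  }
}

// Two requests race to book the same shift: only the first acquirer wins.
const lock = new ShiftLock();
const first = lock.acquire("shift-123");
const second = lock.acquire("shift-123");
console.log(first !== null, second === null);
```

The TTL matters: if the API task crashes mid-booking, the lock expires on its own instead of leaving the shift permanently unbookable.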
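The background-work pattern is: the API enqueues a job and returns immediately, and a worker drains the queue out of band. In production this is BullMQ on ElastiCache Redis; the sketch below uses a tiny in-memory queue so the shape is runnable without Redis, and every name in it (`InMemoryQueue`, `shift-posted`, the payload fields) is an illustrative assumption rather than the real job schema.

```typescript
// Hedged sketch of the enqueue/drain split used for verification, push
// fan-out, and webhook intake. A minimal in-memory queue stands in for
// BullMQ so the pattern is self-contained.
type Job<T> = { name: string; data: T };

class InMemoryQueue<T> {
  private jobs: Job<T>[] = [];

  // API-side: record the job and return immediately.
  add(name: string, data: T): void {
    this.jobs.push({ name, data });
  }

  // Worker-side: process everything currently queued, in order.
  async drain(handler: (job: Job<T>) => Promise<void>): Promise<number> {
    let processed = 0;
    while (this.jobs.length > 0) {
      const job = this.jobs.shift()!;
      await handler(job);
      processed++;
    }
    return processed;
  }
}

// The API enqueues a push fan-out instead of calling Expo inline...
const pushQueue = new InMemoryQueue<{ workerIds: string[] }>();
pushQueue.add("shift-posted", { workerIds: ["w1", "w2", "w3"] });

// ...and a worker delivers the notifications asynchronously.
pushQueue.drain(async (job) => {
  console.log(`push ${job.name} to ${job.data.workerIds.length} workers`);
});
```

Keeping third-party calls (Expo, Twilio, Certn) behind a queue means a slow or flaky provider delays the worker, not the request path.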

This page is a placeholder — full architecture writeup lands in Phase 3 of the docs build (see docs/docs-site-plan.md).