Stateless Layered System – what it is, why it matters, and how to build one
1. Core idea
Every request carries everything the server needs; the server forgets it afterwards.
Any instance of any layer can finish the work—no “memory” between calls.
This single rule is applied recursively to every layer (presentation, API, service, bus, cache, DB) so the whole stack becomes:
- Horizontally elastic – add/remove boxes at any tier without affinity.
- Resilient – crash a box, traffic just goes to the next one.
- Deploy-friendly – zero-downtime rolling upgrades, blue/green, canary.
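The rule is easy to see in code. Below is a minimal sketch of a stateless handler: everything it needs arrives with the request, and nothing is kept in process memory afterwards (the token format and helper names are illustrative, not a real JWT implementation):

```javascript
// A stateless handler is effectively a pure function of the request.
// decodeToken stands in for real JWT verification; here the token simply
// carries the user id in base64 (an assumption of this sketch).
function decodeToken(token) {
  const [userId] = Buffer.from(token, 'base64').toString().split(':');
  return { userId };
}

function handleRequest(req) {
  // All context comes from the request itself, none from server memory,
  // so any instance behind the load balancer can answer it.
  const { userId } = decodeToken(req.headers.authorization);
  return { status: 200, body: `hello ${userId}` };
}

const req = {
  headers: { authorization: Buffer.from('alice:123').toString('base64') },
};
console.log(handleRequest(req).body); // hello alice
```

Because the handler touches no shared mutable state, two replicas given the same request produce the same answer — which is exactly what makes round-robin load balancing safe.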
2. Stateless ≠ “no state anywhere”
State exists, but it is externalised to dedicated, often replicated, stores:
| Where the state lives | Examples |
|---|---|
| Shared database | PostgreSQL, MySQL, Firestore, DynamoDB |
| Distributed cache | Redis, Memcached, Aerospike |
| Object / blob store | S3, GCS, Azure Blob |
| Client | JWT, cookie, local-storage, mobile key-chain |
All application tiers treat these stores as black-box services; they never keep local copies longer than the lifetime of one request.
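As a sketch of what "externalised, black-box state" looks like from the application tier, the handler below reads and writes a shared store and keeps nothing locally between calls (a `Map` stands in for Redis or a database; a real client would expose a similar get/set surface):

```javascript
// sharedStore stands in for Redis/Firestore: state lives there, not in
// any application instance. Swapping the Map for a real client is the
// only change a production version would need (assumption of the sketch).
const sharedStore = new Map();

function handleIncrement(counterId) {
  // Read, modify, write back – no local copy outlives this request.
  const next = (sharedStore.get(counterId) ?? 0) + 1;
  sharedStore.set(counterId, next);
  return next;
}

console.log(handleIncrement('visits')); // 1
console.log(handleIncrement('visits')); // 2 – any instance sees the same store
```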
3. Layered view of a stateless system
┌-------------┐
│ Browser │ ← state in cookie / JWT / local-storage
└-----┬-------┘
│ HTTPS
┌-----┴-------┐
│ CDN/Edge │ ← purely static cache, no origin session
└-----┬-------┘
│
┌-----┴-------┐
│ API GW │ ← validates token, adds headers, no session table
└-----┬-------┘
│
┌-----┴-------┐
│ Service A │ ← stateless container; per-request token → user id
└-----┬-------┘
│
┌-----┴-------┐
│ Service B │ ← ditto; can call A or DB; still stateless
└-----┬-------┘
│
┌-----┴-------┐
│ DB / Cache │ ← ONLY place that remembers
└-------------┘
Each arrow is an independent, self-descriptive HTTP/gRPC call carrying auth, correlation-id, retry-token, etc. Any box can be replaced or multiplied without informing the others.
4. Design check-list to stay stateless
- Identify hidden state
  - In-memory maps, static variables, singletons, open WebSocket rooms, file handles.
- Externalise it
  - Move to DB, cache, blob, or push to the client.
- Make APIs idempotent
  - `PUT /users/123` with the same payload always yields the same result → safe to retry.
- Use tokens, not sessions
  - JWT, PASETO, OAuth2 access tokens; include expiry & scopes.
- Keep messages self-descriptive
  - Content-Type, cache headers, idempotency-key, retry-after.
- Let layers be opaque
  - Client can’t tell whether it hit the CDN, API gateway, or origin; intermediaries can be added freely.
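The "make APIs idempotent" item can be sketched in a few lines: a PUT overwrites, so a proxy retrying the same payload cannot corrupt anything (the in-memory `Map` stands in for the database):

```javascript
// Idempotent PUT: same id + same payload → same end state, no matter how
// many times an intermediary retries the call.
const users = new Map(); // stand-in for the real users table

function putUser(id, payload) {
  users.set(id, payload); // overwrite, never append
  return { status: 200, body: users.get(id) };
}

const first = putUser('123', { name: 'Ada' });
const retry = putUser('123', { name: 'Ada' }); // duplicate delivery
console.log(JSON.stringify(first) === JSON.stringify(retry)); // true
```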
5. Pay-offs you get
| Benefit | How stateless layered design delivers |
|---|---|
| Horizontal scale | Add identical containers; LB in round-robin—no sticky sessions. |
| Rolling upgrades | Replace entire tier; inflight requests retry on new pods. |
| Fault-tolerance | Instance dies → traffic rerouted; no data lost because none was inside. |
| Geo-distribution | Deploy same container in multiple regions; share global DB/cache. |
| Simpler reasoning | Each request has one-path logic; debug by looking at single log line. |
6. Trade-offs to watch
| Issue | Mitigation |
|---|---|
| Larger per-request payload (token, headers) | HTTP/2 header compression; binary tokens. |
| Repeated DB/cache lookups | Edge caching, Bloom filters, materialised views. |
| Chatty interactions | GraphQL, gRPC streaming, batch endpoints. |
| Sequence-sensitive workflows | Saga pattern, workflow engine (Temporal, Camunda) keeps state in its own store; app services remain stateless. |
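For the "chatty interactions" row, a batch endpoint is the simplest mitigation: one round-trip replaces N. A sketch (the response shape is illustrative):

```javascript
// One request fetches many resources instead of one call per id.
const products = new Map([['1', 'keyboard'], ['2', 'mouse']]);

function batchGet(ids) {
  // Unknown ids come back as null rather than failing the whole batch.
  return ids.map(id => ({ id, name: products.get(id) ?? null }));
}

console.log(batchGet(['1', '2', '9']));
// [{id:'1',name:'keyboard'}, {id:'2',name:'mouse'}, {id:'9',name:null}]
```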
7. Quick recipe: Angular + Firebase is already stateless by default
- Angular keeps auth token in memory/local-storage; every call carries it.
- Firebase Hosting → Google global CDN (stateless edge).
- Cloud Functions/Firestore autoscale; functions themselves are stateless.
- No sticky sessions, no server memory, nothing to replicate—just add more regions.
=> You are already living the stateless layered dream.
Bottom line:
A stateless layered system is nothing more (and nothing less) than applying the “no memory between requests” rule at every layer, externalising whatever state you still need to specialised, often already-existing services. Do that once and your architecture becomes a rubber-band: stretch it, break pieces, replace them—users never notice.
Layered System
“Layered System” is one of the six REST architectural constraints.
In plain language it says:
“A client must not be able to tell whether it is talking to the end server or to an intermediary (proxy, load-balancer, cache, API-gateway, SSL-terminator, etc.). Each layer only sees the layer immediately below it and has no knowledge of any other layers.”
How that maps to your Node-Express-MySQL-Angular stack
- You already have the layers:
  Angular (client) ⇄ Express (API) ⇄ MySQL
  …but you can (and usually should) insert extra transparent layers in between without changing a line of browser or server code.
- Typical transparent layers you get “for free” when you obey the rule:
  - Reverse proxy (Nginx, Caddy, Traefik, Cloudflare) – terminates TLS, serves static files, compresses, rate-limits.
  - Load balancer / API gateway (NGINX, HAProxy, AWS ELB, Ambassador, Kong) – routes `/api/users` to user-service and `/api/orders` to order-service, adds auth headers, etc.
  - Caching layer (Redis, CDN, Varnish, CloudFront) – returns `304 Not Modified` or cached JSON without hitting Express.
  - WAF / DDoS filter – rejects malicious traffic before it reaches Node.
- What you must do in Express so the constraint stays satisfied:
  - Stateless requests: no hidden server-side session state – send a JWT or an opaque token in the `Authorization` header; keep identity in the token, not in MemoryStore.
  - Idempotent semantics: `GET /api/products/7` always means the same thing, regardless of how many proxies sit in the middle.
  - Correct cache headers, so any cache layer can store or re-validate:

    ```js
    res.set({
      'Cache-Control': 'private, max-age=60',
      'ETag': '"33a64df551"',
      'Vary': 'Accept-Encoding'
    });
    ```

  - Relative URLs inside JSON (or absolute URLs built from the `Host` header) so the same payload works behind `https://api.example.com` or `https://localhost:3000`.
  - Don’t trust the TCP source IP for security; read `X-Forwarded-For` only after you whitelist which proxies may send it (`app.set('trust proxy', 'loopback')`).
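The trust-proxy point is worth a sketch: honour `X-Forwarded-For` only when the directly connected peer is a proxy you whitelisted. The function below mirrors the idea in simplified form (it is not Express's actual implementation):

```javascript
// Derive the client IP the way a trust-proxy setting does: believe
// X-Forwarded-For only when the TCP peer is a proxy we whitelisted.
function clientIp(peerIp, xForwardedFor, trustedProxies) {
  if (!trustedProxies.includes(peerIp) || !xForwardedFor) return peerIp;
  // Proxies append to the right; the left-most entry is the original client.
  return xForwardedFor.split(',')[0].trim();
}

console.log(clientIp('127.0.0.1', '203.0.113.7, 127.0.0.1', ['127.0.0.1']));
// 203.0.113.7 – peer is trusted, header honoured
console.log(clientIp('198.51.100.9', '203.0.113.7', ['127.0.0.1']));
// 198.51.100.9 – untrusted peer, header ignored
```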
- Angular (client) rules:
  - Only the uniform interface (HTTP verbs + standard status codes) is used; no assumption that “my server lives on port 3000”.
  - All URLs come from `environment.ts`, so you can switch from `localhost:3000` to `https://api.example.com` without recompiling logic.
  - CORS is configured once at the edge (Nginx or gateway) instead of baking CORS origins into the SPA.
- MySQL visibility:
  - The database is never exposed to the client; it is one more hidden layer behind Express.
  - Connection pooling, read replicas, or a separate analytics replica can be swapped in without the client noticing.
Benefits you gain by keeping the constraint
- Horizontal scale: spin up ten Express containers behind a load-balancer—no client change.
- Zero-downtime deploy: blue/green, rolling updates, or canary releases behind the gateway.
- Security: attackers hit the edge proxy first; your Node process is never directly exposed to them.
- Performance: CDN serves static Angular build and caches public API responses.
Quick checklist for your project
☑ Express is stateless (JWT, not MemoryStore).
☑ Cache & CORS headers are explicit.
☑ All links/URLs are either relative or derived from Host header.
☑ You run Nginx (or similar) in front of Node, even on localhost.
☑ You read X-Forwarded-* headers only after setting trust proxy.
Keep those habits and any proxy, gateway, cache, or load-balancer you add later will be completely transparent to both your Angular client and your Express server—exactly what the “Layered System” constraint demands.