When you deploy a proxy, load balancer, API gateway, or ingress in front of services, you’re making a decision that is far more than technical:
Where does TLS actually end?
That single choice defines your security boundaries, your routing and inspection capabilities, and your observability.
Most discussions frame this as:
TLS Termination vs TLS Passthrough
But in real systems, there’s a third pattern that appears constantly:
TLS termination at the proxy, then plaintext internally
Let’s walk through all three — and, more importantly, the thinking process behind choosing.
In TLS passthrough, the proxy does not decrypt traffic.
Client → Backend = one TLS session
The proxy forwards encrypted bytes. It operates mostly at Layer 4 (TCP).
This is a secure pipe, not an application-aware component.
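The passthrough data path can be sketched as plain TCP byte-copying with no inspection. This is a minimal illustration, not a production proxy — ports and names are placeholders, and real passthrough proxies typically also peek at the TLS SNI to choose a backend:

```python
import socket
import threading

def relay(src, dst):
    # Copy raw bytes one way; the proxy never parses or decrypts them.
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)  # propagate half-close to the other leg
        except OSError:
            pass

def passthrough(listen_port, backend_host, backend_port):
    # Layer-4 passthrough: accept a TCP connection, dial the backend,
    # and pipe bytes both ways. If the client speaks TLS, the handshake
    # and session run end-to-end between client and backend.
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", listen_port))
    server.listen()
    while True:
        client, _ = server.accept()
        backend = socket.create_connection((backend_host, backend_port))
        threading.Thread(target=relay, args=(client, backend), daemon=True).start()
        threading.Thread(target=relay, args=(backend, client), daemon=True).start()
```

Note that nothing in this code is TLS-specific: the proxy is just a pipe, which is exactly the point.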
In TLS termination, the proxy decrypts the client's TLS session.
Now you have two legs:
- Client → Proxy: a TLS session that terminates at the proxy
- Proxy → Backend: a separate connection, either plaintext or re-encrypted TLS
The proxy now operates at Layer 7 and understands the application protocol.
It’s no longer “just networking” — it becomes part of your application platform.
This decision is not “performance vs security.”
It is:
Do you want your proxy to be a transport device or an application control point?
| Role of Proxy | You Should Lean Toward |
|---|---|
| Network plumbing | TLS Passthrough |
| Security + routing brain | TLS Termination |
Once TLS ends at the proxy, it can see HTTP/gRPC/etc. That unlocks major capabilities.
Termination lets the proxy route by path (e.g., /api, /admin). Passthrough cannot do this.
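Path-based routing at a terminating proxy can be sketched as a prefix table; the service names and URLs here are hypothetical:

```python
# Hypothetical route table mapping path prefixes to internal backends.
ROUTES = {
    "/api": "http://api-service:8080",
    "/admin": "http://admin-service:8080",
}
DEFAULT_BACKEND = "http://web-service:8080"

def pick_backend(path: str) -> str:
    # Only possible after termination: the proxy must see the
    # decrypted request line to know the path at all.
    for prefix, backend in ROUTES.items():
        if path == prefix or path.startswith(prefix + "/"):
            return backend
    return DEFAULT_BACKEND
```

With passthrough there is no equivalent: the proxy sees only ciphertext, so the request path is invisible.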
With passthrough, the proxy is blind to attacks at the HTTP layer.
Termination enables WAF rules, request inspection, and JWT validation at the edge.
Passthrough gives you TCP metrics, not application visibility.
Termination turns your proxy into a smart application-aware edge.
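Because a terminating proxy sees the decrypted request, it can label telemetry by method, path, and status rather than by TCP connection. A toy sketch of that idea (the handler signature and field names are illustrative, not any particular proxy's API):

```python
import time

def instrument(handler):
    # Wrap a request handler and record per-request HTTP metrics.
    # A passthrough proxy could record only bytes and connections,
    # never method/path/status, because it never sees them.
    records = []

    def wrapped(method, path):
        start = time.monotonic()
        status = handler(method, path)
        records.append({
            "method": method,
            "path": path,
            "status": status,
            "duration_s": time.monotonic() - start,
        })
        return status

    wrapped.records = records
    return wrapped
```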
It now sees decrypted requests: headers, tokens, and bodies.
Compromise the proxy, and you expose everything.
You now manage certificates and private keys at the proxy.
Your proxy is now part of your security infrastructure, not just networking.
Only the client and service can see the data.
The proxy never sees plaintext.
“Traffic is encrypted in transit” is easier when no middlebox decrypts it.
With mTLS, the backend sees the real client cert — not a proxy-asserted identity.
| Capability | Passthrough |
|---|---|
| Path-based routing | ❌ |
| WAF | ❌ |
| JWT validation at edge | ❌ |
| HTTP metrics | ❌ |
| API gateway behavior | ❌ |
Your proxy becomes secure, but blind.
This is extremely common:
- Client → Proxy: TLS
- Proxy → Backend: HTTP (plaintext)
It’s simple. Backends don’t need certs. Historically it improved performance.
But this model moves your trust boundary.
You’ve now said:
“Inside the network, traffic doesn’t need encryption.”
That assumption is increasingly unsafe.
If an attacker compromises any internal host, container, or network segment, they may read internal traffic as it crosses the wire.
TLS protected you from the internet — not from the breach after entry.
Backends often trust proxy-set headers such as `X-Forwarded-For` and `X-Forwarded-Proto`. If services are reachable internally via plain HTTP, an attacker can call them directly and spoof those headers unless access is tightly restricted.
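A quick illustration of why these headers prove nothing on an open plaintext network: any caller who can reach the backend can set them. The host and port here are placeholders:

```python
import http.client

# Nothing distinguishes these from headers set by the real proxy.
SPOOFED_HEADERS = {
    "X-Forwarded-For": "10.0.0.1",    # pretend to be an internal source IP
    "X-Forwarded-Proto": "https",     # pretend the original request used TLS
}

def call_backend_directly(host, port, path="/"):
    # Bypass the proxy entirely and talk plain HTTP to the backend,
    # asserting whatever forwarding headers we like.
    conn = http.client.HTTPConnection(host, port, timeout=5)
    conn.request("GET", path, headers=SPOOFED_HEADERS)
    return conn.getresponse()
```

The defenses are network-level (backend reachable only from the proxy) or cryptographic (mTLS), not the headers themselves.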
Modern security assumes:
The internal network is hostile.
Plaintext internal traffic contradicts that assumption.
Auditors increasingly ask:
“Is traffic encrypted between internal services?”
“It’s internal” is no longer a strong answer.
When is this acceptable? Only in tightly constrained contexts, such as a fully isolated network segment or legacy backends that cannot manage certificates.
Even then, it’s a risk acceptance, not best practice.
Most modern systems choose:
- Client → Proxy: TLS
- Proxy → Backend: TLS (a new, re-encrypted session)
You get L7 routing, security, and observability at the proxy, plus encryption on the internal leg.
Proxy → Backend uses mutual TLS. Backend only accepts trusted clients.
This protects against internal eavesdropping and against untrusted workloads calling the backend directly.
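One way to express "backend only accepts trusted clients" with Python's stdlib `ssl` module; the file paths are placeholders and this is a sketch of the mTLS idea, not a hardened configuration:

```python
import ssl

def mtls_server_context(cert_file, key_file, client_ca_file):
    # Backend-side context: present our own certificate AND require
    # the connecting client (the proxy) to present one too.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.load_cert_chain(cert_file, key_file)      # the backend's identity
    ctx.load_verify_locations(client_ca_file)     # CA that issued the proxy's client cert
    ctx.verify_mode = ssl.CERT_REQUIRED           # handshake fails without a valid client cert
    return ctx
```

With `CERT_REQUIRED` set, a workload without a certificate signed by the trusted CA cannot even complete the handshake, which is what closes the header-spoofing hole described earlier.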
| Requirement | Best Fit |
|---|---|
| L7 routing/security/observability | TLS Termination |
| Strict end-to-end confidentiality | TLS Passthrough |
| Modern secure platform | Terminate + Re-encrypt (often mTLS) |
| Legacy/simple internal network | Terminate + Plaintext (with risk) |
TLS termination gives you power and visibility. TLS passthrough gives you isolation and simplicity.
Termination + plaintext internally gives you convenience, but weakens internal security and should be justified, not assumed.
If the data would be a major incident if leaked, don’t let it travel unencrypted — even inside your network.
Your TLS architecture is really your trust architecture.