Article Outline
- What SSL termination actually means
- Why one TLS hop is good and two TLS hops are often better
- What Cloudflare is doing in front of a custom server
- Why "Flexible" mode is the wrong mental model for production
- Why your origin server should still speak HTTPS
- How real client IP restoration works behind a proxy
- Why direct-to-origin bypass is a real risk
- How app frameworks behave behind a trusted proxy
- How Nginx should be configured when Cloudflare sits in front
- Why staging and private environments should be gated differently
- A generic implementation example
- What this architecture solves and what it does not solve
The Simple Version
If you run your own server instead of a platform like Vercel, putting Cloudflare in front of Nginx is one of the cleanest ways to improve your deployment.
It gives you:
- a safer public edge
- DDoS and bot filtering before traffic reaches your server
- TLS at the CDN edge
- optional TLS again between Cloudflare and your own server
- a cleaner place to enforce security and traffic policy
That sounds like a lot, but the core idea is simple:
Cloudflare protects the front door. Nginx protects the hallway behind it. Your application lives further inside the building.
That layered model is the real value.
What SSL Termination Actually Means
"Termination" just means "this encrypted connection ends here."
Think of TLS like a sealed envelope.
When a browser connects to a server over HTTPS, the message travels in that sealed envelope. At some point, something has to open it so the request can be routed.
That opening point is where TLS terminates.
If a browser connects straight to Nginx, then Nginx is the termination point:
browser -> HTTPS -> nginx -> HTTP -> application
That is normal.
In private internal networks, plain HTTP after the proxy is often acceptable because the traffic never leaves infrastructure you control. The important thing is understanding where encryption ends and why.
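As a minimal sketch of that single-hop picture, an nginx server block might look like this (the hostname, certificate paths, and the port 3000 upstream are all placeholders):

```nginx
# TLS terminates here: the envelope is opened at nginx,
# and the hop to the local app continues as plain HTTP.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/example/fullchain.pem;
    ssl_certificate_key /etc/ssl/example/privkey.pem;

    location / {
        # Plain HTTP from here inward; acceptable on a private network.
        proxy_pass http://127.0.0.1:3000;
    }
}
```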
Why Two TLS Hops Are Often Better
Once you put Cloudflare in front of your origin server, there are now two separate connections:
browser -> HTTPS -> Cloudflare -> HTTPS -> origin nginx
That means there are two termination points:
- the browser-to-Cloudflare connection
- the Cloudflare-to-origin connection
This is sometimes called double TLS termination, dual TLS hops, or end-to-origin encryption. In Cloudflare's own terminology it corresponds to the "Full" and "Full (strict)" SSL modes.
The main benefit is not buzzword value. The main benefit is that traffic is encrypted on the public internet all the way to your own server, not just to the CDN.
Without that second TLS hop, the proxy might receive HTTPS from the browser and then forward plain HTTP to your server. That is weaker and usually not what you want in production.
What Cloudflare Is Actually Doing
Many beginners think Cloudflare is just "DNS with a dashboard."
That undersells it.
When proxied traffic is enabled, Cloudflare becomes a real network boundary in front of your infrastructure. It can:
- terminate TLS at the edge
- absorb or filter hostile traffic
- enforce challenge flows
- cache eligible content
- forward visitor traffic to your origin
- attach proxy metadata such as the real client IP
In other words, your origin is no longer the first thing every visitor talks to.
That is a big architectural improvement for custom server deployments.
Why "Flexible" Mode Is a Bad Production Default
This is the part that confuses people early on.
Cloudflare offers multiple SSL modes, but only one family of choices really makes sense for serious production use: the ones where your origin also speaks HTTPS.
The mode beginners are often tempted by is "Flexible," because it is the easiest one to make "work" quickly:
browser -> HTTPS -> Cloudflare -> HTTP -> origin
It feels convenient: the browser gets HTTPS, but the origin never does.
That is not the architecture you want to build around.
Why?
- your server is still receiving plain HTTP from the proxy
- origin encryption is gone
- some redirect and cookie behavior gets more fragile
- it teaches the wrong operational habit
For a custom production server, the better model is one of Cloudflare's "Full" modes, ideally "Full (strict)," which also validates the origin certificate:
browser -> HTTPS -> Cloudflare -> HTTPS -> origin
That is the safer habit to normalize from the start.
Why the Origin Should Still Have Its Own Certificate
Even if Cloudflare is already handling public HTTPS, your origin server should still be configured for TLS.
That gives you:
- encrypted proxy-to-origin traffic
- a stronger trust chain between the CDN and your server
- cleaner migration options if your edge setup changes later
- fewer surprises in security tooling and redirects
The certificate at the origin can come from a public CA or from a CDN-specific origin certificate workflow. The exact source matters less than the architecture decision: do not treat the origin as if encryption no longer matters just because a proxy exists in front of it.
Real Client IP Restoration Is a Big Deal
One of the most important implementation details is also one of the easiest to miss.
When Cloudflare proxies traffic, your origin server does not naturally see the visitor's real IP address. It sees Cloudflare's IP unless you deliberately restore the original client identity from the headers Cloudflare sends.
That matters because the real IP affects:
- rate limiting
- audit logs
- abuse detection
- login protection
- geo decisions
- incident response
If you skip this step, your Nginx server can end up thinking all traffic came from the proxy network. That breaks security controls in quiet, annoying ways.
The correct pattern is:
- trust only the proxy IP ranges you explicitly expect
- read the client identity from the proxy header
- do not trust that header from arbitrary internet clients
That last point matters a lot. A header is not proof by itself. It only becomes trustworthy when it came from a trusted proxy.
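The trust rule above can be sketched as a small function: accept the proxy-provided client IP header only when the TCP peer is inside an explicitly trusted proxy range. The `203.0.113.` range and the `cf-connecting-ip` header lookup are illustrative (header names shown lowercased, as Node delivers them), and the string-prefix match stands in for real CIDR matching:

```javascript
// Trusted proxy ranges, as simple string prefixes for illustration.
// A production version would do proper CIDR matching against the
// CDN's published ranges.
const TRUSTED_PROXY_PREFIXES = ["203.0.113."];

function isTrustedProxy(peerIp) {
  return TRUSTED_PROXY_PREFIXES.some((p) => peerIp.startsWith(p));
}

function clientIp(peerIp, headers) {
  // Believe the header only when the connection came from a trusted
  // proxy; otherwise it could be forged, so fall back to the peer IP.
  if (isTrustedProxy(peerIp) && headers["cf-connecting-ip"]) {
    return headers["cf-connecting-ip"];
  }
  return peerIp;
}
```

Note what the fallback does: a random internet client sending the same header gets ignored, which is exactly the "a header is not proof by itself" rule.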
Why Direct-to-Origin Bypass Is a Real Risk
This is one of the easiest mistakes to make in a CDN-backed deployment.
If Cloudflare sits in front of your origin, but the origin is still broadly reachable from the public internet, then some attackers can try to skip the CDN entirely and talk to the server directly.
That matters because bypass traffic can avoid some of the protections you expected from the edge layer.
In plain language:
- Cloudflare can only protect the traffic that actually passes through Cloudflare
- if the origin is directly reachable, some traffic may try to go around the front desk
This is why origin protection matters.
The exact implementation depends on the hosting environment, but the general goal is always the same:
- only accept public web traffic through the trusted proxy path
- reduce direct exposure of the origin as much as possible
- do not treat "hidden IP" as the same thing as real origin protection
That last point is important. Obscurity helps a little. Explicit origin controls help a lot more.
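One common way to express explicit origin control at the nginx layer is an allowlist, as a sketch (the ranges shown are documentation placeholders; a real deployment would use the CDN's published egress ranges and keep them current, and would usually enforce the same rule at the host or network firewall as well):

```nginx
# Accept connections only from the trusted proxy ranges;
# anything arriving directly from the open internet is refused.
allow 203.0.113.0/24;
allow 2001:db8::/32;
deny  all;
```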
How App Frameworks Behave Behind a Trusted Proxy
The proxy layer does not just affect Nginx. It also changes how your application sees the world.
If a framework like Express is running behind one or more proxies, it needs to understand that the request was forwarded. Otherwise the app can make bad assumptions about:
- whether the request was secure
- what the real client IP is
- which host the user requested
- whether secure cookies should be issued
This can create messy bugs that do not immediately look like proxy bugs at all.
For example:
- login rate limits may seem inconsistent
- secure cookies may not behave correctly
- redirects may point to the wrong scheme
- request logs may become much less useful
The fix is not complicated, but it is deliberate: the application must be told which proxies it should trust, and only those proxies.
This is another reason layered infrastructure needs layered configuration. Nginx and Cloudflare can be correct, and the app can still behave incorrectly if it was never taught that it lives behind a proxy chain.
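The idea behind a framework setting like Express's `trust proxy` can be sketched as a walk over the X-Forwarded-For chain: start from the direct peer, skip addresses that belong to trusted proxies, and stop at the first one that does not. The trusted set and IPs below are illustrative placeholders, not Express's actual implementation:

```javascript
// Addresses we know belong to our proxy layers (CDN edge, local nginx).
const TRUSTED = new Set(["203.0.113.10", "10.0.0.2"]);

function resolveClientIp(peerIp, xForwardedFor) {
  // Chain as the app sees it: header entries, then the direct peer.
  const chain = xForwardedFor
    ? xForwardedFor.split(",").map((s) => s.trim())
    : [];
  chain.push(peerIp);

  // Walk right-to-left past trusted proxies; the first untrusted
  // address is the best guess at the real client.
  for (let i = chain.length - 1; i >= 0; i--) {
    if (!TRUSTED.has(chain[i])) return chain[i];
  }
  return chain[0]; // every hop was a trusted proxy
}
```

Note the two cases: when the request arrives through the trusted chain, the header is honored; when an untrusted peer sends the same header, the header is ignored and the peer itself is reported.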
What Nginx Should Do Behind Cloudflare
When Cloudflare sits in front of Nginx, the origin server should do more than just "accept traffic."
A healthy setup usually includes:
- HTTP to HTTPS redirects
- modern TLS protocol settings
- strong security headers
- request body size limits
- proxy-aware real IP restoration
- rate limiting
- upstream routing to the correct app service
That means Cloudflare is not your only security layer. It is the first one.
Nginx is still responsible for being a disciplined origin.
Caching and CDN Expectations
Beginners sometimes assume that once Cloudflare is in front, everything is automatically cached and fast.
That is not how it works.
Some content should be cached aggressively. Some should not be cached at all. Dynamic API responses, authenticated pages, and personalized content all need careful handling.
The useful rule of thumb is:
- static assets often benefit from CDN caching
- authenticated or user-specific responses usually should not be cached blindly
- API behavior should be explicit, not accidental
Cloudflare is powerful, but it is not a substitute for understanding your cache model.
If a deployment mixes up public cacheable content with private dynamic content, the problem is not "Cloudflare got weird." The cache policy was never clearly defined in the first place.
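An explicit cache policy can be expressed at the origin so the CDN has clear instructions instead of guesses. As a sketch (the paths are placeholders, and the exact `Cache-Control` values depend on how your assets are versioned):

```nginx
location /assets/ {
    # Fingerprinted static files: safe to cache aggressively at the edge.
    add_header Cache-Control "public, max-age=31536000, immutable" always;
}

location /api/ {
    # Dynamic, possibly personalized responses: never cache blindly.
    add_header Cache-Control "no-store" always;
}
```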
Why This Layering Works So Well
This is where the architecture gets practical.
Cloudflare is good at edge-scale concerns:
- bot pressure
- broad DDoS patterns
- challenge pages
- public TLS termination
Nginx is good at origin-specific concerns:
- routing requests to the right local service
- setting headers for your own application
- enforcing local body limits
- origin-side rate limiting
- preserving forwarding metadata
One layer is not a replacement for the other.
They solve different parts of the problem.
Staging and Private Environments Need Different Rules
A lot of teams protect production and then forget that QA, staging, or preview systems are often easier targets.
That is backwards.
Non-production environments often contain:
- unfinished features
- less polished auth flows
- weaker data hygiene
- helpful debugging behavior you would never want exposed publicly
That is why a good deployment strategy often adds extra gates in front of non-production systems, such as:
- identity-based access controls
- basic auth
- network allowlists
- proxy-only exposure
The lesson is simple: if an environment is not meant for the public, do not treat it like a public website.
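Two of those gates can be combined in a single nginx vhost, as a sketch (the hostname, IP range, and file paths are placeholders; `satisfy all` requires both the allowlist and the basic auth check to pass):

```nginx
server {
    listen 443 ssl;
    server_name staging.example.com;

    ssl_certificate     /etc/ssl/staging/fullchain.pem;
    ssl_certificate_key /etc/ssl/staging/privkey.pem;

    satisfy all;              # require BOTH conditions below

    allow 198.51.100.0/24;    # e.g. office or VPN range
    deny  all;

    auth_basic "Staging";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```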
A Generic Nginx Example
This is a generic example of the kind of origin config you want when a proxy network sits in front of your server:
# Trust only known proxy IP ranges
set_real_ip_from 203.0.113.0/24;
set_real_ip_from 2001:db8::/32;
real_ip_header CF-Connecting-IP;

limit_req_zone $binary_remote_addr zone=login:10m rate=10r/m;
limit_req_zone $binary_remote_addr zone=general:10m rate=60r/m;

server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;
    server_tokens off;

    ssl_certificate     /etc/ssl/example/fullchain.pem;
    ssl_certificate_key /etc/ssl/example/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Frame-Options "DENY" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    client_max_body_size 2m;

    location / {
        limit_req zone=general burst=20 nodelay;
        proxy_pass http://frontend:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /api/ {
        limit_req zone=login burst=5 nodelay;
        proxy_pass http://api:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
This example is intentionally generic, but it teaches the important production ideas:
- trust the proxy explicitly
- restore the real client IP correctly
- keep HTTPS at the origin
- redirect plain HTTP upward to HTTPS
- enforce origin-side policy even when a CDN is in front
A Good Beginner Mental Model
If you are new to this, use this picture:
Cloudflare is the security desk in the lobby. Nginx is the locked office door upstairs. Your application is the room behind that door.
Even if the lobby already checked the visitor, you still want the office door to lock, log entry, and decide where the visitor is allowed to go.
That is what layered security looks like in web infrastructure.
What This Architecture Solves
This setup is good at:
- reducing direct origin exposure
- keeping public TLS intact across both hops
- improving abuse resistance
- making rate limits more accurate with real client IP restoration
- separating edge concerns from origin concerns
- protecting non-production environments more intentionally
It is also operationally clean. Each layer has a job.
What It Does Not Solve
This pattern is useful, but it does not make application bugs disappear.
It does not replace:
- secure application code
- authentication and authorization
- input validation
- secret management
- dependency updates
- good logging and alerting
- database security
It also does not mean you should expose every origin port just because Cloudflare is in front. Good network boundaries still matter.
Implementation Advice for Beginners
If you are deploying your own React frontend, Express API, or any similar full-stack app on a VM or custom server, a strong baseline looks like this:
- Put Cloudflare in front of the domain.
- Use an SSL mode where Cloudflare talks to the origin over HTTPS and validates its certificate ("Full (strict)").
- Configure Nginx with its own certificate and HTTPS listener.
- Redirect all plain HTTP traffic to HTTPS.
- Trust only the CDN proxy IP ranges for real client IP restoration.
- Use the proxy-provided client IP header only after that trust is in place.
- Keep your app service behind Nginx instead of exposing it directly.
- Add security headers, body limits, and rate limiting at the origin.
- Gate staging and QA harder than production if they are not meant to be public.
If you follow those steps, you will already be ahead of many custom deployments on the internet.
Final Thought
The biggest mistake beginners make with HTTPS is thinking the story ends once the browser shows a lock icon.
That icon only tells you part of the story.
Good production infrastructure asks a better question:
How far does encryption go, how many layers inspect the traffic, and how well is the origin protected when the public internet gets noisy?
Cloudflare in front of Nginx, with TLS on both hops, is a strong answer to that question for custom server deployments.