
How DNS Routing Works in Amazon Route 53 and How to Configure It

TechOps Examples

Hey — It's Govardhana MK 👋

Welcome to another technical edition.

Every Tuesday – You’ll receive a free edition with a byte-size use case, remote job opportunities, top news, tools, and articles.

Every Thursday and Saturday – You’ll receive a special edition with a deep dive use case, remote job opportunities, and articles.

👋 👋 A big thank you to today's sponsor LINDY AI

Build smarter, not harder: meet Lindy

If ChatGPT could actually do the work, not just talk about it, you'd have Lindy.

Just describe what you need in plain English. Lindy builds the agent and gets it done—no coding, no complexity.

Tell Lindy to:

  • Create a booking platform for your business

  • Handle inbound leads and follow-ups

  • Send weekly performance recaps to your team

From sales and support to ops, Lindy's AI employees run 24/7 so you can focus on growth, not grunt work.

Save hours. Automate tasks. Scale your business.

👀 Remote Jobs

📚️ Resources

Looking to promote your company, product, service, or event to 58,000+ Cloud Native Professionals? Let's work together. Advertise With Us

🧠 DEEP DIVE USE CASE

How DNS Routing Works in Amazon Route 53 and How to Configure It

Before we dive into today’s topic with an architecture diagram, let’s first touch on the basics of the common deployment strategies out there.

1. Rolling Deployment

This strategy replaces application instances in a phased manner. A subset of instances is updated from the old version (V1) to the new one (V2) while the rest continue serving users. This continues in batches until all instances are upgraded.

What works well is that there’s no downtime, and resources are reused. But it can get tricky with persistent connections, long-lived sessions, or database schema changes. There’s no instant rollback unless the new version is also designed to be backward compatible.
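As a concrete illustration, here is a minimal sketch of a rolling deployment on Amazon ECS using boto3. The cluster, service, and task definition names are hypothetical placeholders, and the batch behavior is controlled by the deployment configuration rather than explicit loops.

```python
import boto3

# A minimal sketch of a rolling deployment on Amazon ECS.
# Cluster, service, and task definition names below are hypothetical.
ecs = boto3.client("ecs", region_name="us-east-1")

ecs.update_service(
    cluster="prod-cluster",          # hypothetical cluster name
    service="web-service",           # hypothetical service name
    taskDefinition="web-app:42",     # new task definition revision (V2)
    deploymentConfiguration={
        # Allow up to 50% extra capacity during the rollout ...
        "maximumPercent": 150,
        # ... while at least 50% of the desired count keeps serving V1 traffic.
        "minimumHealthyPercent": 50,
    },
)
# ECS then replaces tasks in batches until every task runs the new revision.
```

The same idea applies to other platforms (for example, a Kubernetes Deployment with a RollingUpdate strategy); the common thread is that old and new instances coexist briefly while batches are swapped out.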

2. Canary Deployment

Only a small portion of traffic is initially routed to the new version (V2), while the majority continues to hit the current version (V1). This 95-5 or 90-10 split allows real-world feedback from a limited audience without exposing everyone to potential breakage.

It’s great for controlled experiments and data-backed validation. However, you need dynamic traffic-shifting capability at the load balancer level, plus monitoring hooks and alert thresholds that let you pause or promote the rollout based on early signals.
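One way to get that traffic-shifting capability in AWS is weighted target groups on an Application Load Balancer. The sketch below assumes two target groups (V1 and V2) already exist; the listener and target group ARNs are hypothetical placeholders.

```python
import boto3

# A minimal sketch of a 95/5 canary split at the ALB listener.
# The ARNs below are hypothetical placeholders.
elbv2 = boto3.client("elbv2", region_name="us-east-1")

elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/web/abc123/def456",
    DefaultActions=[
        {
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    # 95% of requests stay on the current version (V1)
                    {
                        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-v1/111",
                        "Weight": 95,
                    },
                    # 5% of requests are routed to the canary (V2)
                    {
                        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-v2/222",
                        "Weight": 5,
                    },
                ]
            },
        }
    ],
)
# Promote the canary by shifting the weights (e.g. 50/50, then 0/100)
# once monitoring and alert thresholds look healthy.
```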

3. Blue-Green Deployment

In this model, a separate environment is provisioned to host the new version. Initially, all user traffic flows to the current version. Once the new environment is tested and verified, traffic is gradually and gracefully switched to the new version using mechanisms like load balancer listener rules or DNS routing.

This enables near-zero downtime and instant rollback by directing traffic back to the current version if needed. It’s ideal for cleaner rollouts and safer testing under production-like conditions, and it minimizes risk, though it temporarily doubles infrastructure requirements and relies on reliable traffic-switch orchestration.
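To tie this back to the topic of the article, here is a minimal sketch of a blue-green cutover driven by Route 53 weighted records. The hosted zone ID, record name, and environment endpoints are hypothetical, and the TTL is kept low so weight changes propagate quickly.

```python
import boto3

# A minimal sketch of a blue/green cutover using Route 53 weighted records.
# The hosted zone ID, record name, and endpoints below are hypothetical.
route53 = boto3.client("route53")

def set_weights(blue_weight: int, green_weight: int) -> None:
    """Upsert two weighted CNAME records that split traffic between environments."""
    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789ABCDEF",  # hypothetical hosted zone
        ChangeBatch={
            "Comment": f"blue={blue_weight} green={green_weight}",
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "app.example.com",
                        "Type": "CNAME",
                        "SetIdentifier": "blue",
                        "Weight": blue_weight,
                        "TTL": 60,
                        "ResourceRecords": [{"Value": "blue-env.example.com"}],
                    },
                },
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "app.example.com",
                        "Type": "CNAME",
                        "SetIdentifier": "green",
                        "Weight": green_weight,
                        "TTL": 60,
                        "ResourceRecords": [{"Value": "green-env.example.com"}],
                    },
                },
            ],
        },
    )

# Start with everything on blue, then cut over; restoring (100, 0) is the
# instant rollback path.
set_weights(100, 0)   # all traffic on the current (blue) environment
set_weights(0, 100)   # switch all traffic to the new (green) environment
```

Keep in mind that DNS-based switching is subject to client-side caching and resolver TTLs, which is why low TTLs (or a load balancer listener switch) are often preferred for the actual cutover.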

Having established the basics of these deployment strategies, let’s now dive into the architecture and fine-grained details of how this is implemented in AWS.

🔴 Get my DevOps & Kubernetes ebooks! (free for Premium Club and Personal Tier newsletter subscribers)

Upgrade to Paid to read the rest.

Become a paying subscriber to get access to this post and other subscriber-only content.


Paid subscriptions get you:

  • Access to archive of 250+ use cases
  • Deep Dive use case editions (Thursdays and Saturdays)
  • Access to Private Discord Community
  • Invitations to monthly Zoom calls for use case discussions and industry leaders meetups
  • Quarterly 1:1 'Ask Me Anything' power session