TechOps Examples
Hey — It's Govardhana MK 👋
Welcome to another technical edition.
Every Tuesday – You’ll receive a free edition with a byte-size use case, remote job opportunities, top news, tools, and articles.
Every Thursday and Saturday – You’ll receive a special edition with a deep dive use case, remote job opportunities, and articles.
👋 👋 A big thank you to today's sponsor MINTLIFY
Ship Docs Your Team Is Actually Proud Of
Mintlify helps you create fast, beautiful docs that developers actually enjoy using. Write in markdown, sync with your repo, and deploy in minutes. Built-in components handle search, navigation, API references, and interactive examples out of the box, so you can focus on clear content instead of custom infrastructure.
Automatic versioning, analytics, and AI-powered search make it easy to scale as your product grows. AI-powered workflows keep your docs accurate with every pull request.
Whether you're a dev, a technical writer, part of a devrel team, or anything in between, Mintlify fits into the way you already work and helps your documentation keep pace with your product.
👀 Remote Jobs
Trafilea is hiring a Cloud Engineer (DevOps)
Remote Location: Worldwide
Supabase is hiring a Deployment Engineer
Remote Location: Worldwide
Powered by: Jobsurface.com
📚 Resources
Looking to promote your company, product, service, or event to 55,000+ Cloud Native Professionals? Let's work together. Advertise With Us
🧠 DEEP DIVE USE CASE
Scaling on AWS for the First 1 Million Users
Every successful product hits the same wall. What worked for your first hundred users starts creaking at a thousand, breaks under ten thousand, and completely falls apart at a hundred thousand.
We walk through the exact architectural evolution from a single server to a system that handles one million users, with the AWS services that make each transition possible and the reasoning behind each decision.
Stage 1: One Server, One Database, One Team
Every product starts here. A single EC2 instance running your application and your database on the same machine. It is the fastest path to production and the right choice when you have no users yet.

Single point of failure. No redundancy.
The problems with this setup are well known. The application and database compete for CPU, memory, and disk I/O on the same machine. If the instance goes down, everything goes down. You cannot scale the application tier independently from the database tier. And a traffic spike that exhausts the instance's resources takes down both your app and your data.
The first thing to fix is not adding more servers. It is separating concerns.
Stage 2: Separate the Database
Before you add a second server, move the database off the application instance. Amazon RDS gives you a managed relational database with automated backups, Multi-AZ failover, and read replicas, none of which you would build reliably yourself.

App and DB scale independently now.
With RDS Multi-AZ enabled, AWS maintains a synchronous standby replica in a different Availability Zone. If the primary fails, RDS automatically promotes the standby and updates the DNS endpoint. Your application reconnects without any manual intervention, typically within 60 to 120 seconds.
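Because the failover keeps the same DNS endpoint, the only thing your application needs is patient reconnect logic. Here is a minimal sketch of client-side retry with exponential backoff; the `connect` callable and timing values are illustrative, not an AWS API.

```python
import time


def connect_with_retry(connect, max_wait=120, base_delay=2.0):
    """Retry a connection attempt with exponential backoff.

    During an RDS Multi-AZ failover the DNS endpoint stays the same,
    so the client only needs to keep retrying until the promoted
    standby starts accepting connections (typically 60-120 seconds).
    `connect` is any zero-argument callable that raises ConnectionError
    while the database is unavailable.
    """
    delay, waited = base_delay, 0.0
    while True:
        try:
            return connect()
        except ConnectionError:
            if waited >= max_wait:
                raise  # give up after max_wait seconds of retrying
            time.sleep(delay)
            waited += delay
            delay = min(delay * 2, 30)  # cap the backoff interval
```

Most database drivers and connection poolers offer a setting that does this for you; the point is that failover recovery is a client-side concern too, not just an RDS feature.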
Read replicas offload read-heavy workloads. If your application reads far more than it writes, pointing read queries at a replica reduces load on the primary. This is the most cost-effective scaling move available before you introduce any caching layer.
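The split itself can be as simple as choosing an endpoint per statement. A naive sketch, with hypothetical endpoint names (real replica endpoints come from the RDS console or the `DescribeDBInstances` API):

```python
def route_query(sql,
                primary="db-primary.example.internal",
                replica="db-replica.example.internal"):
    """Route a SQL statement to the primary or a read replica.

    Plain SELECTs go to the replica; anything that writes goes to
    the primary. Deliberately naive: transactions and
    SELECT ... FOR UPDATE should also hit the primary, and replica
    lag means reads there can be slightly stale.
    """
    verb = sql.lstrip().split(None, 1)[0].upper()
    return replica if verb == "SELECT" else primary
```

For example, `route_query("SELECT name FROM users")` returns the replica endpoint, while an `UPDATE` goes to the primary. In practice many ORMs and proxies (e.g. RDS Proxy or pgbouncer-style setups) handle this routing for you.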
Stage 3: Load Balancing and Horizontal Scaling
When a single EC2 instance is no longer enough, the answer is not a bigger instance. It is more instances behind a load balancer. This is the transition from vertical scaling to horizontal scaling, and it is the most important architectural shift in this entire journey.

Stateless app tier. ALB routes across AZs.
The Application Load Balancer (ALB) distributes incoming requests across all healthy instances in the Auto Scaling group. It performs health checks and stops routing to instances that fail them. It supports path-based and host-based routing, which lets you split traffic between microservices without multiple load balancers.
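The routing logic is easy to picture: rules are evaluated in priority order and the first match wins. A simplified sketch (real ALB rules support wildcards and host conditions, not just path prefixes; the target group names here are made up):

```python
def pick_target_group(path, rules, default="web-tg"):
    """Pick a target group the way ALB path-based rules do:
    rules are checked in priority order, first match wins,
    and unmatched requests fall through to the default action.
    """
    for prefix, target_group in rules:
        if path.startswith(prefix):
            return target_group
    return default


# Hypothetical listener rules: API and static assets get their own fleets.
rules = [("/api/", "api-tg"), ("/images/", "static-tg")]
```

With these rules, `/api/users` lands on `api-tg` and everything else falls through to `web-tg`, which is exactly how one ALB can front several services.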
The Auto Scaling group maintains a minimum number of instances for baseline availability and scales out when CPU or request count exceeds your defined thresholds. Scale-in removes instances during low traffic periods to reduce cost. The group spans multiple Availability Zones, so a single AZ failure does not take down your application.
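To make the scaling math concrete, here is a rough sketch of how target-tracking sizing behaves: capacity moves proportionally to keep fleet-average CPU near the target, clamped to the group's min and max. The numbers are illustrative, not the actual AWS algorithm.

```python
import math


def desired_capacity(current, avg_cpu, target_cpu=60.0,
                     min_size=2, max_size=10):
    """Approximate target-tracking scaling: size the fleet so that
    average CPU lands near target_cpu, clamped to [min_size, max_size].
    """
    if avg_cpu <= 0:
        return min_size
    proposed = math.ceil(current * avg_cpu / target_cpu)
    return max(min_size, min(max_size, proposed))
```

Four instances averaging 90% CPU against a 60% target scale out to six; the same four at 30% scale in to the two-instance floor. The clamp is why setting a sensible minimum matters: it is your baseline availability during quiet periods.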
Your application must be stateless for this to work. User sessions, file uploads, and any in-memory state that needs to persist across requests must live outside the EC2 instance. Sessions go into ElastiCache. File uploads go into S3. Your instances become disposable.
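Externalizing sessions is mostly a matter of keying them in a shared store. A minimal sketch, with a plain dict standing in for ElastiCache; with redis-py the same `save`/`load` calls would map to `r.set` and `r.get` against the cluster endpoint:

```python
import json


class SessionStore:
    """Sessions live in an external key-value store (ElastiCache in
    production; an in-memory dict stands in here), so any instance
    behind the load balancer can serve any request.
    """

    def __init__(self, backend=None):
        # Any object with dict-like __setitem__ / .get works as a backend.
        self.backend = backend if backend is not None else {}

    def save(self, session_id, data):
        # Serialize so the store only ever holds strings/bytes.
        self.backend[f"session:{session_id}"] = json.dumps(data)

    def load(self, session_id):
        raw = self.backend.get(f"session:{session_id}")
        return json.loads(raw) if raw else None
```

Once sessions, uploads, and queues all live off-instance, terminating any single EC2 instance loses nothing, which is the property Auto Scaling depends on.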
🔴 Get my DevOps & Kubernetes ebooks! (free for Premium Club and Personal Tier newsletter subscribers)
Upgrade to Paid to read the rest.
Become a paying subscriber to get access to this post and other subscriber-only content.
Paid subscriptions get you:
- Access to archive of 250+ use cases
- Deep Dive use case editions (Thursdays and Saturdays)
- Access to Private Discord Community
- Invitations to monthly Zoom calls for use case discussions and industry leaders meetups
- Quarterly 1:1 'Ask Me Anything' power session