TechOps Examples
Hey — It's Govardhana MK 👋
Welcome to another technical edition.
Every Tuesday – You’ll receive a free edition with a byte-size use case, remote job opportunities, top news, tools, and articles.
Every Thursday and Saturday – You’ll receive a special edition with a deep dive use case, remote job opportunities, and articles.
👋 Before we begin... a big thank you to today's sponsor GRANOLA
The AI notepad for back-to-back meetings
Most AI note-takers just record your call and send a summary after.
Granola is different. It’s an AI notepad. You jot down what matters during the meeting, and Granola transcribes everything in the background.
When the call ends, it combines your notes with the full transcript to create summaries, action items, and next steps, all from your point of view.
Then the powerful part: chat with your notes. Write follow-up emails, pull out decisions, or prep for your next call, in seconds.
Think of it as a super-smart notes app that actually understands your meetings.
👀 Remote Jobs
Avora is hiring a Cloud and Compliance Architect
Remote Location: Worldwide
Backpack is hiring an Infrastructure (Rust, DevOps) Engineer
Remote Location: Worldwide
Powered by: Jobsurface.com
📚 Resources
Looking to promote your company, product, service, or event to 56,000+ Cloud Native Professionals? Let's work together. Advertise With Us
🧠 DEEP DIVE USE CASE
A Comprehensive Overview of Argo CD Architectures
Every Kubernetes team eventually faces the same problem. The cluster works. The application deploys. But nobody can answer the question: what is actually running in production right now, and does it match what is in Git?
Argo CD solves this with a deceptively simple principle. Git is the source of truth. The cluster should always reflect what is in Git. Any deviation is detected, surfaced, and can be automatically corrected.
But Argo CD is not a single deployment pattern. It has a rich architecture that ranges from a simple single-cluster installation to a sophisticated multi-cluster control plane spanning dozens of environments. Understanding the full architecture is what allows you to design a deployment model that fits your organization, rather than fighting the constraints of a pattern you adopted without fully understanding it.
Argo CD Core Components
Before examining deployment architectures, you need a clear mental model of what runs inside an Argo CD installation and what each component does.

The API Server is the entry point for all human and machine interaction with Argo CD. It exposes a gRPC API consumed by the CLI and a REST/HTTP API consumed by the web UI and external automation. It handles authentication (delegating to Dex or an external OIDC provider), enforces RBAC policies, and processes incoming webhooks from Git providers to trigger syncs.
The Repository Server is a standalone service that handles all Git operations. It clones repositories, caches them locally, and renders final Kubernetes manifests from whatever tool the application uses: plain YAML directories, Helm charts, Kustomize overlays, or Jsonnet. The rendered manifests are what the Application Controller compares against live cluster state. Keeping this in a separate process means Git credential management and template rendering are isolated from the reconciliation logic.
The Application Controller is the heart of Argo CD. It runs a continuous reconciliation loop, comparing the desired state (rendered manifests from the Repo Server) against the live state (current resources in the target Kubernetes cluster). When it detects a difference (OutOfSync), it can either alert a human or automatically apply the changes, depending on your sync policy configuration. It communicates with target clusters using their Kubernetes API credentials stored as Secrets in the argocd namespace.
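The sync policy that decides between "alert a human" and "automatically apply" lives on the Application resource itself. Here is a minimal sketch of an Application with automated sync enabled; the repository URL, paths, and the `guestbook` name are placeholders, not real infrastructure:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-repo.git  # placeholder repo
    targetRevision: main
    path: apps/guestbook
  destination:
    server: https://kubernetes.default.svc  # in-cluster target
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true     # delete resources that were removed from Git
      selfHeal: true  # revert manual changes made directly in the cluster
```

Without the `automated` block, the controller only marks the Application OutOfSync and waits for a manual sync.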
Redis caches application state and cluster information to reduce the load on both the API server and the target cluster's Kubernetes API. Without caching, every UI page load and every reconciliation cycle would generate a flood of Kubernetes API requests.
The ApplicationSet controller enables programmatic generation of Argo CD Application resources from generators: a list of clusters, a directory structure in a Git repository, a matrix combining multiple generators, or a pull request list. This is what enables one ApplicationSet to manage the same application across fifty clusters without fifty manually created Application resources.
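As a sketch of the cluster generator in action, the ApplicationSet below stamps out one Application per registered cluster. The repo URL and names are illustrative placeholders; `{{name}}` and `{{server}}` are filled in by the generator from each cluster Secret:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  generators:
    - clusters: {}  # emits one set of parameters per cluster registered in Argo CD
  template:
    metadata:
      name: '{{name}}-guestbook'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/gitops-repo.git  # placeholder repo
        targetRevision: main
        path: apps/guestbook
      destination:
        server: '{{server}}'  # the matching cluster's API endpoint
        namespace: guestbook
```

Registering a fifty-first cluster automatically produces a fifty-first Application with no Git change required.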
Architecture 1: Standalone Single Cluster
The simplest deployment. Argo CD runs in the same cluster it manages. This is the right starting point for small teams, single-environment setups, and organizations just beginning their GitOps journey.

Argo CD manages itself. Simple setup. No network complexity. Single point of failure.
In the standalone model, Argo CD is installed into its own namespace (argocd) inside the cluster it manages. The Application Controller watches the same cluster's Kubernetes API using in-cluster credentials. When a developer pushes to Git, a webhook triggers the Repo Server to re-render manifests, the controller detects the diff, and the sync applies the changes.
The appeal is simplicity. No external network connectivity requirements, no cross-cluster credential management, and Argo CD itself can be managed as an Argo CD application (the "app of apps" pattern), creating a self-managing GitOps loop.
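The "app of apps" pattern is itself just an Application whose source directory contains other Application manifests. A minimal sketch, assuming a placeholder repo with child Application manifests under `argocd/apps`:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-repo.git  # placeholder repo
    targetRevision: main
    path: argocd/apps  # directory of child Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd  # child Applications must live in the argocd namespace
  syncPolicy:
    automated: {}
```

Syncing this one root Application creates all the child Applications, which in turn sync their own workloads.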
The limitation is that if the cluster goes down, Argo CD goes down with it. For single-cluster setups this is acceptable: if the cluster is down, the CD system being unavailable is a secondary concern. But as soon as you introduce multiple clusters or need CD to be available even when an application cluster is degraded, you need the hub-spoke model.
Architecture 2: Hub-Spoke Multi-Cluster
The hub-spoke pattern separates the Argo CD control plane from the application workload clusters. A dedicated management cluster runs Argo CD and manages N application clusters as remote targets.

One Argo CD manages N clusters. Control plane isolated from workloads.
In the hub-spoke model, the management cluster runs Argo CD exclusively. No application workloads run on it. Each spoke cluster is registered with Argo CD by storing its API endpoint and credentials as a Secret in the argocd namespace. The Application Controller in the management cluster connects outbound to each spoke cluster's Kubernetes API server to apply manifests and monitor resource health.
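Registration can be done with `argocd cluster add <context>`, or declaratively. A sketch of the declarative cluster Secret, with placeholder names, endpoint, and credentials (the token and CA values here are stand-ins, not real values):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: prod-us-east-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster  # marks this Secret as a cluster registration
type: Opaque
stringData:
  name: prod-us-east
  server: https://prod-us-east.example.com:6443  # placeholder spoke API endpoint
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64-encoded-ca-cert>"
      }
    }
```

Because registration is just a labeled Secret, spoke clusters can themselves be added and removed through Git.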
This model gives you several operational advantages. Argo CD's availability is decoupled from any individual application cluster failure. A degraded production cluster does not prevent you from reading sync status, creating new Applications, or managing other clusters. The management cluster can be hardened with stricter access controls since it holds credentials to all other clusters.
The networking requirement is the key constraint: the management cluster must have outbound network access to every spoke cluster's API endpoint. In practice this means either public API endpoints with IP allowlisting, VPN or private peering between the management cluster VPC and each spoke cluster VPC, or a network architecture that places all clusters in a hub VNet with private connectivity.
Spoke clusters do not need any inbound access to the management cluster. All connections are initiated outbound from the hub. This simplifies spoke cluster firewall rules significantly.
🔴 Get my DevOps & Kubernetes ebooks! (free for Premium Club and Personal Tier newsletter subscribers)
Upgrade to Paid to read the rest.
Become a paying subscriber to get access to this post and other subscriber-only content.
Upgrade

Paid subscriptions get you:
- Access to archive of 250+ use cases
- Deep Dive use case editions (Thursdays and Saturdays)
- Access to Private Discord Community
- Invitations to monthly Zoom calls for use case discussions and industry leaders meetups
- Quarterly 1:1 'Ask Me Anything' power session



