Kubernetes Service Discovery Patterns
Hey — It's Govardhana MK 👋
Welcome to another technical edition.
Every Tuesday – You’ll receive a free edition with a byte-size use case, remote job opportunities, top news, tools, and articles.
Every Thursday and Saturday – You’ll receive a special edition with a deep dive use case, remote job opportunities, and articles.
👋 Event-Driven and Predictive Autoscaling Playbook for Kubernetes By Zbynek Roubalik, Founder & CTO, Kedify and Founding Maintainer of KEDA
As Kubernetes practitioners, we all struggle with autoscaling that reacts too late.
Traditional autoscaling kicks in after workloads spike, wasting resources and missing real intent.
Kedify’s new free Kubernetes autoscaling eBook, “From Intent to Impact”, is your complete playbook for moving from reactive to predictive autoscaling.
Inside, you’ll learn:
How KEDA evolved to enable event-driven autoscaling with 70+ scalers
Why CPU-based scaling fails today’s latency and cost goals
How to scale HTTP, queue, and GPU workloads with intent-aware scalers
Principles of predictive autoscaling that forecast load within budget
GPU-aware scaling for AI and ML using VRAM and SM metrics
How Kedify unifies scaling, cost, and observability in one place
Real-world wins, readiness checklists, and two plug-and-play proofs you can deploy today
Download your free copy – available for a limited time.
👀 Remote Jobs
Fortytwo is hiring a Senior MLOps Engineer
Remote Location: Worldwide
Canonical is hiring a Senior Site Reliability Engineer
Remote Location: Worldwide
📚️ Resources
Looking to promote your company, product, service, or event to 55,000+ Cloud Native Professionals? Let's work together. Advertise With Us
🧠 DEEP DIVE USE CASE
Kubernetes Service Discovery Patterns
In Kubernetes, service discovery is how workloads find and communicate with each other, inside the cluster or from outside clients. Since Pods are ephemeral and IPs change constantly, Kubernetes abstracts communication through Services, DNS, and routing layers.
I’ve seen engineers debug network issues for hours because a ClusterIP was used where a NodePort was needed, or because an Ingress was assumed to work without a backend service mapping. These gaps usually come from not understanding how Kubernetes discovers and routes services at different layers.
Let us put an end to that knowledge gap.
How does Internal Service Discovery work?
Internal service discovery in Kubernetes enables Pods within the same cluster to communicate using stable DNS names instead of dynamic Pod IPs.

When a Service is created, Kubernetes assigns it a virtual IP (ClusterIP) and registers it with CoreDNS for name resolution.
Each time a Pod calls another service, the DNS name resolves to this virtual IP. The Service then forwards the request to one of the Pods that match its selector. This creates a consistent routing layer that hides Pod restarts or rescheduling across nodes.
In this setup, applications interact with other services using simple hostnames rather than managing IPs manually. The Service object maintains the endpoint list and handles load balancing automatically within the cluster.
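As a minimal sketch, here is what such a Service looks like; the name, namespace, labels, and ports are illustrative assumptions, not taken from a specific setup:

```yaml
# Hypothetical ClusterIP Service selecting Pods labeled app: backend.
# Pods in the same namespace can reach it at http://backend,
# or at backend.default.svc.cluster.local from other namespaces.
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: default
spec:
  type: ClusterIP          # default type; omitting it has the same effect
  selector:
    app: backend           # must match the labels on the backing Pods
  ports:
    - name: http
      port: 80             # port exposed on the Service's virtual IP
      targetPort: 8080     # container port the Pods actually listen on
```

Kubernetes keeps the endpoint list for this Service in sync with whichever Pods currently match app: backend, which is why callers never need to track Pod IPs themselves.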
This works well for most internal communication, but it comes with a few trade-offs:
DNS resolution adds minor latency at scale
Misconfigured selectors can cause traffic drops
ClusterIP Services are not reachable from outside the cluster
High churn in Pods can increase DNS update frequency
Internal discovery simplifies communication, but stable connectivity still depends on healthy DNS and correctly defined Service selectors.
How does External Service Discovery work?
Applications inside Kubernetes often need to talk to resources that are not part of the cluster, such as managed databases, third-party APIs, or external services maintained by other teams. External service discovery handles this outbound communication.

When a Pod makes an outbound request, Kubernetes can map a DNS name to an external IP or hostname using ExternalName Services. These act as lightweight aliases, redirecting traffic to the actual external system.
For example, a Service named payments can point to payments.example.com, and applications inside the cluster can connect using http://payments.
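As a minimal sketch, that alias can be defined with an ExternalName Service; the namespace shown here is an assumption:

```yaml
# Hypothetical ExternalName Service: the cluster-internal DNS name
# "payments" resolves to a CNAME for payments.example.com. No ClusterIP
# or proxying is involved; the Pod connects to the external host directly.
apiVersion: v1
kind: Service
metadata:
  name: payments
  namespace: default
spec:
  type: ExternalName
  externalName: payments.example.com
```

ExternalName works purely at the DNS level (CoreDNS returns a CNAME record), so ports and TLS certificates must still match what the external host expects.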
For direct connections, Pods can also reach external IPs through standard egress routes or dedicated gateways. In production environments, network policies or egress controllers are often applied to control outbound access and enforce security rules.
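Here is a minimal sketch of such an egress control; the labels, CIDR, and ports are illustrative assumptions, and a CNI plugin that enforces NetworkPolicy is required:

```yaml
# Hypothetical policy: Pods labeled app: checkout may only send egress
# traffic to cluster DNS and to an external payment range on port 443.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-checkout-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: checkout
  policyTypes:
    - Egress
  egress:
    - to:                        # allow DNS lookups against kube-dns/CoreDNS
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    - to:                        # allow HTTPS to the assumed external range
        - ipBlock:
            cidr: 203.0.113.0/24
      ports:
        - protocol: TCP
          port: 443
```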
This model is essential for hybrid setups, but it introduces specific considerations:
External systems may not be part of Kubernetes DNS
DNS resolution depends on the external domain’s availability
Latency and network policies can affect connection reliability
Security must be handled through TLS or service mesh policies
Always verify how your cluster connects to external systems. Small network or DNS misconfigurations often surface as service discovery failures.
With this basic understanding, let us now move on to the real-world service discovery patterns used in most production environments.
NodePort Service Discovery
A NodePort Service in Kubernetes allocates a static TCP or UDP port on every node in the cluster, usually within the range 30000 to 32767. This port acts as an external entry point that forwards traffic to the Pods backing the Service.
When a NodePort Service is created, kube-proxy on each node configures iptables or IPVS rules to listen on that port. These rules intercept incoming packets and forward them to one of the Pods associated with the Service. Load balancing across Pods is handled by kube-proxy: random selection in iptables mode, or configurable algorithms such as round robin in IPVS mode.
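A minimal NodePort sketch, reusing the assumed app: backend labels from earlier; the explicit nodePort value is optional and purely illustrative (Kubernetes picks one from the default range if it is omitted):

```yaml
# Hypothetical NodePort Service: reachable from outside the cluster at
# <any-node-ip>:30080, which kube-proxy forwards to matching Pods on 8080.
apiVersion: v1
kind: Service
metadata:
  name: backend-nodeport
spec:
  type: NodePort
  selector:
    app: backend
  ports:
    - port: 80          # ClusterIP port, still usable inside the cluster
      targetPort: 8080  # container port on the Pods
      nodePort: 30080   # static port opened on every node (30000-32767)
```

With the default traffic policy, any node's IP combined with the nodePort reaches the Service even if no matching Pod runs on that node; kube-proxy forwards the traffic to a Pod elsewhere in the cluster.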

TOGETHER WITH THE CODE
Tech moves fast, but you're still playing catch-up?
That's exactly why 100K+ engineers working at Google, Meta, and Apple read The Code twice a week.
Here's what you get:
Curated tech news that shapes your career - Filtered from thousands of sources so you know what's coming 6 months early.
Practical resources you can use immediately - Real tutorials and tools that solve actual engineering problems.
Research papers and insights decoded - We break down complex tech so you understand what matters.
All delivered twice a week in just 2 short emails.

Upgrade to Paid to read the rest.
Become a paying subscriber to get access to this post and other subscriber-only content.
Paid subscriptions get you:
• Access to archive of 200+ use cases
• Deep Dive use case editions (Thursdays and Saturdays)
• Access to Private Discord Community
• Invitations to monthly Zoom calls for use case discussions and industry leaders meetups
• Quarterly 1:1 'Ask Me Anything' power session