Kubernetes Advanced Pod Scheduling Techniques
TechOps Examples
Hey — It's Govardhana MK 👋
Welcome to another technical edition.
Every Tuesday – You’ll receive a free edition with a byte-sized use case, remote job opportunities, top news, tools, and articles.
Every Thursday and Saturday – You’ll receive a special edition with a deep dive use case, remote job opportunities, and articles.
👋 Before we begin... a big thank you to today's sponsor AI WITH ALLIE
Stop Asking AI Questions, and Start Building Personal AI Software.
Feeling overwhelmed by AI options or stuck on basic prompts? The AI Fast Track is your 5-day roadmap to solving problems faster with next-level artificial intelligence.
This free email course cuts through the noise with practical knowledge and real-world examples delivered daily. You'll go from learning essential foundations to writing effective prompts, building powerful Artifacts, creating a personal AI assistant, and developing working software—all without coding.
Join thousands who've transformed their workflows and future-proofed their AI skills in just one week.
👀 Remote Jobs
Flashbots is hiring a Senior DevOps Engineer
Remote Location: Worldwide
Assured is hiring a Staff Cloud Security Engineer
Remote Location: Worldwide
📚️ Resources
Looking to promote your company, product, service, or event to 48,000+ Cloud Native Professionals? Let's work together. Advertise With Us
🧠 DEEP DIVE USE CASE
Kubernetes Advanced Pod Scheduling Techniques
In Kubernetes, scheduling is the process of assigning pods to nodes. By default, the scheduler chooses a node based on resource availability and requirements. But in real-world environments, teams often need more control, whether to isolate workloads, spread them across zones, or co-locate specific types of pods.
Before we dive into advanced pod scheduling in Kubernetes, let’s quickly set the context with the four core techniques. Each one addresses a different type of scheduling need.
Taints and Tolerations help you prevent certain pods from running on a node unless they are explicitly allowed. Think of it as a node saying "not for everyone" and only specific pods are tolerated.
NodeSelector is a straightforward way for a pod to say "run me only on nodes with this label." It is simple matching based on exact key-value labels.
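A quick sketch of what that looks like (the disktype=ssd label here is just an illustrative name, not from this issue): you label a node, then reference that label from the pod spec.

kubectl label nodes node1 disktype=ssd

# Pod spec excerpt: run only on nodes labeled disktype=ssd
spec:
  nodeSelector:
    disktype: "ssd"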
Node Affinity builds on NodeSelector but allows for more flexible rules. You can specify preferred or required conditions and even match on a range of values or expressions.
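For example, a rough sketch (the zone values and the disktype label are placeholders): a hard rule that the node must sit in one of two zones, plus a soft preference for SSD nodes.

# Pod spec excerpt combining a required and a preferred node affinity rule
spec:
  affinity:
    nodeAffinity:
      # Hard requirement: node must carry one of these zone labels
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values: ["us-east-1a", "us-east-1b"]
      # Soft preference: favor SSD nodes when available
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]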
Pod Affinity and Anti-Affinity are used when a pod needs to be scheduled based on the presence or absence of other pods. You can choose to place pods together or ensure they run on different nodes for availability or performance reasons.
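As a minimal anti-affinity sketch (the app=web label is illustrative), this keeps two replicas of the same app off the same node:

# Pod spec excerpt: never co-locate two pods labeled app=web on one node
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web
        topologyKey: kubernetes.io/hostname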
1. Taints and Tolerations
Taints are applied to nodes using kubectl taint and tell the scheduler not to place pods on them unless those pods explicitly tolerate the taint.

Example: Applying a taint to a node
kubectl taint nodes node2 app=blue:NoSchedule
This means node2 will repel all pods unless they tolerate the app=blue taint. The effect NoSchedule ensures that pods without a matching toleration will not be scheduled on this node at all.
Taints are a node-level repelling mechanism. If applied incorrectly, they can result in unschedulable pods.
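A quick way to see what is currently tainted, and to undo a taint you did not mean to apply (the trailing dash removes it):

kubectl describe node node2 | grep Taints

kubectl taint nodes node2 app=blue:NoSchedule-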
The NoExecute taint needs special attention, as it evicts running pods.
Common Taint Effects
NoSchedule: Pod will not be scheduled unless it tolerates the taint.
PreferNoSchedule: Tries to avoid placing a pod, but not strictly enforced.
NoExecute: Existing pods without a toleration are evicted from the node.

Tolerations are added to pods to let them run on nodes with matching taints. They don’t guarantee placement, but they remove the restriction caused by the taint.
tolerations:
- key: "app"
  value: "blue"
  effect: "NoSchedule"
This tells the scheduler that the pod can tolerate the app=blue:NoSchedule taint.
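Since the NoExecute effect evicts pods that are already running, its tolerations support an extra field, tolerationSeconds, which caps how long the pod may keep running on the node after the taint is applied. A minimal sketch, reusing the same app=blue key:

tolerations:
- key: "app"
  operator: "Equal"
  value: "blue"
  effect: "NoExecute"
  # Pod is evicted 60 seconds after the taint appears instead of immediately
  tolerationSeconds: 60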
As you can observe in the above illustration, Node 2 is tainted with app=blue. Only the blue-labeled pods include a matching toleration.
So, the scheduler places those pods on Node 2. All other pods, which do not tolerate the taint, are placed on Node 1.
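One thing worth calling out: the toleration only lifts the restriction; it does not pull the blue pods onto Node 2 by itself. A common way to get the placement shown here is to pair the toleration with a nodeSelector. A minimal sketch, assuming node2 also carries an app=blue label (the taint alone does not add a label, so this would need a separate kubectl label nodes node2 app=blue):

apiVersion: v1
kind: Pod
metadata:
  name: blue-pod
spec:
  containers:
  - name: app
    image: nginx    # placeholder image
  # Toleration lets the pod land on the tainted node2
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"
  # nodeSelector pins it there (assumes node2 is labeled app=blue)
  nodeSelector:
    app: "blue"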
Having established how taints and tolerations work together to control pod placement on restricted nodes, let’s now dive into the next set of techniques that guide pod scheduling based on node labels and pod relationships.
I am giving away 50% OFF on all annual membership plans for a limited time.
A membership will unlock access to read these deep dive editions on Thursdays and Saturdays.

Upgrade to Paid to read the rest.
Become a paying subscriber to get access to this post and other subscriber-only content.
Paid subscriptions get you:
- Access to archive of 175+ use cases
- Deep Dive use case editions (Thursdays and Saturdays)
- Access to Private Discord Community
- Invitations to monthly Zoom calls for use case discussions and industry leader meetups
- Quarterly 1:1 'Ask Me Anything' power session