Ever wondered how the K8s scheduler works, and how you can “help” it make the right decision for your application? In this session, we'll cover several different scheduling use cases in K8s, which scheduling techniques each one requires, and when to use them.
All men are created equal.
But when it comes to applications, this is not the case :-)
K8s has the ability to run different types of applications on a set of different infrastructure types. This gives us better utilization of our infrastructure, but it also brings challenges related to the types of infrastructure required for a specific application, and the dependencies between the different applications.
Some applications require higher bandwidth, some need a newer generation CPU family, some need GPU, etc.
Applications can also have "location" restrictions, such as: must have at least one replica in each Availability Zone (AZ), must not run on the same node as other specific applications, etc.
You get the idea. Although K8s is a very powerful system that manages our application deployment, configuration, and lifecycle, it’s OUR responsibility to “tell” K8s the constraints and limitations of our application, scheduling-wise.
In this session, we’ll cover the different scheduling techniques in K8s and their use cases, including:
Dependent applications - applications that must have other applications on the same node
Conflicting applications - applications that must not share the same node with other specific applications
Applications that need specific instance types (e.g. instances with a high bandwidth network)
Availability Zone restrictions
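As a taste of what these techniques look like in practice, here is a minimal sketch of a Pod spec combining them. The pod names, labels, and instance type are hypothetical; the field names and well-known node labels are standard Kubernetes ones.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web            # hypothetical application
  labels:
    app: web
spec:
  affinity:
    # Dependent applications: must run on a node that already
    # hosts a pod labeled app=cache (labels are hypothetical)
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cache
        topologyKey: kubernetes.io/hostname
    podAntiAffinity:
      # Conflicting applications: must never share a node with app=batch pods
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: batch
        topologyKey: kubernetes.io/hostname
      # AZ spreading: prefer not to co-locate replicas of this
      # app in the same Availability Zone
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: web
          topologyKey: topology.kubernetes.io/zone
    # Specific instance types: e.g. a high-bandwidth network instance
    # (the instance type value is an example, not a recommendation)
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node.kubernetes.io/instance-type
            operator: In
            values: ["c5n.9xlarge"]
  containers:
  - name: web
    image: nginx
```

We'll unpack each of these mechanisms, and their trade-offs, during the session.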
K8s Pod Scheduling - Deep Dive
Tsahi Duek is a solutions architect with extensive experience in designing and architecting production environments.
He likes everything about technology, from the bare metal infrastructure to designing and developing applications.
He works at Spotinst, which provides multi-cloud workload automation that allows its users to significantly reduce their cloud costs. He is excited about new technologies, especially K8s.