Kubernetes was originally designed for running large-scale web applications.
I/O-intensive workloads represent a class of high-end applications, such as network services, trading applications, and database services, that require high-speed access to hardware resources and often use specific hardware or CPU features to maximize their performance.
Kubernetes, on the other hand, abstracts the application from the hardware and often assumes that resources can be moved or scaled over time to maximize utilization across the many applications that share the same hardware. Running I/O-intensive workloads on Kubernetes has therefore been a challenge.
Intel added hardware acceleration drivers to Kubernetes that enable multi-NIC setups, CPU pinning, and encryption offloading. These extensions paved the way for running I/O-intensive workloads on Kubernetes.
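As a minimal sketch of what this looks like in practice (the device-plugin resource name and image below are assumptions, not part of the session material): CPU pinning on Kubernetes is typically achieved by running the kubelet with the static CPU Manager policy and giving the pod Guaranteed QoS with integer CPU requests, while an additional NIC can be requested as an extended resource advertised by an SR-IOV device plugin:

```yaml
# Assumes a node whose kubelet runs with --cpu-manager-policy=static,
# and an SR-IOV device plugin advertising "intel.com/sriov_net"
# (hypothetical resource name; depends on the plugin's configuration).
apiVersion: v1
kind: Pod
metadata:
  name: io-intensive-app
spec:
  containers:
  - name: app
    image: example/io-app:latest      # placeholder image
    resources:
      requests:
        cpu: "4"                      # integer CPUs, requests == limits
        memory: 8Gi                   #   => Guaranteed QoS => exclusive (pinned) CPUs
        intel.com/sriov_net: "1"      # one SR-IOV virtual function (assumed name)
      limits:
        cpu: "4"
        memory: 8Gi
        intel.com/sriov_net: "1"
```

Because the CPU request is a whole number and requests equal limits, the CPU Manager grants the container exclusive cores instead of scheduling it on the shared pool.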
In this session, we will demonstrate how to run Kubernetes optimized for I/O-intensive workloads. In addition, we will use placement policies to deploy applications across multi-site Kubernetes deployments spanning private and public clouds.
Running I/O intensive workloads on Kubernetes
Nati is the Founder and CTO of Cloudify and a thought leader in fields ranging from Cloud Computing to Big Data Technologies.
Shalom was recognized as a Top Cloud Computing Blogger for CIOs by The CIO Magazine, and his blog was listed as an excellent blog for *technical founders* by Y Combinator. He is also the founder of IGTCloud and a frequent presenter at industry conferences around the world.