Event Sponsors

[Sponsor logos: Linnovate, IBM, Zadara, JFrog, Cloudify, OpsSchool]



Kubernetes was originally targeted at running large-scale web applications.
I/O-intensive workloads represent a class of high-end applications, such as network services, trading applications, and database services, that require high-speed access to hardware resources and often use specific hardware or CPU features to maximize their performance.


Kubernetes, on the other hand, abstracts the application from the hardware and often assumes that resources can be moved or scaled over time to maximize utilization across the many applications that share the same hardware. Running I/O-intensive workloads on Kubernetes has therefore been a challenge.


Intel added hardware acceleration drivers to Kubernetes that enable multi-NIC setups, CPU pinning, and encryption offloading. These extensions paved the way for running I/O-intensive workloads on Kubernetes.
In this session, we will demonstrate how to run Kubernetes optimized for I/O-intensive workloads. In addition, we will use placement policies to deploy applications across multi-site Kubernetes deployments on private and public clouds.
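For illustration only (not taken from the session materials), here is a minimal Go sketch of how CPU pinning is commonly requested in Kubernetes: a container whose integer CPU requests equal its limits is placed in the Guaranteed QoS class, and on nodes where the kubelet runs the static CPU Manager policy those CPUs are pinned to exclusive cores. The pod name, image, and resource values below are hypothetical.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // ioIntensivePod builds a Pod spec whose container requests whole CPUs
    // with requests == limits. Under the kubelet's static CPU Manager policy
    // (--cpu-manager-policy=static), such Guaranteed-QoS containers receive
    // exclusive cores, which is what latency-sensitive I/O workloads need.
    func ioIntensivePod() *corev1.Pod {
    	cpus := resource.MustParse("4")  // whole CPUs, required for pinning
    	mem := resource.MustParse("8Gi")
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "io-intensive-app"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "trading-engine", // hypothetical workload
    				Image: "example.com/trading-engine:latest",
    				Resources: corev1.ResourceRequirements{
    					Requests: corev1.ResourceList{
    						corev1.ResourceCPU:    cpus,
    						corev1.ResourceMemory: mem,
    					},
    					// Limits must equal requests for Guaranteed QoS.
    					Limits: corev1.ResourceList{
    						corev1.ResourceCPU:    cpus,
    						corev1.ResourceMemory: mem,
    					},
    				},
    			}},
    		},
    	}
    }

    func main() {
    	fmt.Printf("%+v\n", ioIntensivePod())
    }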

Nati Shalom

Running I/O intensive workloads on Kubernetes

Bio

Nati is the Founder and CTO of Cloudify and a thought leader in fields ranging from Cloud Computing to Big Data Technologies.


Shalom was recognized as a Top Cloud Computing Blogger for CIOs by The CIO Magazine, and his blog was listed as an excellent blog for technical founders by YCombinator. He is also the founder of IGTCloud and a frequent presenter at industry conferences around the world.

11:30-12:00 | Track 2
