Streamlining HPC Workloads with Containers

Dustin Kirkland (Canonical)

One might find it ironic that some of the world's fastest supercomputers -- vast clusters capable of trillions of floating-point operations per second -- can take upwards of half an hour to reboot between jobs. While we often talk about the density advantages of containers, in the High Performance Computing world we take the opposite approach: exactly one system container per node, with unrestricted access to all of the host's CPU, memory, disk, I/O, and network. Yet we can still leverage the management characteristics of containers -- security, snapshots, live migration, and instant deployment -- to recycle each node between jobs. In this talk, we'll examine a reference architecture and some best practices for containers in HPC environments.

Dustin Kirkland is Cloud Product Manager at Canonical, part of Canonical's Ubuntu Product and Strategy team. He is responsible for the technical strategy, road map, and life cycle of the Ubuntu Cloud and IoT commercial offerings.
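The abstract doesn't name a specific tool, but Canonical's "system container" terminology suggests LXD. The Python sketch below is a hypothetical illustration (not taken from the talk) of the pattern described: launch one unconstrained container per compute node, then recycle it from a snapshot between jobs instead of rebooting the host. The node name, image, and snapshot name are all assumptions.

```python
# Hypothetical sketch, assuming LXD is installed on the host and the
# "ubuntu:16.04" image remote is reachable. Not the talk's actual tooling.
import subprocess

def run(cmd):
    # Run an lxc CLI command and raise if it fails.
    subprocess.run(cmd, check=True)

def provision_node(name="compute01"):
    # Launch a system container with no limits.cpu / limits.memory set,
    # so it sees all of the host's CPU, memory, disk, I/O, and network.
    run(["lxc", "launch", "ubuntu:16.04", name])
    # Record a "clean" snapshot to roll back to between jobs.
    run(["lxc", "snapshot", name, "clean"])

def recycle_node(name="compute01"):
    # Restore the clean snapshot in seconds rather than rebooting the node.
    run(["lxc", "restore", name, "clean"])

if __name__ == "__main__":
    provision_node()
    recycle_node()
```

Restoring a snapshot takes seconds, which is the point of the talk's contrast with the half-hour reboot cycle of a bare-metal node.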
Length: 24:54
Views: 391 | Likes: 6
Recorded on 2016-09-09 at Container Camp UK