There was interesting discussion during Day 1 at the Linux Foundation Collaboration Summit about virtualization and cloud computing as it relates to Linux. Former Red Hat executive and rPath Founder and CTO Erik Troan took a few minutes to share his perspective on how Linux is supporting virtual computing environments and cloud computing initiatives.
rPath recently joined The Linux Foundation. Can you tell us about how you’re using Linux today to advance your business?
Troan: Linux has always been an integral part of rPath’s focus as a company. We provide automation solutions to help deploy and manage large numbers of Linux systems, and we’re being used on deployments in the range of 15,000 to 20,000 Linux servers. The success Linux has had in scale-out infrastructures has created new management costs that we work to reduce.
You work with system administrators every day. How are they using Linux to support new cloud computing initiatives at their companies?
Troan: Linux is very popular in cloud initiatives. It functions very well in a
headless environment, and the licensing model means that customers don’t
have to count how many machines are running. Commercial licenses can be
very hard to use in a dynamic cloud environment. If you have twenty
machines running for three days, five hundred for twenty minutes, and then
you have ten running for the rest of the month, how many Windows licenses
do you need? How do you count those to make sure you stay within your
license bounds? These questions are hard for commercial vendors to answer,
and “just use Linux” has become a very simple and fast way to get cloud
projects up-and-running.
Your company has said that it sees increasing Linux deployments to support virtual computing environments. What’s driving this?
Troan: Virtual environments are about two things: cost and management.
The first phase of virtualization was all about reducing the number of physical boxes a company had to purchase — in other words, server consolidation. Not only did this mean buying fewer machines, it dramatically reduced expenditures
on related expenses like floor space, power and cooling. The cumulative savings were so great that virtualization delivered a positive return very, very quickly.
Once consolidation was underway, the management benefits of virtualization
became apparent. Running images can be migrated off a piece of hardware,
allowing zero-downtime maintenance and replacement. Systems can be
snapshotted or suspended, freeing up RAM while preserving their state. These
features are a little harder to benefit from on day one, but they are an
even larger financial benefit in the medium term.
Linux has been part of this in a couple of ways. First of all, significant
numbers of compute workloads in enterprises run on Linux, and those workloads are being moved into virtual environments rapidly. Second, Linux itself is being used as a virtualization platform. Amazon uses Xen for EC2, which
is by far the largest virtualized infrastructure in existence, and interest
in KVM has started to move into the prototyping phase. Linux’s ability to deliver stable virtualization at a low price point is very interesting for corporate users.
It’s also worth mentioning that the licensing challenges commercial software
has in cloud environments also apply in virtual environments. It can be hard to know how much of a product is running, and nobody likes to count things.
With budgets and headcount down, how are administrators continuing to scale system counts?
Troan: The answer, very simply, is automation. In order to handle scale you have
to automate everything you possibly can. Mark Burgess, who developed cfengine, said in USENIX’s ;login: magazine, “Always let your tool do the work.” The only way we can cope with complexity is by having tools do the heavy lifting. In
deployment, provisioning, and configuration, automated tools are how you manage more and more boxes without adding people. Large organizations
like Google and Yahoo! have been doing this for years using home-grown tools.
Now that even midsized companies have thousands of servers, we’re seeing a lot
of interest in off-the-shelf solutions for automation across the server lifecycle.
Automation not only reduces the time it takes to deploy and manage servers; it
also reduces the errors that occur when things are being done by hand. Putting
systems into place that do things the right way every time is the only way to grow server counts.
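The property Troan describes — doing things the right way every time, so re-running a tool never makes things worse — is what configuration-management tools like cfengine call convergence, or idempotence. A minimal sketch of the idea in Python (the `ensure_line` helper and the sshd option are illustrative, not from the interview):

```python
import os
import tempfile

def ensure_line(path, line):
    """Idempotently ensure `line` is present in the file at `path`.

    Running this once or a hundred times yields the same file, which
    is what makes re-running automation safe at scale: the tool checks
    the current state before acting instead of blindly appending.
    """
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line not in existing:
        with open(path, "a") as f:
            f.write(line + "\n")

# Usage: applying the same "configuration" twice changes nothing.
cfg = os.path.join(tempfile.mkdtemp(), "sshd_config")
ensure_line(cfg, "PermitRootLogin no")
ensure_line(cfg, "PermitRootLogin no")  # second run is a no-op
```

Home-grown scripts at large shops and off-the-shelf tools alike follow this check-then-act pattern; it is the difference between a tool you can run on thousands of servers and one you can only run once.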