September 3, 2009, 1:40 pm
The big news coming out of Red Hat for this year’s Red Hat Summit is, at first blush, a little anticlimactic. The release of Red Hat Enterprise Linux 5.4 seems like just another point release for an already stable and successful operating system. A few more bells and whistles, but not much to get users excited.
But dig a little deeper into the technology coming out in this release, and the strategy behind it, and you will find a narrative that describes how open source will be a leader in the growing cloud arena. The RHEL 5.4 announcement is just the beginning of the story.
There are a lot of new improvements in RHEL 5.4: SystemTap, Generic Receive Offload, and a preview implementation of the malloc memory allocation library, to name a few. But the one Red Hat has highlighted in all of its talk about the release is the inclusion of the KVM virtualization toolset. On the surface, this seems like hype. After all, RHEL has had virtualization in the form of Xen since the release of RHEL 5. What’s the big deal?
First, this release is the first in a series scheduled to come out in 2009 that will center on virtualization, the key being the upcoming Red Hat Enterprise Virtualization (RHEV) Hypervisor, which will have the RHEL 5.4 codebase underneath. The other offerings, RHEV Manager for Desktops with fully integrated VDI management and RHEV Manager for Servers, will round out Red Hat’s virtualization lineup and introduce a way for customers to pick and choose how they want to virtualize and manage their application stacks.
It’s important to pause here and answer a question you might be thinking: if Red Hat’s so gung-ho on virtualization, why didn’t they just wait and release RHEV at the same time as RHEL 5.4? In fact, this was a question at Tuesday’s press conference with Red Hat executives. Here’s how Brian Stevens, Red Hat’s CTO, explained it: releasing RHEL 5.4 now gives customers who are interested in developing apps for RHEV a head start on that development. Having identical codebases in the two products makes that easy.
If this were just about Red Hat releasing a whole bunch of virtual platforms in the near future, the story might end there, leaving us with the vaguely satisfied/stunned feeling one might have after seeing the latest Hollywood summer blockbuster.
But here’s the second part of the story: with some existing technologies and a little bit of new tech thrown in, Red Hat is hoping to help cloud customers easily migrate their virtual machines to any cloud: public, private, or anything in between. I spoke with Stevens prior to his Wednesday keynote, and learned a little more about how this aspect of RHEL 5.4 and its upcoming descendants, which Red Hat is calling the “hybrid cloud,” will work.
Earlier, in the press conference, I asked a question about application development on the Xen and KVM virtualization platforms. The application layer is transparent, Stevens explained to me, so developing for one is no different from developing for the other. But, he added, there are still important differences between virtual machines, despite the advent of the Open Virtualization Format (OVF) standard that is supposed to remove compatibility obstacles.
According to Stevens, the OVF specification provides a format for packaging virtual machines and applications for deployment across different virtualization platforms, but he likens an OVF package to an overnight envelope sent to a recipient: each envelope from the service looks the same on the outside, but the contents inside can be far different. Stevens is skeptical that OVF offers a real solution for virtual image compatibility.
Since RHEL 5.4 ships both KVM and Xen, RHEL customers will run into this compatibility problem if they try to move images from one hypervisor to the other. Red Hat’s approach to solving it makes use of the libguestfs tool, which is part of the hybrid cloud solution. libguestfs will make it possible to convert a guest OS from one flavor to another, such as KVM to ESX or Xen to KVM.
Moreover, this conversion can be done on-disk, meaning you don’t have to boot the guest OS to make configuration changes. Conversion is done on the fly and, Stevens added, you can also read and edit configuration files within the guest OS, again, without ever running it.
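To give a feel for what that offline access looks like, here’s a minimal sketch using the libguestfs Python bindings. The disk image path and file names are hypothetical, and the exact call names shown reflect the bindings as I know them, so treat this as an illustration rather than a recipe:

```python
# A minimal sketch of offline guest editing with the libguestfs Python
# bindings. The image path and file names below are hypothetical.
import guestfs

g = guestfs.GuestFS()
g.add_drive("/var/lib/libvirt/images/guest.img")
g.launch()  # starts the small libguestfs appliance, not the guest itself

# Inspect the image to find the guest's root filesystem, then mount it.
roots = g.inspect_os()
g.mount(roots[0], "/")

# Read a config file from the (non-running) guest...
print(g.cat("/etc/sysconfig/network"))

# ...and write a change back, all without ever booting the guest OS.
g.write("/etc/motd", "Reconfigured offline.\n")

g.umount_all()
g.sync()
g.close()
```

The point is that everything happens against the guest’s disk image; the guest OS never runs during the edit.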
Another part of the hybrid cloud solution is a grid-level management tool known as MRG Grid, which Red Hat is developing in partnership with the University of Wisconsin-Madison through the Condor Project. In a video demonstration of the MRG (Messaging, Realtime, Grid) Grid tool, the physical resources of a cluster of machines are easily managed by setting the priority of any job using the cluster.
In the demo, four animated movies share resources for rendering (likely a nod to Red Hat’s marquee customer, DreamWorks Animation), but when one movie needs more rendering time, the Condor tool lets the manager dynamically set the percentage of resources it needs, automatically balancing the load with the other jobs. Upon applying the new priorities, the resources the rush job needs are immediately released from the other three movies, and the express job seamlessly rolls in to take over those resources until the task is completed.
This dynamic management will let cloud users allocate jobs across any set of resources to which they have access, even public clouds, provided the access rights and configuration are already set up.
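The rebalancing behavior in the demo is easy to picture as code. Here’s a toy sketch, not MRG Grid’s or Condor’s actual API, just my own illustration of the idea: pin one job to a bigger share of the cluster, and the remaining jobs scale proportionally into whatever capacity is left.

```python
# A toy sketch (not MRG Grid's actual API) of proportional rebalancing:
# one job is pinned to a new share of the cluster, and the other jobs'
# shares are scaled down proportionally into the remaining capacity.
def rebalance(shares, job, new_share):
    """shares: {job_name: fraction of cluster}; fractions sum to 1.0."""
    others = {j: s for j, s in shares.items() if j != job}
    remaining = 1.0 - new_share
    total_others = sum(others.values())
    rebalanced = {j: s / total_others * remaining for j, s in others.items()}
    rebalanced[job] = new_share
    return rebalanced

# Four rendering jobs share the cluster evenly; one becomes a rush job.
shares = {"movie_a": 0.25, "movie_b": 0.25, "movie_c": 0.25, "movie_d": 0.25}
print(rebalance(shares, "movie_a", 0.70))
# -> movie_a gets 70%; the other three split the remaining 30% evenly.
```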
Another part of the hybrid cloud, Deltacloud, will make such sharing much easier. Currently, every cloud, public or private, has its own APIs and infrastructure: Amazon EC2, Rackspace, VMware… so the same VM image that runs on one of these services cannot immediately run on another, because the tools and options differ too much from cloud to cloud.
This is all part of the plan, Stevens indicated, since cloud providers are now scrambling to find ways to lock customers into their cloud services. It’s the typical frontier-technology scenario: new territory opens up, and vendors make land grabs to try to keep customers for as long as they can. Stevens was quick to point out that while VMware announced its cloud initiative, vCloud Express, this week at VMworld, he has yet to find any code or any community of cloud developers behind it. Others have pointed out that VMware’s promises of workload portability only hold if both customer and service provider use vSphere.
Deltacloud will help head off vendor lock-in by creating an abstraction layer: cloud application developers write one app (or stack of apps) against the Deltacloud layer, which in turn runs smoothly on whatever cloud service sits underneath, be it EC2, RHEV-M, or any other cloud API and infrastructure.
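As a rough illustration of what such an abstraction layer buys you, here is a hypothetical Python sketch. The real Deltacloud exposes a REST API, and every class and method name below is invented purely to show the adapter pattern the idea rests on:

```python
# A hypothetical sketch of the abstraction-layer idea behind Deltacloud.
# The real Deltacloud exposes a REST API; these class and method names
# are invented purely to illustrate the adapter pattern it relies on.
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """One common interface; each cloud provider gets its own adapter."""
    @abstractmethod
    def start_instance(self, image_id: str) -> str: ...
    @abstractmethod
    def stop_instance(self, instance_id: str) -> None: ...

class EC2Driver(CloudDriver):
    def start_instance(self, image_id):
        # Would call Amazon's EC2 API here.
        return f"ec2-instance-for-{image_id}"
    def stop_instance(self, instance_id):
        pass  # Would call the corresponding EC2 terminate call.

class RHEVDriver(CloudDriver):
    def start_instance(self, image_id):
        # Would call the RHEV Manager API here.
        return f"rhev-vm-for-{image_id}"
    def stop_instance(self, instance_id):
        pass

def launch(driver: CloudDriver, image_id: str) -> str:
    # Application code targets the abstraction, never a specific cloud,
    # so switching providers means swapping drivers, not rewriting apps.
    return driver.start_instance(image_id)

print(launch(EC2Driver(), "my-appliance"))
print(launch(RHEVDriver(), "my-appliance"))
```

The design choice is the whole story: the app only ever talks to the abstraction, so no single provider’s API can hold it hostage.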
Taken together, these chapters of the hybrid cloud story mean that vendor lock-in can be avoided, since the open-source Deltacloud will have code to download and a pre-existing community infrastructure in place.
More importantly, this may mark the first time that an open source initiative has taken an early lead in a major area of technology. In every other area, operating systems, middleware, SOA, and virtualization, open source has done very well, but only after proprietary technologies had started down the path. Now, by jumping in and using the hybrid cloud tools to remove the threat of cloud-vendor lock-in, Red Hat may have set the standards for all future cloud development. And, it’s nice to say, those standards will be truly open.
And if that’s not a happily ever after, I don’t know what is.