High-availability (HA) Linux is increasingly being used to help companies meet market demands for fast-paced R&D and shorter product cycles. The medical industry, for example, is using server clusters to model the effect of drugs, conduct gene sequencing and develop personalized medication. Large telcos, banks and stock exchanges, ISPs and government agencies also rely on HA Linux to ensure minimal service disruptions in their mission-critical workloads.
There’s hardly a business out there today that can tolerate user-facing downtime. Outages not only mean the immediate loss of revenue, but damage credibility and result in future losses as well.
In short, just about any systems administrator, regardless of industry, can benefit from using a high availability architecture. The transition from legacy systems to HA Linux can be daunting for IT architects, however. With such business-critical systems it’s important to get it right, while also being efficient and cost-effective.
Choosing the right servers with advanced RAS (reliability, availability, serviceability) features, such as an x86 processor-based architecture, and an operating system with high-availability capabilities, such as Linux, is the first step toward implementing a new HA architecture, says Intel engineer Anil Agrawal.
He also recommends seeking knowledge and training, such as that offered in the Linux Foundation’s new High Availability Linux Architecture corporate training curriculum. Here, Linux Foundation course instructor Florian Haas explains why an HA Linux stack is worthwhile, what you’ll learn in the course and what skill level is needed to get the most out of the experience.
What are the benefits of high availability Linux over other platforms?
The high availability stack on Linux is not only on par with its commercial competitors but exceeds them in features and reliability in many areas. It has seen heavy backing from industry heavyweights and some pretty significant adoption recently. SAP recently embraced the SUSE Linux high availability extension as its default HA stack on Linux, for example, which is a testament to the reliability of the stack.
Could you give an example of how high availability Linux is an improvement over alternatives?
The Linux HA stack is completely storage agnostic, for example. It can plug into a system based on a storage area network (SAN), software-defined storage (SDS) or application-based replication. Many commercial or closed-source HA stacks, by contrast, tend to be married to a specific idea of how you should build your storage. Other recent enterprise features include site-to-site failover and geographic redundancy.
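One common way to provide that replication layer in an all-open-source stack is DRBD, which mirrors a block device between two nodes independently of whatever runs on top of it. The resource definition below is a minimal sketch; the hostnames (alpha, bravo), devices and addresses are illustrative placeholders, not values from the article:

```
# Hypothetical /etc/drbd.d/r0.res -- hostnames, devices and
# addresses are illustrative placeholders
resource r0 {
  device    /dev/drbd0;    # the replicated block device exposed to the OS
  disk      /dev/sdb1;     # the backing disk on each node
  meta-disk internal;
  on alpha {
    address 10.0.0.1:7789;
  }
  on bravo {
    address 10.0.0.2:7789;
  }
}
```

Because replication happens at the block layer, the database or filesystem above needs no awareness of it — which is precisely what makes the stack storage agnostic.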
Red Hat Enterprise Linux 6, for example, is optimized for Intel Xeon processor-based servers and includes major enhancements in memory management and scheduling that help to provide more efficient performance on large, multi-processor systems. Its NUMA support enables each core to make optimal use of fast, nearby memory to minimize latencies, while also supporting efficient memory sharing among all cores. And its use of control groups ensures that multiple applications or processes running on the same physical server all receive the CPU, memory, network, and storage resources they need. Built-in features also include policy-based resource management, high-availability clustering, advanced error management, and predictive failure analysis — costly add-ons in a proprietary UNIX environment.
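On RHEL 6, control-group limits of this kind can be declared with libcgroup rather than set by hand at each boot. The fragment below is an illustrative sketch of an /etc/cgconfig.conf entry; the group name and the specific values are assumptions, not figures from the article:

```
# Hypothetical /etc/cgconfig.conf entry -- group name and limits
# are illustrative
group dbworkload {
  cpu {
    cpu.shares = 512;                      # relative CPU weight vs. other groups
  }
  memory {
    memory.limit_in_bytes = 4294967296;    # hard cap: 4 GiB
  }
}
```

Processes can then be steered into the group automatically via /etc/cgrules.conf, so a runaway application cannot starve its neighbors on the same physical server.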
In terms of quantifiable metrics, an open source stack gives you far better hooks and gauges to monitor the system.
HA Linux is 100 percent open source from storage through cluster management, all the way to the application. We can build applications that are highly available over the long run on 100 percent open source software.
What does the course cover?
You’ll get a general familiarization with the concept of high availability and the tools that the Linux high-availability stack offers to achieve and deploy HA systems. Then it covers several common high-availability scenarios such as an HA relational database and virtualization, site-to-site failover, geographic redundancy, and open source software-defined storage solutions, including both GlusterFS and Ceph. (For an introduction to setting up geographical redundancy in high-availability clusters, watch the Linux Foundation’s free tutorial video. Registration is required.)
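For the HA relational database scenario, for instance, Pacemaker resources are typically described through a shell such as crmsh. The sketch below groups a virtual IP with a PostgreSQL instance so they start together and fail over as a unit; the resource names, the example address and the choice of PostgreSQL are illustrative assumptions:

```
# Hypothetical crm configure snippet -- resource names and the
# example address are placeholders
primitive db-ip ocf:heartbeat:IPaddr2 \
    params ip=192.0.2.10 cidr_netmask=24 \
    op monitor interval=30s
primitive db-pgsql ocf:heartbeat:pgsql \
    op monitor interval=30s timeout=60s
group db-group db-ip db-pgsql
```

Grouping the two resources tells the cluster manager to keep the service address and the database on the same node, which is the essence of the failover scenarios the course walks through.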
It also familiarizes attendees with the automation, monitoring and deployment of high availability environments. The design of high availability systems goes beyond planning and deploying a certain set of servers in a given way; it also includes networking, virtualization, storage, physical redundancy and other topics.
Who should take this course?
It’s aimed at systems administrators with intermediate to advanced Linux sysadmin skills. It’s an expert-level course designed for those with several years of practical experience working on Linux or Unix systems.
The Linux Foundation’s next high availability Linux architecture course will be held Dec. 16-19 online. Enroll now.