When a Facebook user ‘likes’ something, adds a friend or uploads a photo gallery, they don’t necessarily think about what goes on at the back end. That ever-mounting pile of information, collected each second from millions of users, presents a significant challenge for efficient data storage and management – not to mention a potentially daunting financial and environmental cost.
To address these issues, Facebook engineers have designed their own custom servers and datacenters that cut costs by 24 percent and energy use by 38 percent compared with traditional commercially available infrastructure, said Amir Michael, leader of Facebook’s storage hardware team. And the company believes even more savings are possible through the collaborative development process, he said.
With the Open Compute Project, Facebook is now sharing its design specifications and seeking input and ideas from the engineering community in an effort to boost those savings.
“It’s time we stop thinking about this type of infrastructure as proprietary,” Michael said in his keynote talk Tuesday at the Linux Collaboration Summit in San Francisco. “Let’s build this together.”
Open source hardware presents some unique challenges compared with open source software because it requires a factory for product development, Michael said. But Open Compute models its process on open source software development, maintaining a mailing list and holding developer summits. An incubation committee forms projects and creates a charter for each; an advisory board member then sponsors the project to make sure there’s momentum and a deliverable behind it.
“It’s not just about ideas… we actually wanted to build things and take designs to actual hardware,” Michael said.
Now about a year old, Open Compute has an impressive list of contributors, including Dell, Mellanox Technologies and Cloudera. But the project is looking for more partners to advance its work.
The project’s top priority is increased efficiency, in part through reduced server complexity. The features that differentiate a Dell server from an HP server “aren’t really innovation,” Michael said. Stripping out those peripheral features improves operational efficiency.
Scalability is also important when weighing a project’s potential: Open Compute aims to build hardware for large-scale datacenter deployments.
The best way to get involved, Michael said, is to become a member and join one of Open Compute’s six working groups focused on storage, interoperability, systems management, datacenter design, motherboards or power infrastructure.
The Open Compute Foundation is structured so that no single vendor has outsized influence on the direction of the project, and no member dues are collected, he said. Instead, the Open Compute Summit serves as a fundraiser for its efforts. Interested engineers are encouraged to attend the upcoming summit, set for May 2-3 in San Antonio.