"Ready for prime time" was how Andrew Geissler, a senior software engineer at IBM, described OpenBMC's development at the 2018 Open Compute Project Summit. In short, a lot of work has gone into fixing bugs and the project is now ready for general use. It has already been deployed by multiple companies in datacentres worldwide.
For those who don't know, OpenBMC is a Linux Foundation open-source project written in C++ and Python. Its goal is to produce an open-source baseboard management controller (BMC) that can operate in heterogeneous deployments, ranging from enterprise, HPC and telco to cloud. The official goal, as stated in the project's readme, is "to create a highly extensible framework for BMC software and implement for data-center computer systems." The founding members of the project are Microsoft, Intel, IBM, Google and, of course, Facebook.
BMCs in general are controllers that monitor the state of hardware, typically found on a device's main circuit board. Often implemented as a SoC, they enable monitoring and management of your hardware: health (such as temperatures and fan speeds), event logs, and remote management capabilities. These are essential for today's remotely deployed servers.
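To make the health-monitoring role concrete, here is a minimal Python sketch of the kind of threshold check a BMC runs against its sensors. The sensor names and threshold values are hypothetical illustrations, not taken from any real BMC firmware.

```python
# Minimal sketch of BMC-style sensor health checks.
# Sensor names and thresholds below are hypothetical examples.

def classify(reading, warn, crit):
    """Classify a sensor reading against warning/critical thresholds."""
    if reading >= crit:
        return "critical"
    if reading >= warn:
        return "warning"
    return "ok"

# Simulated readings a BMC might poll from on-board sensors:
# name -> (current reading, warning threshold, critical threshold)
sensors = {
    "cpu0_temp_c": (62.0, 75.0, 90.0),
    "inlet_temp_c": (41.0, 40.0, 55.0),
}

event_log = []
for name, (reading, warn, crit) in sensors.items():
    state = classify(reading, warn, crit)
    if state != "ok":
        # A real BMC would record this in its System Event Log (SEL).
        event_log.append((name, state, reading))

print(event_log)  # the inlet temperature has crossed its warning threshold
```

A real BMC does the same loop continuously in firmware and exposes the resulting events over its remote-management interfaces.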
OpenBMC is welcome news because, until now, you were essentially locked into your hardware vendor of choice, hoping they had created good enough BMC firmware. Either way, you probably had to maintain a few different variants. At the OCP Summit it was clear that if you want to run Project Olympus hardware, you need OpenBMC to run the board: Microsoft currently licenses firmware from Intel that it cannot open source along with the hardware.
On a customer level, some of the functions that have recently received some love are:
- Moving from YAML to JSON
- IPMI – now 2.0 compliant
- Full DCMI support
- Web interface
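As an aside on the YAML-to-JSON move above, the snippet below round-trips a hypothetical sensor configuration through Python's standard `json` module; the field names are illustrative, not OpenBMC's actual schema.

```python
import json

# Hypothetical sensor configuration of the kind a BMC might load at boot.
# Field names are illustrative, not OpenBMC's actual schema.
sensor_config = {
    "name": "cpu0_temp",
    "type": "temperature",
    "unit": "degrees_c",
    "thresholds": {"warning_high": 75, "critical_high": 90},
}

# One practical upside of JSON for machine-read configuration: it is
# parseable with the Python standard library alone, whereas YAML needs
# a third-party parser.
text = json.dumps(sensor_config, indent=2)
assert json.loads(text) == sensor_config  # round-trips without loss
```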
The coming-soon list is as follows:
- VGA mirroring
- KVM over IP
- Advanced user management – e.g. LDAP
- Remote media
- SNMP/telemetry
- In-band firmware update
- SELinux/security enhancements
Yesterday Hyperscale IT was featured on the BBC news website, giving input on running legacy parts and systems:
The long legacy of the floppy disk
Although legacy systems can still function well in certain circumstances, it's easy to find yourself in a situation where parts become hard to find and upkeep becomes expensive.
Open compute meets low CO2 plastics
At the Open Compute Project (OCP) Summit last month we spoke to Simon Huang, general manager of JPSeco (Jean Parker & Sons Corp). JPSeco have been working on a Natural Fiber Reinforced Plastic (NFRP) which has very low CO2 emissions compared to the traditional plastics used in servers today. Continue reading
White box from the beginning
First, let us define a white box server. A white box server (sometimes referred to as a beige box) is a machine without a well-known brand name associated with it. White boxes are usually made en masse by Asian original design manufacturers (ODMs) such as Quanta, Wistron, Inventec and Wiwynn. They are also produced by system integrators who assemble bespoke systems from separately purchased parts.
OK, so where do black boxes come from? In the traditional IT procurement model, enterprise customers buy from original equipment manufacturers (OEMs) such as HPE, Dell and IBM. The OEMs in turn outsource the manufacturing of hardware to the ODMs. It is at this point that you are buying a branded, closed, black box server.
Hyperscale IT will be attending the Open Compute Summit in California next week. Held on 9-10 March at the San Jose Convention Center, this will be the seventh such Summit. Set your clocks to PST! Continue reading
Recently we had the need to install a number of Ubuntu boxes for a client. Whilst our usual home-grown scripts with Kickstart and Salt work just fine, they are not the most intuitive to hand over or maintain. So we used Canonical's MaaS, or Metal as a Service. We are talking tens of machines, not thousands, so we created a single-node MaaS installation. This gives flexibility for future deployments, as MaaS can install a number of different operating systems.
This is a walk-through of what turned out to be a very quick and easy install. Now we can install servers en masse – with MaaS. Continue reading

Today we are looking at MaaS from Canonical, the makers of Ubuntu. MaaS, which stands for Metal as a Service, promises to automatically and dynamically provision your servers. It's the same idea as cloud provisioning, but now with your own bare-metal servers. Very exciting indeed, so we thought we would give it a try. Continue reading
Are you still running down to the datacentre to install your Linux machines with a CD? Is it still taking you hours to install a new machine? You can’t be a hyperscaler if you don’t automate your server installs. Continue reading