With the rise of white box environments, network equipment is being liberated from captive, pre-installed network OSes. You can now choose your bare metal switch hardware and operating system separately, and change either one trivially later. "Surely this creates chaos and a compatibility nightmare," I hear you say. Continue reading
Installing SaltStack can be a pain as there is some initial installation and configuration needed before Salt can actually be used. This post shows how you can install and operate Salt on a Cumulus Linux switch without ever needing to log into the switch directly. The operations shown below can easily be expanded to install many switches at once. Continue reading
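The full recipe is behind the link, but the agentless flavour of this idea can be sketched with salt-ssh, which needs nothing pre-installed on the target beyond SSH access. The switch name, address, and user below are placeholders, and whether the post itself uses salt-ssh or another bootstrap path is an assumption here:

```shell
# /etc/salt/roster -- tell salt-ssh how to reach the switch
# (hypothetical hostname, address, and user)
# leaf01:
#   host: 192.0.2.10
#   user: cumulus
#   sudo: True

# Verify connectivity over plain SSH -- no salt-minion on the switch yet
salt-ssh 'leaf01' test.ping

# Install and start the minion remotely, so the switch becomes a normal
# Salt target, all without an interactive login to the box
salt-ssh 'leaf01' pkg.install salt-minion
salt-ssh 'leaf01' service.start salt-minion
```

Because the roster is just a YAML list of hosts, widening the target from `'leaf01'` to a glob covers many switches at once.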
Read our feature in The Stack about the impact the Open Compute Project is having on data centre innovation:
“Major players are coming together to innovate efficient data centres at a pace faster than ever seen before”
Containerised datacentre – it’s available at all good datacentre trade shows and certainly made a splash pre-2010, but have you actually seen one in the wild? Continue reading
White box from the beginning
First, let us define a white box server. A white box server (sometimes referred to as a beige box) is a machine without a well-known brand name associated with it. White boxes are usually made en masse by Asian original design manufacturers (ODMs) such as Quanta, Wistron, Inventec and Wiwynn. They are also produced by system integrators, who assemble bespoke machines from separately purchased parts.
OK, so where do black boxes come from? In the traditional IT procurement model, enterprise customers buy from original equipment manufacturers (OEMs) such as HPE, Dell and IBM. The OEMs in turn outsource the manufacturing of hardware to the ODMs. It is at this point that you are buying a branded, closed, black box server.
Today we are looking at MaaS from Canonical, the makers of Ubuntu. MaaS, which stands for Metal as a Service, promises to automatically and dynamically provision your servers. It’s the same idea as cloud provisioning, but now with your own bare metal servers. Very exciting indeed, so we thought we would give it a try. Continue reading
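To give a flavour of what "cloud-style provisioning for bare metal" looks like, the MaaS CLI drives the whole machine lifecycle over the API. The profile name, URL, and machine selection below are placeholders, and exact commands vary between MaaS releases, so treat this as a sketch rather than the post's actual steps:

```shell
# Log the CLI into the MaaS API (profile name, URL, and key are placeholders)
maas login admin http://maas.example.com:5240/MAAS/ $API_KEY

# List the machines MaaS has PXE-discovered and commissioned
maas admin machines read

# Allocate a free machine to ourselves, then deploy an OS onto it --
# the same acquire/launch pattern a cloud API gives you, on real tin
maas admin machines allocate
maas admin machine deploy $SYSTEM_ID
```

The point of the exercise is that once machines are commissioned, deploying one is a single API call, just as it would be for a cloud instance.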
Chances are, if you have thousands of servers, you are running some sort of hyperscale environment. But is your monitoring hyperscale-friendly?
In the beginning you might well have had 10 servers, all running business-critical applications. You dutifully monitored everything on them. Well, you monitored dutifully once you had suffered one too many incidents with no monitoring at all.
Then, over time, each new outage brought a new set of checks, and before you knew it your boxes were monitored to the hilt, with a multitude of ways to set the pager off. Your monitoring strategy continues like this as your server farm grows. Before you know it, you have hundreds, maybe even thousands, of machines with very basic monitoring that mostly generates false positives. Continue reading