A lot of people talk about commodity servers, especially where and when to use them, and many have good reasons one way or the other. However, not everyone has a good definition of what makes a commodity server (or platform), and that can lead to confusion.
Today on RSTS11, I’m going to look at contexts where a commodity server or platform is important, how I define the concept, and what you might find when getting into a discussion about commodity servers.
What’s a commodity server?
My definition of a commodity server is a piece of fairly standard hardware that can be purchased at retail, onto which you can install whatever software you choose.
I further define ‘fairly standard’ to mean an industry-standard platform that does not require custom coding to get an operating system running on it. I define ‘purchased at retail’ to mean that you can call, email, or visit the website of a vendor and acquire the hardware without a pre-existing contract or design process (as opposed to OEM/ODM arrangements).
Some examples of commodity servers would be any server you can order from Supermicro (or its integrators), HP, Dell, Cisco, Lenovo, or various other mainstream vendors. If there’s a “Buy now” button next to it on their website, and you can order it with a credit card right then and there, it’s probably commodity. These are sometimes called “Industry Standard Servers,” although there may be some distinctions between the two concepts. And in theory, blade servers could count (since they don’t require anything custom other than the chassis), but I generally don’t think of them in this category.
The goal for a platform that’s based on commodity servers is that you’re not tied to a given brand of server, or seller of servers, for acquiring your hardware. If a vendor fails you (e.g., by trying to force you to buy service contracts or extra licensing for basic functionality and maintenance), you can go to another vendor for the servers, and as long as you specify components (CPU, memory, disk, network) properly, you’re good to go.
Where would I use a commodity server?
One place commodity servers are often discussed is in Hadoop clusters. Hadoop was designed, on one level, to be the RAID of compute farms. You use inexpensive, homogeneous servers that can be easily replaced, with software that can handle losing a few servers at a time.
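As a concrete illustration of that design, HDFS (Hadoop’s storage layer) keeps multiple copies of every block spread across datanodes, so the cluster shrugs off the loss of a server or two and re-replicates in the background. A minimal sketch of the relevant hdfs-site.xml fragment might look like this (`dfs.replication` is a standard Hadoop setting; the value shown is just the common default, not a recommendation for any particular cluster):

```xml
<!-- hdfs-site.xml: replication sketch (illustrative value) -->
<configuration>
  <!-- Keep three copies of each block. Losing one or two datanodes
       doesn't lose data; HDFS re-replicates the missing copies onto
       the surviving commodity nodes automatically. -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```

The point isn’t the specific number; it’s that the redundancy lives in software, which is exactly why the individual servers underneath can be interchangeable.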
There is a not-uncommon misgiving about Hadoop’s node model; namely, that using branded servers is somehow counter to the nature of Hadoop. The impression some folks get is that since Hadoop doesn’t care about any given server (at least for datanodes and tasktrackers), you have to go with the cheapest possible hardware, possibly even building it yourself. Those folks believe that spending money to have someone else build the servers for you, or going with a brand name server, is a bad thing.
I see their perspective, in a sense. If you have a team of people who can maintain your servers at that level, rebuild them when they fail, and keep track of component versioning, compatibility, and firmware levels, that’s great. Larger environments (Yahoo, Google, etc.) may have this, but your typical environment, with fewer ops people than Google has chefs, can’t sustain that level of effort.
On the other hand, if you pick a brand of servers, you’re more likely to have consistent configurations, support mechanisms, warranties, remote management, firmware updates, and so forth.
Mind you, some vendors do change hardware or firmware in mid-release without telling anyone (even Apple’s done it a few times), and no vendor has perfect support or perfect firmware.
But the advantages to focusing on what your team can do (deploying and supporting applications and platforms, satisfying your users), and letting others do the stuff that’s not in your core (building servers, stocking hard drives and memory by the ton, making bezels), should be pretty obvious if you can’t allocate a full team to the latter.
Where else would I use a commodity server?
#2: Storage platforms.
A Twitter friend who works for a VAR was asking about scalable storage platforms that run on commodity hardware. Think of a mixture of Nexenta, Nutanix, and VMware Virtual SAN (a.k.a. vSAN), but not any of those particular products, for reasons that may become evident (and they don’t have to, since they’re his requirements, not ours, of course). One of the first recommendations was Nutanix, “because it runs on commodity hardware.”
There are a lot of virtualization and storage vendors whose platforms are built on commodity hardware. However, I don’t consider them commodity platforms unless I can choose the server they run on (within reason, of course… I’ll keep my Macintosh Quadras in storage for this project).
I completely understand why companies use, for example, Dell PowerEdge R-series servers (or Intel reference chassis back in the day). You can buy them in bulk; you don’t have to do interoperability testing for standard hardware and firmware; parts and maintenance are easy to arrange; and they tend to have a reasonable shelf life. In case of an emergency, you can buy one that has the OEM’s bezel rather than yours. And you can test your solution (and iterate on your logo and company name a few times) before investing in your own bezels anyway.
And if you’re a VAR wanting to deploy a solution on the hardware your company has a particularly good relationship with, or a warehouse full of off-lease hardware from, or just a company your client prefers, the model that Nutanix or Nimble Storage or Pivot3 uses wouldn’t work for you. That doesn’t mean their model (which is far from uncommon in the rackmount appliance world) is bad or wrong, it’s just not a fit in this case.
Speaking of cases, one of the concerns that came up was needing short chassis. Sometimes you have a customer needing short cases (think Rackable half-depth, for example), or maybe a desktop-looking platform for SOHO/ROBO/POHO.
So we’re left looking for something that is readily available like Nexenta, serves out multiprotocol storage like Nutanix, scales out like Nutanix and vSAN, but isn’t tied to VMware like vSAN. In this context, the definition works like Michelangelo’s method: simply chip away anything that doesn’t look like our scalable platform on our choice of hardware.
So where do we go from here?
The conversation on Twitter led us toward Maxta, and prompted Nutanix to ask what form factor my friend was looking to meet. While it’s outside the scope of this post, if you have other suggestions for fellow readers of RSTS11, feel free to suggest them below.
If you have thoughts on commodity servers, or questions about anything up there, feel free to chime in as well.
Disclaimer (I love these things): I have friends and acquaintances at most of the companies mentioned above. However, my paraphrasings and overgeneralizations should be taken in context, and not as representing any official positions or standards of any of them.
And today’s pithy tweet: