rsts11: Building my compact VMware server at home

About a year ago I bought a homebuilt Intel Core i7 (1st generation) desktop from a friend to run VMware ESXi on. He had gone to the trouble of assembling the system with a beautiful Gigabyte motherboard and getting ESXi 4.1 running on it, and I got a good deal on the system with 6GB of RAM and a 2TB hard drive.

I upgraded to 12GB, then to 24GB, but never put it into use.

Two months ago, I started it up and ran some computationally intensive software on it and discovered it was munching 320W. And it’s a mid-tower size case. Somewhat unwieldy for an apartment with a few other computers already running, and a significant other who doesn’t appreciate a living room that resembles a small colo.

It gets… smaller…

About that time, I think it was Kendrick Coleman who mentioned a new Shuttle barebones XPC system, the SH67H3, which in the typical XPC form factor supported a second-generation i7 processor and 32GB of RAM across four slots of DDR3. The problem was threefold.

1) Shuttle on the VMware HCL? Unlikely.

1a) Onboard LAN and SATA controllers supported? Almost as unlikely.

2) 8GB DIMMs were expensive. And how could I in clear conscience run a system capable of 32GB with just 16GB of RAM?

3) Have you seen my holiday credit card bill?

So I was willing to risk 1a, live with 1 (as I’m not buying support or expecting it), and wait out 2 until memory prices came down.

Once 3 was resolved, I emptied my wallet into the cash register at Central Computers and bought the SH67H3 barebones XPC and an i7-2600S (low-power) processor. I had a pair of 2GB DDR3 DIMMs to use until I could upgrade, so I went about installing. I hung a SATA DVD drive off the system and installed ESXi 5 to a flash drive, and all went well.

Well, not quite.

Turned out one of the two DIMMs was bad, keeping the Shuttle from taking off, so to speak. The monitor would sync briefly and then drop out: no beeps, no other signs of life. I tried one DIMM and it worked; tried the other and it didn’t. Swapping the DIMM slots didn’t help. So I booted with one DIMM, 2GB, the minimum to run the ESXi installer.

No dice.

Turns out system reserved memory and/or shared video RAM managed to pull me under 2GB, and the installer quit on me.

So I realized I had six 4GB DIMMs in the old VMware box, and I pulled two to get the Shuttle system going. Bueno. Just short of 10GB, and it installed pretty well. The Shuttle disk and network were supported under ESXi 5.0.0 without any additional effort.

It got… better…

Around the time this happened, I found some 8GB DDR3 DIMMs on Amazon from Komputerbay. These were not on the Shuttle compatibility list, but they were less than half the price, so I took a chance. I’d bought memory from them before, for the last ESXi server I built (at my last job), so I was willing to try out a pair. The memory was $58/stick, and I paid $10 for expedited shipping (twice, as I bought two pairs separately, just in case). They worked fine, survived memtest86+, and made me happy.

I added a 4GB Onyx flash drive from Maxell, a very low-profile drive that hides on the back of the system, to install the hypervisor onto. (The picture shows it in a USB extension pod, to show how far it sticks out. It actually fits in a regular soda bottle cap.)

For disk storage, I put a four-drive SATA enclosure in the 5.25″ half-height bay, and occupied the two SATA3 and the two SATA2 ports on the motherboard. The first bay got a 50GB SATA2 SSD I had on hand, for the initial datastore, and the second has a 500GB 7200RPM SATA disk.
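For anyone following along in the ESXi shell, here’s roughly what turning that first SSD into a datastore looks like. This is a minimal sketch rather than a transcript of what I ran; the device path and the datastore label below are placeholders you’d swap for whatever your host actually reports.

    # See which disks ESXi can see (names below are examples)
    esxcli storage core device list | grep -i 'Display Name'

    # List any datastores already mounted
    esxcli storage filesystem list

    # Format the SSD's first partition as a VMFS5 datastore
    # (assumes a partition already exists on the device; adjust the path to match yours)
    vmkfstools -C vmfs5 -S ssd-datastore /vmfs/devices/disks/t10.ATA_____OCZ_VERTEX2_EXAMPLE:1

(In practice the vSphere Client does all of this for you when you add storage; the shell route is just for the curious.)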

I’m almost embarrassed to admit that the first VM I built on this system was Windows 7 Professional, but it was. And it worked pretty well.

Then the little one spilled a handful of change behind an electric plug and blew up the circuit breaker, while I was away from home… so it’s been on hold for a little while.

What’s in the box?

I bought the following new:

  • Shuttle SH67H3 barebones ($240 at Amazon)
  • Intel Core i7-2600S processor, retail box ($300)
  • 4x Komputerbay 8GB DDR3 RAM ($53 per stick, $212 total)
  • Four-drive 2.5″ SATA cage ($71)
  • Intel PCIe x1 Gigabit Ethernet adapter ($40)

The following came from stock.

  • 4GB Maxell Onyx flash drive ($9)
  • 50GB OCZ Vertex 2 SSD ($126, much more when I bought it)
  • 500GB 7200RPM SATA drive ($120 today, much less when I bought it)

So to build the whole mess today, I’d pay about $1,118 plus tax and sometimes shipping.

What’s next, Rob?

Well, I’m going to be a bit limited by four 2.5″ drive bays, although I will probably put some more drives in there. I have some 32GB SSDs that are gathering dust, and a couple of 500GB disks, so we’ll see how that goes. The Patriot Pyro SSDs are coming down in price (after rebate) at the local Fry’s store, so maybe I’ll make use of the SATA3 channels.

But for now, my next step is going to be a home NAS (that I threatened to build a while back) starting from an HP N40L MicroServer. The MicroServer, and its 8GB of DDR3 ECC RAM, came in last month. FreeNAS 8 is currently running on it from an internal USB flash drive, although I’m tempted by OpenFiler’s ability to serve as a Fibre Channel target.
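The eventual plan is for that NAS to serve datastores to the Shuttle box over iSCSI. Hooking up the ESXi side is only a few commands; here’s a rough sketch, where the software iSCSI adapter name (vmhba33) and the NAS address (192.168.1.50) are assumptions you’d replace with whatever your host and network actually use.

    # Turn on the software iSCSI initiator and find its adapter name
    esxcli iscsi software set --enabled=true
    esxcli iscsi adapter list

    # Point dynamic discovery at the NAS (example address)
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260

    # Rescan so any LUNs the NAS exports show up as devices
    esxcli storage core adapter rescan --adapter=vmhba33
    esxcli storage core device list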

I will probably put the 8GB of RAM back into the mid-tower VMware box and use it as a second node, put some multiport cards into both ESXi servers, and power up a Summit 400-48T switch for the backbone of my virtualization network. I’m still watching for absurdly affordable PCIe 10Gb Ethernet cards (since my Summit 400 has two 10GbE ports), but all I have for now is PCI-X, and only one of the three machines involved even has a PCI slot.

I also now have a second location for lab equipment, as you may see in my write-up of the new store I’m starting. So the old desktop, and probably a Fibre Channel-enabled OpenFiler on a small SAN, will go over there. Once that’s in place, I can replicate across a 20ms-latency link and have a pretty valid test environment for anything I’m likely to do.

Random thoughts

The onboard LAN (Realtek RTL8111E Gigabit Ethernet) and SATA controllers on the SH67H3 were supported out of the box; no oem.tgz tweaking needed. I had an Apricorn PCIe SSD/SATA3 controller that I plugged in with the SSD, but it wasn’t recognized by ESXi, so I went forward with the drive bay options.
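If you want to double-check what ESXi actually bound drivers to on a board like this, a few esxcli one-liners from the host shell will tell you. A quick sketch; output obviously varies by host.

    # NICs the hypervisor recognized and which driver claimed them
    esxcli network nic list

    # Storage adapters (the onboard SATA controller should show up here)
    esxcli storage core adapter list

    # Sanity-check how much RAM the host sees
    esxcli hardware memory get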

I haven’t tried the SATA RAID on this system. I wouldn’t expect it to be supported, and I’d be inclined to use FreeNAS or OpenFiler or NexentaStor Community Edition to handle resilience, rather than the onboard RAID. If I get a chance, I’ll configure a pair of disks under the onboard RAID just to see how it works, or if it works. But it’s not a long-term prospect for this project.

Other people doing similar stuff

My friend Chris Wahl just put together his home whitebox systems. He went a bit more serverward, and he’s going with a pre-constructed NAS from Synology (which was tempting for me).

Kendrick Coleman wrote about his “Green Machines” project for his lab, and has built out a bit more (and put a bit more detail into his shopping list).

Simon Gallagher of vinf.net fame is well known in Europe for his vTARDIS projects, virtualizing virtualization inside virtual machines. Or as Nyssa said in Castrovalva, it’s when procedures fold back on themselves. I was reading about this, and doing a little bit of it on a quad-core desktop at my last job, so I think he gets credit for my thinking about this scale of virtualization in the first place.

23 thoughts on “rsts11: Building my compact VMware server at home”

  1. Stumbled upon your blog (Google search) looking for a VMware ESXi 5 lab setup. Enjoyed your post, and now after looking at the various “white box” setups, including ones you linked to, I think I’m going to go the same route you did. Totalled up the costs and it comes to around $600.

    Just curious how you like it so far? Are there any features that you can’t support with this setup?

    My plan would be to eventually put together two of these in a mini-cluster.


    • I’ve been distracted lately with a new store and heavy workload at work… so haven’t really heated up the home labs… but so far I’m only missing remote management and power control. There’s an HP remote management card for the ProLiant MicroServer for about $60, and I have a network power controller for four outlets so I can get the remote power going. But if something goes haywire with the hardware on the Shuttle while I’m out of town, I’ll be out of luck. 🙂

      I’m thinking a pair of dual Opteron 275 boxes with 16GB Reg ECC each will be going on the other end of the pseudo-WAN. Storage is likely to be an Atom D525 with a USB3 or eSATA disk on it. At least until I get some cheap FC disks for my new SAN 🙂

      Definitely appreciate the feedback, and would love to hear which options you went with and what memory/storage options you chose.


      • Thanks for the quick reply! I’m not really worried about remote management as I’ll just be messing with it at home, but I like your suggestion. I’m more concerned with supporting VMware features such as Fault Tolerance, vMotion, etc. I work in Sacramento, CA for a public utility, and we’re getting ready to upgrade this summer from 4.1 to 5.0, and I want something to play/learn with beforehand. Right now I’m using Workstation 8 but that has its limits. I’ve been going through TrainSignal VMware 5 training and eventually will go through the VMware course so I can test out for VCP 5.


    • You’ll have to wait a little while, as I doubt SiteSurvey works with single servers.

      I can tell you that DirectPath I/O is supported. The rest will have to wait until I can get another box or two online.


  2. Really informative blog, glad I stumbled onto it. I’m new to building a “whitebox” to run ESXi. Wondering: do these ESXi boxes contain the disk storage where the VMs will reside? Or is storage external, like on a QNAP or Synology box?

    If someone could enlighten me on this aspect, that would be great. Thx.


    • You can do it either way. I have a 4×2.5″ bay as mentioned in the posting, so I could in theory have 8TB (or more) in this system, but for now it’s just got an SSD for a couple of small VMs. The home NAS mentioned in the posting is intended to serve datastores over iSCSI, and to separate out that responsibility (and make a cluster possible). Depends on what gear you have and how much you want to expand your server farm.


    • Thanks! I have not tried the onboard RAID; I’d be more inclined to use in-VM software RAID if I needed to, something like a NexentaStor CE VM for example. But for now I still have only one SSD in the machine.

      I am feeling curious about the USB3 onboard… NexentaStor does not support USB3 but if I could feed a USB3 Drobo to VMware and serve it through that…


  3. This was great, thank you. My home ESXi 4.1 lab has been running for years 24×7 on a Dell quad-core desktop.
    It is time to upgrade before hardware failure, and your plan should keep the price about what I paid for that Dell when it was new.


  4. Pingback: Found Out There – 2013-02-03 « rsts11 – Robert Novak on system administration

  5. Pingback: Upgrading my home VMware lab (part 1: Ivy Bridge) #rsts11 | rsts11 - Robert Novak on system administration

  6. Pingback: NUC NUC (Who’s there?) VMware lab #rsts11 | rsts11 - Robert Novak on system administration

    • Hi Ben,

      As this is a desktop system, you would need an external IP KVM if you need that functionality. I would not recommend this box if you need to colocate your home lab or would not normally have physical access to it on a regular basis.


  7. I just built a machine with specs very similar to these (thanks to your inspiration!) and will be blogging about it soon. Thanks for the writeup! It turns out that at least in ESXi 5.1, *all* of the SH67H3’s built-in hardware works with no additional customization required. The NIC and SATA controllers all work fine. The one problem is that write speeds on those built-in SATA controllers are abysmal. I threw an IBM ServeRaid M1015 RAID controller in there (staple of the server hobbyist set, as I understand it) and things are better now, but still not perfect. I’m not sure if I want to incur the expense of adding SSDs just yet.

    One question: Is there any way to get vMotion, cloning, templates, etc. working if you don’t have an “official” commercial license from VMware? Is there any hobbyist license or educational program?


    • Glad to hear this post was useful. My original goal was to use the SH67H3 for compute only, and use shared NAS or SAN storage for the actual vmstores.

      As far as using vSphere itself: there has been an effort to get the VMTN Subscription program going again (see http://communities.vmware.com/thread/335123), but there’s been no visible progress on that in a year and a half now, and Mike Laverick says they’ve probably scrapped the effort. They used to sell a subscription which gave you a year’s worth of lab licensing for learning and evaluation, similar to MSDN or TechNet from Microsoft.

      For now, mostly people create new email addresses to sign up for a trial, and just reinstall their whole lab every 60 days. There’s work on the HOL/NEE project (see http://labs.vmware.com/nee/) which provides an online lab resource, but it’s obviously not the same.


    • You mentioned that you have fitted the IBM ServeRaid M1015 RAID controller into your whitebox ESXi server. Am I right to think you have built your box using the Shuttle SH67H3? In that case, what is the version number of your motherboard: is it v1.0 or v2.0?
      I have a similar setup on a v1.0 motherboard, and I have tried fitting a Dell PERC H310 controller into my SH67H3 Shuttle but it does not seem to work; the motherboard just does not start up. I asked Shuttle, and they said they have not tested this Dell RAID controller and left it there.

      Currently I have an older HP DL115G5 server fitted with 4 x 1TB SATA drives on a FreeNAS v9.1 configured as RAID0, but I want to keep the power, noise and heat down and want to run everything from my Shuttle box with a virtual SAN like OpenFiler or Nexenta or the HP VSA for the shared storage. But in order to consolidate the SAN into the Shuttle, I need a hardware RAID controller to create a RAID 10 pool using my 4 x 1TB drives and then create a VSA for shared storage to give good speed. I have done something similar at work on a DL380 fitted with 8 SAS drives and the performance is reasonably good.

      I am really keen to find out which RAID controllers work on the Shuttle SH67H3 v1.0 motherboard using the x16 PCIe slot.

      Thanks


  8. Pingback: ESXi at Home: an Exercise in …

  9. Pingback: Updated home lab considerations #rsts11 | rsts11 – Robert Novak on system administration
