Overkill in the rsts11 lab workshop – a homelab update for 2017

After being chosen as a VMware vExpert for 2017 this month, I was inspired to get working on refreshing my vSphere “homelab” environment despite a busy travel month in late February/early March. This won’t be a deep technical dive into lab building; rather, I just wanted to share some ideas and adventures from my lab gear accumulation over the past year.

As a disclosure: while I do work for Cisco, my vExpert status and homelab building are at most peripherally connected (the homelab at home connects to a Meraki switch whose license I get an employee discount on, for example). And even though I’m occasionally surprised to find myself using older higher-end Dell or HP gear, it’s not a conflict of interest or an out-of-bounds effort. It’s just what I can get a great deal on at local used hardware shops from time to time.

The legacy lab at Andromedary HQ

Also read: New Hardware thoughts for home labs (Winter 2013)

Stock Photo of a Dell C6100 chassis

During my last months at the Mickey Mouse Operation, I picked up a Dell C6100 chassis (dual twin-style Xeon blade-ish servers) with two XS23-TY3 servers inside. I put a Brocade BR-1020 dual-port 10GbE CNA in each, and cabled them to a Cisco Small Business SG500XG-8F8T 10 Gigabit switch. A standalone ESXi instance on my HP MicroServer N40L hosted the vCenter server and some local storage. For shared storage, the Synology DS1513+ served for about two years before being moved back to my home office for maintenance.

The Dell boxes have been up for almost three years; not bad considering they share a 750VA “office” UPS with the MicroServer, the 10Gig switch, usually a monitor, and occasionally an air cleaner. The MicroServer was misbehaving, stuck on boot for who knows how long, but with a power cycle it came back up.

I will be upgrading these boxes to vSphere 6.5.0 in the next month, and replacing the NAS for shared storage throughout the workshop network.

The 2017 Lab Gear Upgrades

For 2017, two new instances are being deployed; they will probably run nested ESXi or a purpose-built single-server instance (e.g. an upcoming big data sandbox project). The two hardware instances each have a fair number of DIMM slots and more than one socket, and the initial purchase for each came in under US$200 before upgrades/population.

You may not be able to find these exact boxes on demand, but there are usually similar-scale machines available at Weird Stuff in Sunnyvale for well under $500. Mind you, maxing them out will require very skilled hunting or at least a four-figure budget.

CPU/RAM cage in the HP Z800

First, the home box is an HP Z800 workstation. It started as a single-processor E5530 workstation with 6GB of RAM; I’ve upgraded it to dual E5645 processors (6-core 2.4GHz with 12MB SmartCache) and 192GB of DDR3 ECC registered RAM, replaced the 750GB spinning disk with a 500GB SSD, and added two 4TB SAS drives as secondary storage. I’ve put in an Intel X520 single-port 10GbE card to connect to an SFP+ port on the Meraki MS42P switch at home, and there are two Gigabit Ethernet ports on the board.

CPU/RAM cage in Intel R2208 chassis

And second, the new shop box is an Intel R2208LT2 server system. This is a 2RU four-socket E5-4600 v1/v2 server with 48 DIMM slots supporting up to 1.5TB of RAM, eight 2.5″ hot-swap drive bays, and dual on-board 10GbE in the form of an X540 10GBase-T dual-port controller. I bought the box with no CPUs or RAM, and have installed four E5-4640 (v1) processors and 32GB of RAM so far. There’s more to come, since 1GB per core seems a bit spartan for this kind of server.

There’s a dual 10GbE SFP+ I/O module on its way, and this board can take two such modules (or dual 10GBase-T, quad Gigabit Ethernet, or single/dual InfiniBand FDR modules).

The Z800 is an impressively quiet system; the fans on my Dell XPS 15 laptops run louder than the Z800 does under modest use. By comparison, the Intel R2208LT2 sounds like a Sun Enterprise 450 server when it starts up… 11 high-speed fans spinning up for POST can be pretty noisy.

So where do we go from here?

Travel and speaking engagements are starting to pick up a bit, but I’ve been putting some weekend time in between trips to get things going. Deploying vSphere 6.x on the legacy lab as well as the new machines, and setting up the SAN and DR/BC gear, will be spring priorities, and we’ll probably be getting rid of some of the older gear (upgrading the standalone vCenter box from N40L to N54L for example, or perhaps moving it to one of the older NUCs to save space and power).

I also have some more tiny-form-factor machines to plug in and write up. My theory is that there should be no reason you can’t carry a vSphere system anywhere you go, on a budget not too far above that of an ordinarily equipped laptop. And if you have the time and energy, you can build a monster system for less than a high-end ultrabook.

 

Disclosure: Links to non-current products are eBay Partner Network links; links to current products are Amazon affiliate links. In either case, if you purchase through links on this post, we may receive a small commission to pour back into the lab.

 

NUC NUC (Who’s there?) VMware lab…

VMware lab who? VMware lab in pocket-size format!

So in our last installment, I found out that I can upgrade my Shuttle SH67H ESXi servers to support Ivy Bridge processors. If you want to read more about that, feel free to visit my Compact VMware Server At Home post from Winter 2012, and my Upgrading my home VMware lab with Ivy Bridge post from Spring 2013.

The replacement boards came in from Shuttle, and they’ll be going back into the chassis. But as you may have seen at the end of the last post, I discovered the Intel Next Unit of Computing (NUC) line. The NUC line currently includes three models:

  • DC3217IYE – i3-3217U processor (1.8 GHz dual-core with 3MB cache), dual HDMI, Gigabit Ethernet, at $293 (pictured)
  • DC3217BY – i3-3217U processor, single HDMI, single Thunderbolt (no native Ethernet), at $323
  • DCCP847DYE – Celeron 847 (1.1 GHz dual-core with 2MB L3 cache), dual HDMI, Gigabit Ethernet, at $172
    (Prices are estimated list from Intel’s site; probably cheaper by a buck or ten at Amazon, Fry’s, Central Computers, or your favorite retailer. Feel free to use my links and help me buy the next one. 🙂 )

All three have three USB 2.0 ports outside (one front, two rear), as well as two USB headers inside, conceivably useful for a USB flash drive or flash reader. They also have an mSATA-capable Mini-PCIe slot as well as a short mini-PCIe slot suitable for a WiFi/Bluetooth card. And there are two DDR3 SODIMM slots, supporting a reported 16GB of RAM (the processor supports 32GB, but the board/kit do not mention this). They all include VT-x with EPT.

I don’t see the Celeron being that useful for virtualization labs, but these are rather multifunctional for a little 4″-square computer. Imagine putting a broadband modem (3G/4G/WiMAX) inside for reasonably potent portable kiosk purposes (a VESA mount kit is included). Add a card reader and a DVD burner for saving and sharing (and even editing) photos. Intel’s WiDi wireless display technology is supported as well, if you have a suitable receiver. Or use it with a portable projector for presentations on the go (no more fiddling with display adapters at your meetings!).

But we’re talking about a VMware lab here.

And let me get this out of the way… this was one of the coolest features of the NUC.


That’s correct, the box has its own sound effects.

Let’s get this party started…

Those of you of my era and inclinations may remember when KITT’s brain was removed and placed in a non-vehicle form factor on the original Knight Rider TV series. When I got ready to RMA my Shuttle motherboards, I was thinking about this sort of effect for a couple of VMs on the in-service ESXi server that was about to have its board sent to Southern California. And that’s what I had to do. I couldn’t quite miniaturize the server Orac-style, but that thought had crossed my mind as well.

So I picked up the DC3217IYE unit at Fry’s, got an mSATA 64GB card (Crucial m4 64GB CT064M4SSD3) and a spare low-profile USB flash drive (Patriot Autobahn 8GB, PSF8GLSABUSB) at Central Computers, and took a Corsair 16GB DDR3 kit (CMSO16GX3M2A1333C9) from my stock. Assembling it took only a few minutes and a jeweler’s screwdriver, and then I was ready to install ESXi.

I backed up the VMs from the original system using the vSphere Client, so that I could re-deploy them later to the NUC. Someday I’ll get Veeam or something better going to actively back up and replicate my VMs, but for the limited persistent use of my cluster (Cacti and MediaWiki VMs) this was sufficient.
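If you’d rather script that sort of export than click through the GUI (I went through the vSphere Client for mine), VMware’s ovftool can pull a VM off a standalone host as an OVF package, and reversing the source and target arguments deploys a package back onto the new host. Below is a rough Python sketch wrapping it; the hostname, credentials, and VM names are placeholders for my lab, it assumes ovftool is installed and the VMs are powered off, and ovftool will prompt for the password when you run it.

    # Hypothetical helper: export a few VMs from a standalone ESXi host as OVF
    # packages using VMware's ovftool CLI. Hostname, credentials, and VM names
    # are placeholders -- adjust for your own lab, and power the VMs off first.
    import subprocess
    from pathlib import Path

    ESXI_HOST = "esxi-old.lab.local"   # placeholder: the host being retired
    ESXI_USER = "root"
    VM_NAMES = ["cacti", "mediawiki"]  # the VMs worth keeping
    EXPORT_DIR = Path("/backups/ovf")

    EXPORT_DIR.mkdir(parents=True, exist_ok=True)

    for vm in VM_NAMES:
        # ovftool prompts for the password when it isn't given in the locator
        source = f"vi://{ESXI_USER}@{ESXI_HOST}/{vm}"
        target = str(EXPORT_DIR / f"{vm}.ovf")
        print(f"Exporting {vm} -> {target}")
        subprocess.run(["ovftool", source, target], check=True)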

One gotcha: Fixing the NUC network…

I originally tried reusing the 4GB USB drive my existing server was booting from, but ESXi didn’t recognize the NUC’s Ethernet interface. I installed a fresh ESXi 5.0 U2 on a new flash drive, and still no luck. I found a post at tekhead.org that detailed slipstreaming the new network driver into ESXi’s install ISO. I did so, installed again, and was up and running.
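One quick way to confirm that the slipstreamed driver actually picked up the NIC (assuming you’ve enabled SSH/the ESXi Shell on the host) is to run esxcli remotely and look for the vmnic. Here’s a minimal Python sketch using paramiko; the hostname and credentials are placeholders.

    # Quick sanity check (assumes SSH / ESXi Shell is enabled on the host):
    # list the physical NICs ESXi can see -- the NUC's Ethernet should show
    # up as a vmnic here. Hostname and credentials are placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("nuc-esxi.lab.local", username="root", password="changeme")

    # esxcli network nic list shows device name, driver, link state, and MAC
    _, stdout, _ = client.exec_command("esxcli network nic list")
    print(stdout.read().decode())

    client.close()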

I did have to create a new datastore on the mSATA card, since my original server had used a small Vertex 2 SSD from OCZ, which obviously wouldn’t work here. But I was able to upload my backed-up OVF files and bring up the VMs very quickly.

And one warning I’ll bring up is that the unit does get a bit warm, and if you use a metal USB flash drive, it will get hot to the touch. My original ESXi lab box used a plastic-shelled USB drive, and I’m inclined to go back to that.

What’s next, Robert?

My next step is going to be bringing central storage back. There is a new HP MicroServer N54L on the market, but my N40L should be sufficient for now: I’ll put the 16GB upgrade in and load it up with drives. As those of you who saw my lab post last year know, it was running FreeNAS 8, but I’m thinking about cutting over to NexentaStor Community Edition.

I’ve taken the original Shuttle box and used it to replace the mid-tower PC that was serving as my primary desktop. I will probably set the other one up with a Linux of some sort.

And in a week or so I’ll grab a second NUC and build it out as a second cluster machine for the ESXi lab. All five of them are slated to go into my new EXPEDIT shelving thingie in the home office, and I’ll bring you the latest on these adventures as soon as they happen.

Upgrading my home VMware lab (part 1: Ivy Bridge) #rsts11

My most popular post on rsts11 has been my compact VMware server at home post. Thanks to Chris Wahl mentioning me on the VMware forums, and linking from his lab post, I see a dozen visits or more a day to that page.

Imitation is the sincerest form of laziness^wflattery

I have to admit that I’ve been a follower in my use of intriguing lab environments. I got the vTARDIS idea from Simon Gallagher, and built a version of it at work at my last job on a Dell Core 2 Quad workstation under my desk. Then I saw Kendrick Coleman tweet about this new SH67H3 from Shuttle that supported 32GB of non-registered RAM… bought one and put 32GB and an i7-2600S processor into it, as mentioned in the “server at home” post mentioned above.

As you may know, the i7-2600 series processors are now a generation behind: Sandy Bridge gave way to Ivy Bridge (the i7-3x00 processors), which are currently easy to find at retail. But… SH67H3 v1.0 motherboards don’t support Ivy Bridge, and that’s what was shipping when I bought mine in early 2012.

I found an unbelievable deal on a second SH67H3 open (missing) box at Fry’s in February 2013… let’s just say I spent more on a basic Pentium chip to test it with than I did on the chassis itself. But alas, the second one also had a v1.0 motherboard.

Let’s make the ivy (bridge) grow!

I found sources on the Internets that said a v2.0 board supporting Ivy Bridge was out. I further discovered that Shuttle would trade in your v1.0 board for a v2.0 board for $40. Instructions here at Cinlor Tech’s blog if you’re interested in doing this yourself. Note that you can request the case number through Shuttle’s web-email portal if you prefer this to calling. That’s what I did.

I shipped off my two boards in a medium Priority Mail box to Shuttle on the 26th. On the 29th I got confirmation of the return shipment. They should be at my door on April 2nd. I’ll be reinstalling them, and at some point upgrading to the i7-3770s processors on both.

Waitasec, 2600 “S”? 3770 “S”? What’s this all about, then?

Yes, that’s correct. I chose to go with a low-power version of the i7-2600 processor a year and change ago. The i7-2600s has a lower base speed than the 2600 or 2600k (the unlocked version): 2.8GHz vs 3.4GHz. All three support Turbo Boost to 3.8GHz, though. And the i7-2600s is 65W where the others are 95W.

(Here’s a comparison chart of the three i7-2600 and three i7-3770 processor options via Intel, if you’re curious.)

The other noteworthy differences are on the 2600k, which costs $20 more but does not support VT-d (directed I/O), vPro management features, or Trusted Execution. VT-d is the only feature of particular concern when you’re building your virtualization lab, though. (I’ll admit the VT-d was an accidental discovery; I chose the 2600s more for power savings than anything else.) If you’re building a desktop, the “K” model has HD 3000 graphics vs HD 2000 for the other two chips, by the way.

Now that I’m building a second box, I find that my usual local retail sources don’t have the i7-2600s in stock anymore. I could order one on eBay or maybe find it at Fry’s, but for about the same price I could get the Ivy Bridge version and be slightly future-proofed. Once again, the “S” is the way to go.

The 3770 series run at 3.1GHz (“S”), 3.4GHz (3770), and 3.5GHz (“K”) base speeds, all Turbo-capable to 3.9GHz. The “S” processor is again 65W, vs 77W for the other two chips (down from 95W in the previous generation). They all have Intel’s HD 4000 integrated graphics and newer PCIe 3.0 support. They support 1600MHz RAM speeds, vs a 1333MHz top speed for the previous generation. The “K” processor lacks VT-d, vPro, and Trusted Execution, and carries a nearly $40 premium over the other two chips.

All six of these chips have VT-x with extended page tables (EPT/SLAT), Hyper-Threading, and Enhanced SpeedStep. And they’re all 4-core/8-thread processors capable of addressing 32GB of RAM, which makes a great basis for a virtualization environment.
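If you want to confirm what a chip you already own advertises (on a Linux box, at least), the relevant flags show up in /proc/cpuinfo. Here’s a minimal sketch, using the usual Linux flag names (vmx for VT-x, ept for extended page tables, ht for Hyper-Threading):

    # Minimal check for virtualization-friendly CPU features on a Linux box,
    # by reading the flag list in /proc/cpuinfo. Flag names: vmx = Intel VT-x,
    # ept = extended page tables, ht = Hyper-Threading.
    wanted = {"vmx": "VT-x", "ept": "EPT/SLAT", "ht": "Hyper-Threading"}

    with open("/proc/cpuinfo") as f:
        flags = set()
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
                break  # every core reports the same flag list

    for flag, name in wanted.items():
        status = "yes" if flag in flags else "no"
        print(f"{name:15} ({flag}): {status}")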

So what’s next, Robert?

Well, with two matching machines, I’ll basically be starting from scratch. Time to upgrade the N40L MicroServer NAS box to 16GB (thanks, Chris, for finding this too!) and probably split off a distinct physical storage network for that purpose.

But now, thanks to Marco Broeken’s recent lab rebuild, I’ve been introduced to Intel’s Next Unit of Computing (NUC), so tune in soon for my experience with my first NUC system. Sneak peek of the ESXi splash screen and the actual unit here… stay tuned!

rsts11: Building my compact VMware server at home

About a year ago I bought a homebuilt Intel Core i7 (1st generation) desktop from a friend to run VMware ESXi on. He had gone to the trouble of assembling the system with a beautiful Gigabyte motherboard and getting ESXi 4.1 to run on it, and I got a good deal on the system with 6GB of RAM and a 2TB hard drive.

I upgraded to 12GB, then to 24GB, but never put it into use.

Two months ago, I started it up and ran some computationally intensive software on it and discovered it was munching 320W. And it’s a mid-tower size case. Somewhat unwieldy for an apartment with a few other computers already running, and a significant other who doesn’t appreciate a living room that resembles a small colo.

It gets… smaller…

About that time, I think it was Kendrick Coleman who mentioned a new Shuttle barebones XPC system, the SH67H3, which in the typical XPC form factor supported a second-generation i7 processor and 32GB of RAM across four slots of DDR3. The problem was threefold.

1) Shuttle on the VMware HCL? Unlikely.

1a) Onboard LAN and SATA controllers supported? Almost as unlikely.

2) 8GB DIMMs were expensive. And how could I in good conscience run a system capable of 32GB with just 16GB of RAM?

3) Have you seen my holiday credit card bill?

So I was willing to risk 1a, live with 1 (as I’m not buying support or expecting it), and wait out 2 until memory prices came down.

Once 3 was resolved, I emptied my wallet into the cash register at Central Computers and bought the SH67H3 barebones XPC, and an i7-2600s (low power) processor. I had a pair of 2GB DDR3 DIMMs to use until I could upgrade, so I went about installing. I hung a SATA DVD drive off the system and installed ESXi 5 to the flash drive, and all went well.

Well, not quite.

It turned out one of the two DIMMs was bad, keeping the Shuttle from taking off, so to speak. The monitor synced briefly and then went out of sync; no beeps, no other signs of life. I tried one DIMM and it worked; tried the other and it didn’t. Swapping the DIMM slots didn’t help. So I booted with one DIMM, 2GB, the minimum to run the ESXi installer.

No dice.

It turns out system-reserved memory and/or shared video RAM managed to pull me under 2GB, and the installer quit on me.

So I realized I had six 4GB DIMMs in the old VMware box, and I pulled two to get the Shuttle system going. Bueno. Just short of 10GB, and it installed pretty well. The Shuttle disk and network were supported under ESXi 5.0.0 without any additional effort.

It got… better…

By the time this happened, I had found some 8GB DDR3 DIMMs on Amazon from Komputerbay. These were not on the Shuttle compatibility list, but they were less than half the price, so I took a chance. I’ve bought memory from them before, for the last ESXi server I built (at my last job), so I was willing to try out a pair. The memory was $58/stick, and I paid $10 for expedited shipping (twice, as I bought two pairs separately just in case). They worked fine, survived memtest86+, and made me happy.

I added a 4GB Onyx flash drive from Maxell, a very low-profile drive that hides on the back of the system, to install the hypervisor onto. (The picture shows it in an extension USB pod, to show how much it sticks out. It actually fits in a regular soda bottle cap.)

For disk storage, I put a four-drive SATA enclosure in the 5.25″ half-height bay, and occupied the two SATA3 and the two SATA2 ports on the motherboard. The first bay got a 50GB SATA2 SSD I had on hand, for the initial datastore, and the second has a 500GB 7200RPM SATA disk.

I’m almost embarrassed to admit that the first VM I built on this system was Windows 7 Professional, but it was. And it worked pretty well.

Then the little one spilled a handful of change behind an electric plug and blew up the circuit breaker, while I was away from home… so it’s been on hold for a little while.

What’s in the box?

I bought the following new:

  • Shuttle SH67H3 barebones ($240 at Amazon)
  • Intel Core i7-2600S processor, retail box ($300)
  • 4x Komputerbay 8GB DDR3 RAM ($53 per stick, $212 total)
  • Four-drive 2.5″ SATA cage ($71)
  • Intel PCIe x1 Gigabit Ethernet adapter ($40)

The following came from stock.

  • 4GB Maxell Onyx flash drive ($9)
  • 50GB OCZ Vertex 2 SSD ($126, much more when I bought it)
  • 500GB 7200RPM SATA drive ($120 today, much less when I bought it)

So to build the whole mess today, I’d pay about $1,118 plus tax and sometimes shipping.

What’s next, Rob?

Well, I’m going to be a bit limited by 4 2.5″ drive bays, although I will probably put some more drives in there. I have some 32GB SSDs that are gathering dust, and a couple of 500GB disks, so we’ll see how that goes. The Patriot Pyro SSDs are coming down in price (after rebate) at the local Fry’s store, so maybe I’ll make use of the SATA3 channels.

But for now, my next step is going to be a home NAS (which I threatened to build a while back), starting from an HP N40L MicroServer. The MicroServer, and its 8GB of DDR3 ECC RAM, came in last month. FreeNAS 8 is currently running on this system from an internal USB flash drive, although I’m tempted by OpenFiler’s ability to serve as a Fibre Channel target.

I will probably put the 8GB of RAM back into the mid-tower VMware box and use it as a second node, put some multiport cards into both ESXi servers, and power up a Summit 400-48T switch as the backbone of my virtualization network. I’m still watching for absurdly affordable PCIe 10GbE cards (since my Summit 400 has two 10GbE ports), but all I have for now is PCI-X, and only one of the three machines involved even has PCI.

I also now have a second location for lab equipment, as you may see in my write-up of the new store I’m starting. So the old desktop, and probably a Fibre Channel-enabled OpenFiler on a small SAN, will go over there. I can replicate across a 20ms-latency link and, for once, have a pretty valid test environment for anything I’m likely to do.

Random thoughts

The onboard LAN (Realtek RTL8111E Gigabit Ethernet) and SATA on the SH67H3 were supported out of the box; no oem.tgz tweaking needed. I had an Apricorn PCIe SSD/SATA3 controller that I plugged in with the SSD, but it wasn’t recognized by ESXi, so I went forward with the drive bay options.

I haven’t tried the SATA RAID on this system. I wouldn’t expect it to be supported, and I’d be inclined to use FreeNAS or OpenFiler or NexentaStor Community Edition to handle resilience, rather than the onboard RAID. If I get a chance, I’ll configure a pair of disks under the onboard RAID just to see how it works, or if it works. But it’s not a long-term prospect for this project.

Other people doing similar stuff

My friend Chris Wahl just put together his home whitebox systems. He went a bit more serverward, and he’s going with a pre-constructed NAS from Synology (which was tempting for me).

Kendrick Coleman wrote about his “Green Machines” project for his lab, and has built out a bit more (and put a bit more detail into his shipping list).

Simon Gallagher of vinf.net fame is well known in Europe for his vTARDIS projects, virtualizing virtualization inside virtual machines. Or as Nyssa said in Castrovalva, it’s when procedures fold back on themselves. I was reading about this, and doing a little bit of it on a quad-core desktop at my last job, so I think he gets credit for my thinking about this scale of virtualization in the first place.