Rabbit Reorganization: Building low-power clusters from a rabb.it door

As you saw in my 3D Printing series, after years of pondering a 3D printer, I was finally inspired to buy one when a pile of clusters came up on eBay from the defunct rabb.it video streaming service.

In this series, I’ll take you through turning a rabbit door into some useful computing resources. You can do something similar even after the clusters sell out; plenty of people have probably bought them and ended up not using them, and you can adapt the plans here to other models.

The first thing I’ll put out there is that these are not latest-and-greatest, state-of-the-art computers. If you’re looking for a production environment or high-density DDR4 memory, keep looking. But if you want an inexpensive modular cluster that’s only about five years out of date, there’s hope for you in here.

The original cluster

eBay seller “tryc2” has sold several hundred of these “door clusters” from rabb.it, a now-defunct video streaming service that closed up shop in mid-2019. As of this writing, they still have a couple dozen available. I call it a “door cluster” because the 42-inch by 17-inch metal plate resembles a door, which should give you an idea of how easily it will maneuver and fit into your home or homelab as delivered.

The cluster bundle will set you back US$300, plus tax where applicable. While they’re available, you can get one at this link and I’ll get a couple of dollars in commission toward my next purchase.

The cluster includes 10 Intel NUC quad-core boards (mine were NUC5PPYB Pentium boards; my friend Stephen Foskett got newer NUC6CAYB Celeron boards, which take more RAM). These boards feature one DDR3L SODIMM slot (8GB max), one SATA port with a non-standard power connector (more on this later), Gigabit Ethernet, HDMI out with a headless adapter (to fool the computer into activating the GPU despite no monitor being connected), four USB ports, and a tiny M.2 slot originally intended for wireless adapters.

In the center of the “door” are five NVIDIA Jetson TK1 boards. These were NVIDIA’s first low-end foray into GPU development, sold to let individuals try out machine learning and GPU computing. There are much newer units, including the Jetson Nano (whose 2GB version is coming this month), if you really want modern AI and GPU testing gear, but these are reasonably capable machines that will run Ubuntu 14 or 16 quite readily. You get 2GB of RAM and a 32GB onboard eMMC module, plus a SATA port and an SD slot as well as gigabit Ethernet.

The infrastructure for each cluster includes a quality Meanwell power supply, a distribution board assembly I haven’t unpacked yet, two automotive-style fuse blocks with power cords going to the 15 computers, and a 16 port Netgear unmanaged Gigabit Ethernet switch. With some modifications, you can run this entire cluster off one power cord and one network cord.

What’s missing?

So there is a catch to a $300 15-node cluster. The Jetson nodes are component complete, meaning they have RAM and storage. However, the NUCs are barebones, and you’ll need some form of storage and some RAM.

For the Jetson nodes, you’ll need an older Ubuntu machine and the NVIDIA JetPack software loader. For the installation host, Ubuntu 14.04 is supported, 16.x should work, and later versions are at your own risk. You’ll also need an Ethernet connection to a network shared with your Ubuntu machine, as well as a MicroUSB connection between your Ubuntu host and the Jetson, to load the official software bundle.
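If it helps to see what that load process looks like in practice, here is a rough sketch of the manual flash step using NVIDIA’s Linux for Tegra (L4T) package, which is what JetPack drives under the hood. Treat the paths and package layout as illustrative rather than exact:

  # On the Ubuntu host, with the TK1 connected over MicroUSB and put into
  # recovery mode (hold the RECOVERY button while tapping RESET):
  lsusb | grep -i nvidia                  # confirm the board shows up in recovery mode

  # From the extracted L4T driver package (with the sample rootfs already
  # unpacked into rootfs/):
  cd Linux_for_Tegra
  sudo ./apply_binaries.sh                # stage the NVIDIA binaries into the rootfs
  sudo ./flash.sh jetson-tk1 mmcblk0p1    # write the image to the onboard eMMC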

For the NUCs, you’ll need to add a SODIMM and some form of storage to each. I bought a bunch of 8GB SODIMMs on eBay ($28.50 each) to max out the boards. For storage, I tried USB flash drives and 16GB SD cards and had OS issues with both, so I bought the MicroSATACables NUC internal harness for each board, along with Toshiba Q Pro 128GB SATA III SSDs (these are sold out, but there’s a Samsung SM841N currently available in bulk for the same price, about $20 each).

If you do get a cluster bundle with the two-memory-slot NUC boards, you have two options beyond the above. The easy and documented option is to look for 4GB SODIMMs instead of 8GB; you may save a buck or two, or if you’re like me, you may have a box of 4GB SODIMMs from various upgrades and not have to buy anything. The other option is to update the BIOS on the NUC and try 2x 8GB. For some uses, 16GB will be worth the cost (vSphere or other virtualization clusters). I’d suggest updating the BIOS to the latest version with a known-good memory configuration first, and then trying 2x 8GB.

One other thing you may need is a pack of spare fuses. I know they do their job, as I blew a few of them while plugging and unplugging the boards, so you may wish to have a few extras around. They’re the standard 3 amp “mini blade” fuse that can be found at auto parts stores (although my local shops tended to have one card of them, if that, on the shelf). You can also buy a 10-pack for $6.25 (Bussmann brand), a 25-pack for about the same price (Baomain brand), or a 100-pack (Kodobo brand) for about $9.

Choose your own adventure

There are two paths to take once you have your gear collected and connected.

  1. How do we lay out the gear?
  2. What do we do with it?

I’ll look at my journey on both paths in upcoming episodes of this series. Spoiler: I’ve 3D-printed stacking plates for both the NUCs and the Jetsons, and am still working out how to mount the remaining pieces so I can e-waste the door itself. And as I write part 1, I still haven’t figured out what to do with the clusters.

Where do we go from here?

If you’ve bought one of these clusters (or more than one), feel free to chime in on the comment section and let me know what you’ve done with it. And stay tuned to this post (or @rsts11 on Twitter or Facebook) for updates on the next installments.

Money Pit: 3D Printing Part 1 – The Back Story, The Rationale, and The Assembly

This is one topic in a series of what I’m calling “money pit” projects. To be fair, it’ll be money and time pit topics, and nothing that you’d really have to get a second mortgage on your house to do… but things always get a bit out of hand.

This project is the 3D Printing project. The second part is available at First Round of Enhancements and part 3 should be out within a week. 

The Back Story

It all goes back to five or so years ago, when I bought a couple of Banana Pro single board computers from LeMaker in France. The Banana Pro was a Raspberry Pi-inspired board, but with gigabit Ethernet and external SATA on board. Great idea, but they didn’t sell as well as the RPi, so the accessory market was a lot lighter. I think I found four cases in the past five years, many of which were not readily available in the US.

I did order a few cases from China that had a section for the SATA drive, and stocked up on cables for the SATA drives. But I wasn’t too happy with what was out there. I found some of the 3D printer sites where people had built cases, and thought “someday I’ll get a printer and make some cases.” I said that about every year for four years.

Then earlier this year, some more usable cluster kits came onto the used market from the now-defunct rabb.it startup. By “some,” I mean about a thousand of them. (Click on the photo below if you want to buy one of the kits yourself. It is an eBay Partner Network link, but I have no association with the seller other than as a buyer of one cluster kit so far.)

Single Board Computer Array with Intel NUC5PPYH and NVIDIA Jetson TK1

They each contain ten NUC5PPYB quad-core Pentium NUC machines and five NVIDIA Jetson TK1 dev boards. I pondered it for several months (not as long as the printer), finally bought one, and it showed up a week later. (I’ll write more about that project separately, and you can read my friend Stephen Foskett’s Pack Rat series about the rabb.it clusters here.)

About the same time, I broke down and bought a Creality Ender 3 Pro printer from my local geek shop, Central Computers. Central also stocks the Creality-branded filament for $20 per 1kg roll, and they’re about four miles from home. You can also buy directly from Creality, or choose some sellers on Amazon like SainSmart.

Cisco C22 M3 “Build” report: From Zero to vSphere in… two days?

Hi folks. The pile of project boxes in my home lab has gotten taller than I am, so when a Twitter follower asked me about running VMware vSphere on one of the systems not too far down in the stack, I took the challenge and said I’d try to get it going to see what I could report back.

Disclosure: While my day job is with Cisco, this computer was purchased out of my own pocket and used no proprietary/employee-only access to software or information. I do not provide end-user support for Cisco gear, nor do I recommend using used/aftermarket gear for production environments.

That system is a now-discontinued Cisco UCS C22 M3S. Yes, C22, not C220. It was an economy variant of the C220, more or less, with a lower cost and lower supported memory capacity as I recall. The one I have features a pair of Intel Xeon E5-2407 v2 processors (quad core 2.4GHz) and 48GB of RAM. The RAID controller is LSI’s 9220-8i, and for now I have a single 73GB hard drive installed because that’s what I found on my bench.

This is a standalone system, even though it’s sitting underneath a UCS 6296 Fabric Interconnect that’s freshly updated as well. It has the two on-board Gigabit Ethernet ports as well as a 4-port Gigabit Ethernet add-on card. As noted above, I bought this one in a local auction out of my own pocket rather than sourcing a better box through work.

Warming up the system

The first thing I needed to do was make sure the firmware, management controller, and so forth were up to date and usable. Cisco has long followed the industry standard for servers by making firmware and drivers freely available. I wrote about this back in 2014, before I even worked for Cisco, when HPE decided to buck that standard. You do have to register with a valid email address, but no service contract or warranty is required.

Since I was going to run this machine in standalone mode, I went to the Cisco support site and downloaded the Host Update Utility (HUU) in ISO form.

Updating firmware with the Host Update Utility (HUU) ISO

I loaded up Balena Etcher, a program used to write ISO images and other disk formats to USB flash drives. USB ports are easy to come by on modern computers, but optical drives are not as common. I “burned” the ISO to a flash drive and went to boot it up on the C22.

No luck. I got an error message on screen as the Host Update Utility loaded, referring to Error 906, “firmware copy failed.”

Doing some searching, I found that there were quirks to the bootability of the image. A colleague at Cisco had posted a script to the public community site in 2014, and updated it in 2017, to resolve this issue. So I brought up my home office Linux box (ironically an HPE Microserver Gen8 that I wrote about in January), copied the script and the ISO over, and burned the USB drive again with his script. This time it worked.

Recovering a corrupted BIOS flash image with recovery.cap

Alas, while four of the five firmware components upgraded, the BIOS upgrade was corrupted somehow. Probably my fault, but either way I had to resolve it before I could move forward.

Corrupt BIOS recovery, before and after

Seemed pretty obvious, and I figured the recovery.cap file would have been copied to the flash drive upon boot, but I figured wrong. You have to extract it from a squashfs archive inside the HUU ISO file. There’s even a ‘getfw’ program in the ISO to do this. Easy, right?

Of course not.

Turns out newer versions of OpenSSL won’t decrypt the filesystem image and extract the needed file, and even my year-out-of-date CentOS 7 box was too new. So I spun up a VM with the original CentOS 7 image and did the extraction there.

  1. Get the HUU for your system and UCS version (don’t use a C22 BIOS on a C240 or vice versa, for example).
  2. Mount or extract the ISO file
  3. Copy the GETFW/getfw binary out
  4. Unmount the ISO file
  5. ./getfw -b -s <HUU ISO FILE> -d .

This will drop a “bios.cap” file in the current directory. Rename it to “recovery.cap”, put it on a flash drive (a plain DOS-formatted one is fine), put the drive into the system, and reset the machine. You’ll go from the first screen saying “Could not find a recovery.cap file…” to the second screen showing the transfer to the controller. In a few minutes, your system should be recovered.
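Put together, steps 2 through 5 plus the rename look roughly like this on a Linux host. The ISO filename and mount point are illustrative; only the GETFW/getfw path and the getfw flags come from the steps above:

  mkdir -p /mnt/huu
  sudo mount -o loop ucs-c22-huu.iso /mnt/huu     # step 2: mount the HUU ISO
  cp /mnt/huu/GETFW/getfw . && chmod +x getfw     # step 3: copy the getfw binary out
  sudo umount /mnt/huu                            # step 4: unmount the ISO
  ./getfw -b -s ucs-c22-huu.iso -d .              # step 5: extract bios.cap to the current directory
  mv bios.cap recovery.cap                        # rename for the recovery process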

Preparing to boot the system

This is the easiest part in most cases, but there are a couple of things you may have to modify in the Integrated Management Controller (IMC) and the LSI WebBIOS interface.

Set your boot order. I usually go USB first (so I don’t have to catch the F6 prompt) followed by the PCIe RAID card. The RAID card will only show up if supported and bootable drives are installed though. This can be changed on the fly if you like, but I prefer to do it up front.

Check your RAID controller settings. Follow the BIOS screen instructions for entering WebBIOS (the text interface for configuring the RAID card), and make sure you have disks presented as virtual drives. I had plugged in a UCS drive and a random SSD, and only the UCS drive (a 73GB SAS drive) showed up. It did not appear in the F6 Boot Order menu, though, as it was not set bootable in WebBIOS. A few key taps fixed this, and the drive appeared. Again, you can change the boot order after installing, but why not do it first?

Moving forward with VMware installation

This is the easy part, more or less. I went to VMware’s site and grabbed the Cisco custom ISO (which should have current drivers and configurations for Cisco components, especially the RAID controller and network cards). You can also install with the standard vSphere installer if you like.

I burned the 344 MB ISO to a flash drive, finding again that Etcher didn’t like it (complaining that it was not a bootable ISO) but Rufus did. With a Rufus-burned 8GB drive (choose “yes” to replace menu.c32, by the way), I was able to install the vSphere system and bring it up.

On the first install attempt, I saw this message for about a second, and no drives showed up.

Turns out this error warns you that log files are not stored permanently when booting from a USB installation drive, and it was unrelated to the missing drives (which didn’t show up because I originally had an unconfigured SSD and no configured drives installed; see the previous section to resolve this).

But when I had the hard drive configured, the install went smoothly.

It is somewhat funny that I’m working with 48GB of RAM and only 60ish GB of storage at the moment, but from here I was able to copy over my OS installation ISOs (8GB over powerline networking made it an overnight job) and bring up my first VM on the new system.

So where do we go from here?

For now, the initial goal of confirming that vSphere will install neatly on a C22 M3 with the 9220-8i RAID controller has been accomplished.

Next up, adding some more storage (maybe SSD if I can find something that will work), maybe bumping the RAM up a bit, and doing something useful with the box. It only draws 80-100 watts under light use, so I’m okay with it being a 24/7 machine, and it’s quiet and in the garage so it shouldn’t scare the neighbors.

If you’re looking to turn up an older Cisco UCS server in your home lab, get familiar with the software center on Cisco.com, as well as the Cisco Community site. Lots of useful information out there as well as on the Reddit /r/homelab site.

Have you rescued old UCS servers for your homelab? Share your thoughts or questions below, or join the conversation on Facebook and Twitter.

 

Straying into Ubiquiti territory for a home network experiment, part 1

As many of you know, I run my home, lab, and store networks primarily on Meraki gear. Employee discounts and internal system engineer promos make it a reasonably priced platform for me, but I can understand why non-Cisco employees might not build out a substantial home network on their own dime with Meraki.

Having cut directly over from the Linksys WRT1900AC as a router to a mix of MX security appliances, MS switches, and MR access points, I didn’t really take the time to evaluate other options. However, with many friends getting into Ubiquiti, I figured it was worth trying that platform out, especially when some of the devices went on sale at a local computer store.

In this post I’ll talk about the initial deployment and the gear I’ve purchased. I do have a few items from Ubiquiti that I won’t be using for this environment (like the EdgeRouters and a couple of relatively ancient 24v POE access points).

Spoiler: I’m still a big Meraki fan, and if I were deploying in a business environment where I didn’t want to tweak much or where I wanted enterprise-grade features, I’d still lean toward that platform. However, for a home network, home office, or early-stage startup, the Ubiquiti option is definitely worth a look.

Initial Bill of Materials

UC-CK Cloud Key, with two AA batteries for scale

Note that Amazon offers some combos with multiple elements, like this $349 combo with Cloud Key, Switch, and Security Gateway. You may be able to get quicker shipping and/or save a buck or two that way, but look around at the combos to see what makes the most sense. If you decide to buy multiples, there may be discounted packs of devices (like this 5-pack of AP-AC-PRO which saves you about $15 per device).

You’ll also find the items on Newegg, including Newegg on eBay, Central Computers (if you’re in the SF Bay Area), and direct from Ubiquiti. If you use the Amazon or eBay links above, we get a few bucks that will go back into gear to review here and on rsts11travel.

Why did I choose this particular gear?

UniFi Cloud Key

Like Meraki, Ubiquiti uses the concept of a “cloud controller.” Unlike Meraki, you can place the controller on your own private cloud, or purchase a “Cloud Key” to run on your own network for management. There is still a “public” website to view and manage the network, but you can access the local controller via ssh, https, or a mobile app.

Since I don’t currently have a full-time system running that could host the controller, I chose to buy the older Cloud Key. There are newer versions with more powerful controller hardware, battery backup, and more features, but since this is meant to be a basic deployment on a budget (and I wanted to pick up the Cloud Key locally), I went with the first-gen device. It’s about the size of four AA batteries; it can be powered by PoE or a USB cable, and of course it still requires a LAN connection even if powered by USB.
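For anyone who does have an always-on box and wants to skip the Cloud Key, the self-hosted controller route can be as simple as a container. Here is a minimal sketch using the community-maintained jacobalberty/unifi Docker image; the image name, tag, ports, and data path are my assumptions, not an official Ubiquiti procedure:

  # 8080 = device inform port, 8443 = web UI, 3478/udp = STUN
  # ~/unifi holds the controller's persistent data and configuration
  docker run -d --name unifi-controller \
    -p 8080:8080 -p 8443:8443 -p 3478:3478/udp \
    -v ~/unifi:/unifi \
    jacobalberty/unifi:stable

  # Then browse to https://<docker-host>:8443 to run the setup wizard.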

UniFi AC Pro

For wireless access, there are over a dozen AP models, compared and contrasted on the Ubiquiti knowledgebase. The three devices in the “wave 1” family (UniFi AC) are the Lite, the LR (long range), and the Pro. My decision on the Pro was based primarily on “ooh, it’s on sale,” but I’m pretty comfortable with the features, including the extended 5GHz radio rate of 1300 Mbps and the dual Ethernet ports for redundancy.

UniFi Switch 8 60W

The switch is meant to let me offload both the AP and the Cloud Key from their current home on my Meraki MS42P switch, so that I can put them behind the security gateway for more thorough testing. The AP uses 9 watts and the Cloud Key uses 5 watts, so the 60-watt PoE switch should be enough for the near term. There is a 150-watt version (US-8-150W, for about $190) with two additional SFP ports, if you do need more power. And interestingly, the switch is the only piece in the bill of materials that has a metal shell rather than plastic.

UniFi Security Gateway 3-port

Finally, with the USG security gateway, I get additional visibility into the Internet connection itself and my use thereof. Without the USG in the data path, I can see per-device information within my network and the status of the APs and switches, but I don’t have visibility at the network level.

Starting the deployment

I bought the access point first, and went back a day or two later for the cloud key once I decided not to run the controller on my own hardware. So the CK went up first, plugged in via the tiny Ethernet cable to a port on my Meraki PoE switch.

When I logged in, of course, it was a few firmware versions behind. I had issues with both the firmware update and “adopting” the device into my Ubiquiti cloud portal. The adoption failed, claiming the device was unreachable, and the firmware upgrade didn’t seem to start, much less complete.

So I ended up doing some minor workarounds using some steps from a community post here for the firmware update. I wish I could remember the fix for the adoption, although I suspect I’ll figure it out again on a future device and can report back then.

Once the Cloud Key was recognized, updated, and working properly, I adopted the Access Point and updated it. I configured a wireless network and went downstairs from the home office to connect my iPad to the new network and test it out.
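The AP adopted without drama here, but for future reference (mine as much as anyone’s): the usual community fix when a device refuses to adopt is to SSH into it and point it at the controller by hand. A sketch, assuming factory-default credentials and illustrative IP addresses:

  # SSH into the unadopted device (factory default login is ubnt / ubnt)
  ssh ubnt@192.168.1.20

  # On the device, tell it where the controller's inform endpoint lives,
  # then click Adopt in the controller UI and run set-inform once more
  set-inform http://192.168.1.10:8080/inform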

Not surprisingly, the network was as fast and efficient as it was through the MR34 at the same distance. I did learn from the Ubiquiti interface that there were at least 50 networks detected by the AP-AC-PRO, which was slightly surprising. Despite that, I’m seeing about 20% utilization on 2.4GHz and 3% utilization on 5GHz and noticeable but not overwhelming “interference” registering primarily on 2.4GHz.

I also realized that the extra MR34 downstairs, connected through an MS220-8P switch that was uplinked through Powerline networking, was definitely throttling my connectivity when I associated with it. Unplugging the AP forced my iPad to connect to the upstairs MR34, and I didn’t have any issues even at the distance. So for now, the Powerline network is driving two tiny Verium miners and my two printers, as well as an Intel NUC in the living room.

What comes next?

After reorganizing a bit of the home office, I’ll be turning up the USG security gateway and the 8-port switch very soon. At that point I’m likely to put all four pieces behind my secondary Internet connection (to enable the home network SLA to be maintained), and run some traffic through it.

I’m also giving serious thought to powering the USG through a PoE splitter like the Wifi Texas one ($18 on Amazon) so that all four devices can be powered from a single wall outlet (for the switch).

Check in soon for the second part of this journey, and feel free to share any suggestions, comments, references, designs, etc in the comments below.

 

 

Upgrading the HPE Microserver Gen 8 and putting it into service

A year and a half after my original write-up of the Ivy Bridge-based Gen8 Microserver, I’m finally doing a last round of updates before putting it into service and documenting the upgrades I made.

You can read the original write-up (as updated to December 2018) here: Warming up the HP Microserver Gen8 and PS1810-8G switch

More links at the end of this post. Pricing has been updated as of 2019-08-15, but is still subject to change without notice.

Where do we start?

The HPE Microserver Gen8 as I received it had the Intel Pentium G2020T processor, a dual core, dual thread, 2.5 GHz processor with integrated Intel HD Graphics. For an ultra-low-end workgroup or SOHO server, that’s not too bad, and it’s better than the Celeron G1610T option.

Stock processor options for the HP Microserver Gen8

But since we’re not worried about the warranty and do want a bit more power, we looked at the following options for a CPU upgrade.

Xeon processor    CPU speed (GHz)   C/T   TDP (W)   Integrated graphics?   eBay price, Aug 2019 (Dec 2018)
E3-1230 v2        3.30 – 3.70       4/8   69        No                     $49.00 (was $75.00)
E3-1260L (v1)     2.40 – 3.30       4/8   45        HD2000                 $34.30 (was $57.00)
E3-1265L v2       2.50 – 3.50       4/8   45        HD2500                 $99.00 (was $100.00)

Since we didn’t have a use case in mind for this, we went for the E3-1265L v2 processor. CPU speed is reasonable, power is within the envelope for this system’s cooling capacity, and the price didn’t turn out too bad (although it was almost twice as much a year and a half ago).

The system arrived with 16GB of memory, which is the maximum supported with this generation of processor and a two-DIMM-slot motherboard (the CPU will handle 32GB but no more than 8GB per DIMM, and the Memphis Electronics 16GB DDR3 DIMMs require a newer generation of CPU).

The system also shipped with a single 500GB SATA drive and three empty trays for expansion, connected to the onboard B120i storage controller. There’s a low-profile slot at the top suitable for an optical drive or a hard drive carrier. According to the specs, the first two bays are 6Gbps SATA and the last two bays are 3Gbps SATA. You can add a P222 Smart Array controller to provide battery-backed cache and expanded RAID options; these can be had for as low as $25 on eBay.

I installed a 32GB Micro-SD card for OS boot. Like the previous Microservers, the Gen8 offers an internal USB port, but Gen8 adds a MicroSD slot which may be less likely to snap off during maintenance. If I were running a heavy duty Windows or Linux server on this machine, I’d probably either put an SSD on a PCIe carrier card or use the optical drive SATA connector on the board to mount a boot drive in the optical bay. But for VMware or appliance-type platforms, or for light use Linux, the MicroSD should be enough.

Bringing the Microserver Gen8 up to date

One of the first things I do when building or populating a system is to upgrade any applicable firmware on the system. This could include the lights-out management, the system BIOS itself, drive controllers, optical drives, etc.

This gets complicated with HPE gear, as they decided to restrict all but “critical” BIOS updates to customers with active support contracts or warranties. There are dubious workarounds, but it’s more of a pain than with any other mainstream vendor. Luckily (and I say that sadly), some of the critical vulnerabilities around Intel microcode in the past year led to the most recent Microserver Gen8 BIOS being considered critical.

So I gathered the latest BIOS, the ILO 4 firmware for out-of-band management, and the latest firmware for the PS1810-8G switch that this system will be connected to. (Unlike the computer systems, HPE’s networking gear carries a lifetime limited warranty and free access to firmware updates.)

With the switch connected to our upstream PoE switch and the Microserver’s three network ports (two gigabit LAN, one ILO) connected to the switch, I upgraded the firmware on all three components and installed CentOS 7 from the latest ISO image via an external USB flash drive. Additionally, I got a free 60-day trial license for ILO 4 Advanced from HPE.
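In case it’s useful, writing the install ISO to a flash drive from a Linux box is a one-liner. Here is a rough sketch, with the ISO filename and /dev/sdX as placeholders (double-check the device with lsblk first, since dd will cheerfully overwrite the wrong disk):

  lsblk                                    # identify the flash drive; /dev/sdX below is a placeholder
  sudo dd if=CentOS-7-x86_64-Minimal.iso of=/dev/sdX \
    bs=4M status=progress conv=fsync       # destroys anything already on /dev/sdX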

One quirk I ran into was with regard to the .NET-based remote console and Chrome browser. In short, it doesn’t work unless you install a plugin to handle the .NET launching. I didn’t want to bother with Java either, so I accessed ILO from Microsoft Edge and used the .NET option from there.

Where do we go from here?

In the near term, I’m planning to install the Aquantia AQN-107 10GBase-T/NBase-T adapter and use it to test a couple of new devices in the home lab. Linux with iPerf or the like should be a good endpoint, and with a Thunderbolt 3-to-NBase-T adapter and an economical NBase-T/10G switch to work with, it should be compact and functional.
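As a concrete example of the endpoint testing I have in mind, an iperf3 run between the Microserver and another 10G/NBase-T host would look roughly like this (the host name is a placeholder):

  # On the Microserver, start the server side
  iperf3 -s

  # On the other host, run a 30-second test, then reverse direction with -R
  iperf3 -c microserver.lan -t 30
  iperf3 -c microserver.lan -t 30 -R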

Longer term, with the former VMware “$25 server” being converted to EdgeLinux (from the makers of the Antsle servers we wrote about here and here), I will probably have this box serve as my in-home vSphere / ESXi system.

There’s a very small chance that I’ll break down and get the new Gen10 machine, but with as many spare computers as I have in the home lab now, it’s not a high priority.

What have you done with your Microserver recently? Share in the comments, or join the conversation on Facebook or Twitter.

For more information on the Microserver Gen 8 (especially around expandability):

HomeServerShow.com has an exhaustive page on Gen8 upgrades and other features and functions.

ServeTheHome has their release-time update on the Gen8 system here: HP ProLiant Microserver Gen8 Updated Specs and Pricing

And if you want the latest and greatest, the Microserver Gen10 came out a year ago with AMD Opteron X3000 processors.