Cisco C22 M3 “Build” report: From Zero to vSphere in… two days?

Hi folks. The pile of project boxes in my home lab has gotten taller than I am, so when a Twitter follower asked me about running VMware vSphere on one of the systems not too far down in the stack, I took the challenge and said I’d try to get it going to see what I could report back.

Disclosure: While my day job is with Cisco, this computer was purchased out of my own pocket and used no proprietary/employee-only access to software or information. I do not provide end-user support for Cisco gear, nor do I recommend using used/aftermarket gear for production environments.

That system is a now-discontinued Cisco UCS C22 M3S. Yes, C22, not C220. It was an economy variant of the C220, more or less, with a lower cost and lower supported memory capacity as I recall. The one I have features a pair of Intel Xeon E5-2407 v2 processors (quad core 2.4GHz) and 48GB of RAM. The RAID controller is LSI’s 9220-8i, and for now I have a single 73GB hard drive installed because that’s what I found on my bench.

This is a standalone system, even though it’s sitting underneath a freshly updated UCS 6296 Fabric Interconnect. I have the two on-board Gigabit Ethernet ports as well as a 4-port Gigabit Ethernet add-on card. (As noted in the disclosure above, this box came from a local auction, not through work.)

Warming up the system

The first thing I needed to do was make sure firmware, management controller, and so forth were up to date and usable. Cisco has long followed the industry standard in servers by making firmware and drivers freely available. I wrote about this back in 2014, when HPE decided to buck the standard, even before I worked for Cisco. You do have to register with a valid email address, but no service contract or warranty is required.

Since I was going to run this machine in standalone mode, I went to the Cisco support site and downloaded the Host Update Utility (HUU) in ISO form.

Updating firmware with the Host Update Utility (HUU) ISO

I loaded up Balena Etcher, a program used to write ISO images and other disk formats to USB flash drives. USB ports are easy to come by on modern computers, but optical drives are not as common. I “burned” the ISO to a flash drive and went to boot it up on the C22.

No luck. I got an error message on screen as the Host Update Utility loaded, referring to Error 906, “firmware copy failed.”

Doing some searching, I found that there were quirks to the bootability of the image. A colleague at Cisco had posted a script to the public community site in 2014, and updated it in 2017, to resolve this issue. So I brought up my home office Linux box (ironically an HPE Microserver Gen8 that I wrote about in January), copied the script and the ISO over, and built the USB drive again with his script. This time it worked.

Recovering a corrupted BIOS flash image with recovery.cap

Alas, while four of the five firmware components upgraded, the BIOS upgrade was corrupted somehow. Probably my fault, but either way I had to resolve it before I could move forward.

Corrupt bios recovery, before and after

Seemed pretty obvious, and I figured the recovery.cap file would have been copied to the flash drive upon boot, but I figured wrong. You have to extract it from a squashfs archive inside the HUU ISO file. There’s even a ‘getfw’ program in the ISO to do this. Easy, right?

Of course not.

Turns out newer versions of OpenSSL won’t decrypt the filesystem image and extract the needed file, and even my year-out-of-date CentOS 7 box was too new. So I spun up a VM with the original CentOS 7 image and extracted there.

  1. Get the HUU for your system and UCS version (don’t use a C22 BIOS on a C240 or vice versa, for example).
  2. Mount or extract the ISO file.
  3. Copy the GETFW/getfw binary out.
  4. Unmount the ISO file.
  5. Run ./getfw -b -s <HUU ISO FILE> -d .

This will drop a “bios.cap” file in the current directory. Rename it to “recovery.cap”, put it on a flash drive (a plain DOS-formatted one is fine), plug the drive into the system, and reset the machine. You’ll go from the first screen’s “Could not find a recovery.cap file..” message to the second screen showing the transfer to the controller. And in a few minutes, your system should be recovered.
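The steps above can be sketched as a small shell script. The ISO filename and mount point here are hypothetical stand-ins (use the HUU image for your own model), and the loop mount needs root:

```shell
#!/bin/sh
# Sketch of the recovery.cap extraction steps above.
# The HUU filename and mount point below are hypothetical.
HUU_ISO="ucs-c22-huu-1.5.4.iso"
MNT="/mnt/huu"

if [ -f "$HUU_ISO" ]; then
    mkdir -p "$MNT"
    mount -o loop "$HUU_ISO" "$MNT"   # step 2: mount the ISO (needs root)
    cp "$MNT/GETFW/getfw" .           # step 3: copy the extractor out
    umount "$MNT"                     # step 4: unmount
    chmod +x ./getfw
    ./getfw -b -s "$HUU_ISO" -d .     # step 5: drops bios.cap here
    mv bios.cap recovery.cap          # the name the recovery process expects
else
    echo "Download the HUU ISO for your server model first." >&2
fi
```

Run it from the directory where you want recovery.cap to land, then copy that file to the flash drive.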

Preparing to boot the system

This is the easiest part in most cases, but there are a couple of things you may have to modify in the Integrated Management Controller (IMC) and the LSI WebBIOS interface.

Set your boot order. I usually go USB first (so I don’t have to catch the F6 prompt) followed by the PCIe RAID card. The RAID card will only show up if supported and bootable drives are installed though. This can be changed on the fly if you like, but I prefer to do it up front.

Check your RAID controller settings. Follow the BIOS screen instructions for entering WebBIOS (the text interface for configuring the RAID card), and make sure you have disks presented as virtual drives. I had plugged in a UCS drive and a random SSD, and only the UCS drive (a 73GB SAS drive) showed up. It did not appear in the F6 Boot Order menu, though, as it was not set bootable in WebBIOS. A few key taps fixed this, and the drive appeared. Again, you can change the boot order after installing, but why not do it first?

Moving forward with VMware installation

This is the easy part, more or less. I went to VMware’s site and grabbed the Cisco custom ISO (which should have current drivers and configurations for Cisco components, especially the RAID controller and network cards). You can also install with the standard vSphere installer if you like.

I burned the 344 MB ISO to a flash drive, finding again that Etcher didn’t like it (complaining that it was not a bootable ISO) but Rufus did. With a Rufus-burned 8GB drive (choose “yes” to replace menu.c32, by the way), I was able to install the vSphere system and bring it up.

On the first install attempt, I saw this message for about a second, and no drives showed up.

Turns out this error warns you that log files are not stored permanently when booting from a USB installation drive; it was unrelated to the missing drives (which didn’t show up because I originally had an unconfigured SSD and no configured drives installed; see the previous section to resolve this).

But when I had the hard drive configured, the install went smoothly.

It is somewhat funny that I’m working with 48GB of RAM and only 60ish GB of storage at the moment, but from here I was able to copy over my OS installation ISOs (8GB over powerline networking made it an overnight job) and bring up my first VM on the new system.
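If you’d rather script the ISO copy than drag files through the datastore browser, scp straight into a datastore works once SSH is enabled on the ESXi host. The host address and paths below are hypothetical:

```shell
# Copy staged installation ISOs to an ESXi datastore over SSH.
# Address and datastore path are hypothetical; enable SSH on the host first.
ESXI_HOST="192.168.1.50"
DATASTORE_DIR="/vmfs/volumes/datastore1/iso"

for iso in ./isos/*.iso; do
    [ -f "$iso" ] || continue    # skip if nothing is staged locally
    scp "$iso" "root@${ESXI_HOST}:${DATASTORE_DIR}/"
done
```

Over powerline networking this will still be an overnight job, but at least it’s a hands-off one.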

So where do we go from here?

For now, the initial goal of confirming that vSphere will install neatly on a C22 M3 with the 9220-8i RAID controller has been accomplished.

Next up, adding some more storage (maybe SSD if I can find something that will work), maybe bumping the RAM up a bit, and doing something useful with the box. It only draws 80-100 watts under light use, so I’m okay with it being a 24/7 machine, and it’s quiet and in the garage so it shouldn’t scare the neighbors.

If you’re looking to turn up an older Cisco UCS server in your home lab, get familiar with the software center on Cisco.com, as well as the Cisco Community site. Lots of useful information out there as well as on the Reddit /r/homelab site.

Have you rescued old UCS servers for your homelab? Share your thoughts or questions below, or join the conversation on Facebook and Twitter.


Upgrading the HPE Microserver Gen 8 and putting it into service

A year and a half after my original write-up of the Ivy Bridge-based Gen8 Microserver, I’m finally doing a last round of pre-launch updates and documenting the upgrades I made.

You can read the original write-up (as updated to December 2018) here: Warming up the HP Microserver Gen8 and PS1810-8G switch

More links at the end of this post. Pricing has been updated as of 2019-08-15, but is still subject to change without notice.

Where do we start?

The HPE Microserver Gen8 as I received it had the Intel Pentium G2020T processor, a dual core, dual thread, 2.5 GHz processor with integrated Intel HD Graphics. For an ultra-low-end workgroup or SOHO server, that’s not too bad, and it’s better than the Celeron G1610T option.

Stock processor options for the HP Microserver Gen8

But since we’re not worried about the warranty and do want a bit more power, we looked at the following options for a CPU upgrade.

Xeon Processor   CPU speed (GHz)   C/T   TDP (W)   Integrated graphics?   eBay price, August 2019 (December 2018)
E3-1230 v2       3.30 – 3.70       4/8   69        No                     $49.00 (was $75.00)
E3-1260L (v1)    2.40 – 3.30       4/8   45        HD 2000                $34.30 (was $57.00)
E3-1265L v2      2.50 – 3.50       4/8   45        HD 2500                $99.00 (was $100.00)

Since we didn’t have a use case in mind for this, we went for the E3-1265L v2 processor. CPU speed is reasonable, power is within the envelope for this system’s cooling capacity, and the price didn’t turn out too bad (although it was almost twice as much a year and a half ago).

The system arrived with 16GB of memory, which is the maximum supported with this generation of processor and a two-DIMM-slot motherboard (the CPU will handle 32GB but no more than 8GB per DIMM, and the Memphis Electronics 16GB DDR3 DIMMs require a newer generation of CPU).

The system also shipped with a single 500GB SATA drive and three empty trays for expansion, connected to the onboard B120i storage controller. There’s a low-profile slot at the top suitable for an optical drive or a hard drive carrier. According to the specs, the first two bays are 6Gbit/s SATA and the last two are 3Gbit/s SATA. You can add a P222 Smart Array controller to provide battery-backed cache and expanded RAID options; these can be had for as low as $25 on eBay.

I installed a 32GB microSD card for OS boot. Like the previous Microservers, the Gen8 offers an internal USB port, but the Gen8 adds a microSD slot, which may be less likely to snap off during maintenance. If I were running a heavy-duty Windows or Linux server on this machine, I’d probably either put an SSD on a PCIe carrier card or use the optical drive SATA connector on the board to mount a boot drive in the optical bay. But for VMware or appliance-type platforms, or for light-use Linux, the microSD should be enough.

Bringing the Microserver Gen8 up to date

One of the first things I do when building or populating a system is to upgrade any applicable firmware on the system. This could include the lights-out management, the system BIOS itself, drive controllers, optical drives, etc.

This gets complicated with HPE gear, as they decided to restrict all but “critical” BIOS updates to customers with active support contracts or warranties. There are dubious workarounds, but it’s more of a pain than with any other mainstream vendor. Luckily (and I say that sadly), some of the critical vulnerabilities around Intel microcode in the past year led to the most recent Microserver Gen8 BIOS being considered critical.

So I gathered the latest BIOS, the iLO 4 firmware for out-of-band management, and the latest firmware for the PS1810-8G switch that this system will be connected to. (Unlike the computer systems, HPE’s networking gear carries a lifetime limited warranty and free access to firmware updates.)

With the switch connected to our upstream PoE switch and the Microserver’s three network ports (two Gigabit LAN, one iLO) connected to the switch, I upgraded the firmware on all three components and installed CentOS 7 from the latest ISO image via an external USB flash drive. Additionally, I got a free 60-day trial license for iLO 4 Advanced from HPE.

One quirk I ran into involved the .NET-based remote console and the Chrome browser. In short, it doesn’t work unless you install a plugin to handle the .NET launching. I didn’t want to bother with Java either, so I accessed iLO from Microsoft Edge and used the .NET option from there.

Where do we go from here?

In the near term, I’m planning to install the Aquantia AQN-107 10GBase-T/NBase-T adapter and use it to test a couple of new devices in the home lab. Linux with iPerf or the like should be a good endpoint, and with a Thunderbolt 3-to-NBase-T adapter and an economical NBase-T/10G switch to work with, it should be compact and functional.
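As a sketch of that endpoint test, assuming iperf3 is installed on both machines and using a made-up server address:

```shell
# Rough throughput test between the NBase-T endpoints.
# Run "iperf3 -s" on the Linux box first; the address below is hypothetical.
SERVER="192.168.1.20"

if command -v iperf3 >/dev/null 2>&1; then
    # 30-second run, four parallel streams; bail out quickly if unreachable
    iperf3 -c "$SERVER" -t 30 -P 4 --connect-timeout 3000 || \
        echo "could not reach $SERVER" >&2
else
    echo "install iperf3 on both endpoints first" >&2
fi
```

Multiple parallel streams help show whether a single TCP flow, rather than the link, is the bottleneck.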

Longer term, with the former VMware “$25 server” being converted to EdgeLinux (from the makers of the Antsle servers we wrote about here and here), I will probably have this box serve as my in-home vSphere / ESXi system.

There’s a very small chance that I’ll break down and get the new Gen10 machine, but with as many spare computers as I have in the home lab now, it’s not a high priority.

What have you done with your Microserver recently? Share in the comments, or join the conversation on Facebook or Twitter.

For more information on the Microserver Gen 8 (especially around expandability):

HomeServerShow.com has an exhaustive page on Gen8 upgrades and other features and functions.

ServeTheHome has their release-time update on the Gen8 system here: HP ProLiant Microserver Gen8 Updated Specs and Pricing

And if you want the latest and greatest, the Microserver Gen10 came out a year ago with AMD Opteron X3000 processors.

Looking ahead into 2019 with rsts11

This is becoming somewhat of a tradition… I’ll point you toward a Tom Hollingsworth post and then figure out what I want to look back on a year from now. As long as Tom’s okay with that, I am too.

This year, Tom’s New Year’s post is about content. He seems to think 2019 is the King of Content. I’m not really sure what that means, but seeing as my blogs seem to be alternately seasonal (with most rsts11 content in the winter/spring and rsts11travel in the summer/fall), I’m hoping to get a more balanced content load out there for you this year on both blogs.

You can see the new year’s post for rsts11travel, my travel-themed blog, over on rsts11travel of course.

Looking back on 2018

Looking back on rsts11 for 2018, our top-viewed posts were a bit surprising to me.


One Size Fits All: Hyper-V on VMware turf, custard trucks, and IT evangelism

At VMworld 2013 in San Francisco, there was a lot of buzz around Hyper-V, oddly enough. A few vendors mentioned multi-hypervisor heterogeneous cloud technologies in hushed tones, more than a few attendees bemoaned the very recent death of Microsoft TechNet Subscription offerings, and guess who showed up with a frozen custard truck?

Yep, Microsoft’s server team showed up, rented out and re-skinned a Frozen Kuhsterd food truck, and handed out free frozen custard as a way to promote and discuss their own virtualization platform and new publicity initiative, branded Virtualization2.

The frozen custard was pretty tasty. Well worth the 3 block walk from Moscone. It was a pretty effective way to get attention and mindshare as well–several people I spoke with were impressed with the marketing novelty and the reminder that VMware isn’t the only player in the game, even if one friend considered it an utter failure due to the insufficient description of frozen custard.

Almost two years ago when I did my Virtualization Field Day experience, the question I asked (and vendors were usually prepared to answer) was “when will you support Xen in addition to VMware?” This year, it’s more “when will you support Hyper-V?” So a lot of people are taking Microsoft seriously in the virtualization market these days.

Insert Foot, Pull Trigger

One nominal advantage Microsoft has had over VMware in the last few years is an affordable way for IT professionals to evaluate their offerings for more than two months at a time. But first, some history.

Once upon a time, VMware had a program called the VMTN (VMware Technology Network) Subscription. For about $300 a year, you got extended-use licenses for VMware’s products for non-production use. No 60-day time bomb, no six-reinstalls-a-year for the home lab; you could focus on learning and maybe even mastering the technology.

Well, in February 2007, VMware did away with the VMTN subscription. You can still see the promo/signup page on their site but you’re not going to be able to sign up for it today.

At that point, Microsoft had the advantage in that their TechNet Subscription program gave you a similar option. For about $300/year you could get non-production licenses for most Microsoft products, including servers and virtualization. I would imagine that a few people found it easier to test and develop their skills in that environment, rather than in the “oops, it’s an odd month, better reinstall the lab from scratch” environment that VMware provided.

Well, as of today, September 1, the TechNet Subscription is no more. If you signed up or renewed by the end of August 31, you get one more year and then your licenses are no longer valid. If you wanted some fresh lab license love today, you’re out of luck.

Technically, you can get an MSDN subscription for several thousand dollars and have the same level of access. The Operating Systems level is “only” $699 (want other servers? You’re looking at $1199 to $6119). Or if you qualify for the Microsoft Partner Program as an IT solutions provider, you can use the Action Pack Solution Provider to get access to whatever is current in the Microsoft portfolio for about $300/year. But the latter is tricky in that you need to be a solutions provider and jump through hoops, and the former is tricky because you might not have several thousand dollars to send to Redmond every year.

Help me, Obi-Wan vExpert, you’re my only hope

In 2011, Mike Laverick started a campaign to reinstate the VMTN subscription program. The thread on the VMware communities forum is occasionally active even two years later. But after two years of increasing community demand and non-existent corporate support, a light appeared at the end of the tunnel last week at VMworld in San Francisco.

As Chris Wahl reported, Raghu Raghuram, VMware Executive Vice President of Cloud Infrastructure and Management, said the chances of a subscription program returning are “very high.” Chris notes that there’s not much detail beyond this glimmer of hope, but it’s more hope than we’ve had for most of the last 6 years. For those of you who remember Doctor Who between 1989 and 2005, yeah, it’s like that.

Today, your choices for a sustainable lab environment include being chosen as a vExpert (or possibly a Microsoft MVP–not as familiar with that program’s somatic components) with the ensuing NFR/eval licenses; working for a company that can get you non-expiring licenses; unseemly licensing workaround methods we won’t go into; or simply not having a sustainable lab environment.

I added my voice to the VMTN campaign quite a while ago. When nothing came of that campaign, and I found myself more engaged in the community, I applied for (and was chosen for) vExpert status. So the lab fulcrum in my environment definitely tilts toward the folks in Palo Alto, not Redmond.

But I did mention to the nice young lady handing out tee shirts at the Microsoft Custard Truck that I’d be far more likely to develop my Hyper-V skills if something like TechNet subscription came back. She noted this on her feedback notebook, so I feel I’ve done my part. And I did get a very comfy tee shirt from her.

When I got back to my hotel, I found that the XL shirt I’d asked for was actually a L. Had I not been eating lightly and walking way too much, it wouldn’t have come anywhere near fitting, and it probably won’t any more, now that I’m back to normal patterns. But maybe that size swap was an analogy for a bigger story.

One size doesn’t fit all.

If Microsoft and VMware can’t make something happen to help the new crop of IT professionals cut their teeth on those products, they’ll find the new technologists working with other products. KVM is picking up speed in the market, XenServer is moving faster toward the free market (and now offers a $199 annual license if you want those benefits beyond the free version), and people who aren’t already entrenched in the big two aren’t likely to want to rebuild their lab every two months.

And when you layer OpenStack or CloudStack (yeah, it’s still around) on top of the hypervisor, the hypervisor becomes a commodity. So the benefits of vCenter Server or the like become minimal to non-existent.

So where do we go from here?

Best case, VMware comes up with a subscription program, and Microsoft comes up with something as well. Then you can compare them on even footing and go with what works for you and your career.

Worst case, try to live with the 60-day trial of vCenter and related products. If your company is a VMware (or Microsoft) virtualization customer, see if your sales team can help, or at least pass along the feedback that you want to be able to work in a lab setting and spend more time testing than reinstalling.

And along the way, check out the other virtualization players (and the alternatives to VMware and Microsoft management platforms… even Xtravirt’s vPi for Raspberry Pi). Wouldn’t hurt to get involved in the respective communities, follow some interesting folks on Twitter and Google+, and hope for the best.

Did you say something about Doctor Who up there?

Yeah, and I should share something else with you.

When I saw the mention of the custard truck, my first thought was honestly not frozen concoctions in general. Obviously, it was the first Matt Smith story on Doctor Who, “The Eleventh Hour,” wherein he tries to find some food to eat at Amy Pond’s home after regenerating. He ends up going with fish fingers (fish sticks) and custard (not the frozen kind).

So I made a comment on Twitter, not directed at anyone, saying “I’d have more respect for Microsoft’s Hyper-V Custard if fish fingers were offered on the side.”

And this really happened.

So even if they’re discouraging me and other technologists from effectively labbing their products, I have to give them credit for a sense of humor. Not usually what you expect to come out of Redmond, now is it?

Related Links:

Mr Jones posted an article that really annoyed me until I read his well-reasoned response to the well-reasoned comments. Check out his interpretation of the TechNet subscription and brave the comments for some very sane discussions.

A couple of pieces from the Microsoft team about their marketing activity. Fun read, and the source of the truck photos above.

tardis.wikia.com definitions and a BBC video clip from YouTube, to help you understand the Twitter exchange.

NUC NUC (Who’s there?) VMware lab…

VMware lab who? VMware lab in pocket-size format!

So in our last installment, I found out that I can upgrade my Shuttle SH67H ESXi servers to support Ivy Bridge processors. If you want to read more about that, feel free to visit my Compact VMware Server At Home post from Winter 2012, and my Upgrading my home VMware lab with Ivy Bridge post from Spring 2013.

The replacement boards came in from Shuttle, and they’ll be going back into the chassis. But as you may have seen at the end of the last post, I discovered the Intel Next Unit of Computing server line. The NUC line currently includes three models.

  • DC3217IYE – i3-3217U processor (1.8 GHz dual core with 3MB cache), dual HDMI, Gigabit Ethernet, at $293 (pictured)
  • DC3217BY – i3-3217U processor, single HDMI, single Thunderbolt – no native Ethernet – at $323
  • DCCP847DYE – Celeron 847 (1.1 GHz dual core with 2MB L3 cache), dual HDMI, Gigabit Ethernet, at $172
    (Prices are estimated list from Intel’s site–probably cheaper by a buck or ten at Amazon, Fry’s, Central Computer, or your favorite retailer. Feel free to use my links and help me buy the next one. 🙂 )

All three have three USB 2.0 ports outside (one front, two rear), as well as two USB headers inside, conceivably useful for a USB flash drive or flash reader. They also have an mSATA-capable mini-PCIe slot as well as a short mini-PCIe slot suitable for a WiFi/Bluetooth card. And there are two DDR3 SODIMM slots, supporting a reported 16GB of RAM (the processor supports 32GB, but the board/kit do not mention this). They all include VT-x with EPT.

I don’t see the Celeron being that useful for virtualization labs, but these are rather multifunctional for a little 4″ square computer. Imagine putting a broadband modem (3G/4G/WiMAX) inside for reasonably potent portable kiosk purposes (VESA mount kit included). A card reader and a DVD burner for saving and sharing (and even editing) photos. Intel’s WiDi wireless display technology is supported as well, if you have a suitable receiver. Or use it with a portable projector for presentations on the go (no more fiddling with display adapters for presentations at your meetings!).

But we’re talking about a VMware lab here.

And let me get this out of the way… this was one of the coolest features of the NUC.


That’s correct, the box has its own sound effects.

Let’s get this party started…

Those of you of my era and inclinations may remember when KITT’s brain was removed and placed in a non-vehicle form factor on the original Knight Rider TV series. When I got ready to RMA my Shuttle motherboards, I was thinking about this sort of effect for a couple of VMs on the in-service ESXi server that was about to have its board sent to southern California. And that’s what I had to do. I couldn’t quite miniaturize the server Orac-style, but that thought had crossed my mind as well.

So I picked up the DC3217IYE unit at Fry’s, got an mSATA 64GB card (Crucial m4 64GB CT064M4SSD3) and a spare low-profile USB flash drive (Patriot Autobahn 8GB, PSF8GLSABUSB) at Central Computers, and took a Corsair 16GB DDR3 kit (CMSO16GX3M2A1333C9) from my stock. Assembling it took only a few minutes and a jeweler’s screwdriver, and then I was ready to install ESXi.

I backed up the VMs from the original system using vSphere Client, so that I could re-deploy them later to the NUC. Someday I’ll get Veeam or something better going to actively back up and replicate my VMs, but for the limited persistent use of my cluster (Cacti and MediaWiki VMs) this was sufficient.
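The vSphere Client export can also be scripted with VMware’s free OVF Tool. A hedged sketch, with a hypothetical host address standing in for the real one and one of the VM names above:

```shell
# Export a powered-off VM to an OVF package with VMware's OVF Tool.
# The ESXi host address here is a hypothetical stand-in.
ESXI_HOST="192.168.1.50"
VM_NAME="cacti"
OUT_DIR="./ovf-backups"

mkdir -p "$OUT_DIR"
if command -v ovftool >/dev/null 2>&1; then
    ovftool "vi://root@${ESXI_HOST}/${VM_NAME}" "$OUT_DIR/${VM_NAME}.ovf" || \
        echo "export failed; check credentials and VM name" >&2
else
    echo "install VMware OVF Tool first" >&2
fi
```

The same tool deploys the OVF back to a new host, which is handy when the destination is a freshly built NUC.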

One gotcha: Fixing the NUC network…

I originally tried reusing the 4GB USB drive my existing server was booting from, but it didn’t recognize the Ethernet interface. I installed a fresh ESXi 5.0u2 on a new flash drive, and still no luck. I found a post at tekhead.org that detailed slipstreaming the new driver into ESXi’s install ISO. I did so, installed again, and was up and running.

I did have to create a new datastore on the mSATA card — my original server had used a small Vertex 2 SSD from OCZ, which obviously wouldn’t work here. But I was able to upload my backed up OVF files and bring up the VMs very quickly.

And one warning I’ll bring up is that the unit does get a bit warm, and if you use a metal USB flash drive, it will get hot to the touch. My original ESXi lab box used a plastic-shelled USB drive, and I’m inclined to go back to that.

What’s next, Robert?

My next step is going to be bringing central storage back. There is a new HP MicroServer N54L on the market, but my N40L should be sufficient for now–put the 16GB upgrade in and load it up with drives. As those of you who saw my lab post last year know, it was running FreeNAS 8, but I’m thinking about cutting over to NexentaStor Community Edition.

I’ve taken the original Shuttle box and replaced a mid-tower PC with it for my primary desktop. I will probably set the other one up with a Linux of some sort.

And in a week or so I’ll grab a second NUC and build it out as a second cluster machine for the ESXi lab. All five of them are slated to go into my new EXPEDIT shelving thingie in the home office, and I’ll bring you the latest on these adventures as soon as they happen.