Cisco C22 M3 “Build” report: From Zero to vSphere in… two days?

Hi folks. The pile of project boxes in my home lab has gotten taller than I am, so when a Twitter follower asked me about running VMware vSphere on one of the systems not too far down in the stack, I took the challenge and said I’d try to get it going and report back.

Disclosure: While my day job is with Cisco, this computer was purchased out of my own pocket and used no proprietary/employee-only access to software or information. I do not provide end-user support for Cisco gear, nor do I recommend using used/aftermarket gear for production environments.

That system is a now-discontinued Cisco UCS C22 M3S. Yes, C22, not C220. It was more or less an economy variant of the C220, with a lower cost and, as I recall, a lower supported memory capacity. The one I have features a pair of Intel Xeon E5-2407 v2 processors (quad-core, 2.4GHz) and 48GB of RAM. The RAID controller is LSI’s 9220-8i, and for now I have a single 73GB hard drive installed, because that’s what I found on my bench.

This is a standalone system, even though it’s sitting underneath a freshly updated UCS 6296 Fabric Interconnect. I have the two on-board Gigabit Ethernet ports as well as a 4-port Gigabit Ethernet add-on card. And as noted in the disclosure above, while I do work for Cisco and probably could have gotten a better box through work, I bought this one at a local auction out of my own pocket.

Warming up the system

The first thing I needed to do was make sure the firmware, management controller, and so forth were up to date and usable. Cisco has long followed the industry standard in servers by making firmware and drivers freely available. I wrote about this back in 2014, before I worked for Cisco, when HPE decided to buck that standard. You do have to register with a valid email address, but no service contract or warranty is required.

Since I was going to run this machine in standalone mode, I went to the Cisco support site and downloaded the Host Update Utility (HUU) in ISO form.

Updating firmware with the Host Update Utility (HUU) ISO

I loaded up Balena Etcher, a program used to write ISO images and other disk formats to USB flash drives. USB ports are easy to come by on modern computers, but optical drives are not as common. I “burned” the ISO to a flash drive and went to boot it up on the C22.

No luck. I got an error message on screen as the Host Update Utility loaded, referring to Error 906, “firmware copy failed.”

Doing some searching, I found that there were quirks to the bootability of the image. A colleague at Cisco had posted a script to the public community site in 2014 (updated in 2017) that resolves this issue. So I brought up my home office Linux box (ironically an HPE Microserver Gen8 that I wrote about in January), copied the script and the ISO over, and burned the USB drive again with his script. This time it worked.

Recovering a corrupted BIOS flash image with recovery.cap

Alas, while four of the five firmware components upgraded, the BIOS upgrade was corrupted somehow. Probably my fault, but either way I had to resolve it before I could move forward.

Corrupt BIOS recovery, before and after

The fix seemed pretty obvious, and I figured the recovery.cap file would have been copied to the flash drive upon boot, but I figured wrong. You have to extract it from a squashfs archive inside the HUU ISO file. There’s even a ‘getfw’ program in the ISO to do this. Easy, right?

Of course not.

Turns out newer versions of OpenSSL won’t decrypt the filesystem image and extract the needed file, and even my year-out-of-date CentOS 7 box was too new. So I spun up a VM from the original CentOS 7 image and did the extraction there.

  1. Get the HUU for your system and UCS version (don’t use a C22 BIOS on a C240 or vice versa, for example).
  2. Mount or extract the ISO file.
  3. Copy the GETFW/getfw binary out.
  4. Unmount the ISO file.
  5. Run ./getfw -b -s <HUU ISO FILE> -d .

This will drop a “bios.cap” file in the current directory. Rename it to “recovery.cap”, put it on a flash drive (a plain DOS-formatted one is fine), insert the drive into the system, and reset your machine. You’ll go from the first screen (“Could not find a recovery.cap file..”) to the second screen, which transfers the file to the controller. In a few minutes, your system should be recovered.
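Putting the whole sequence together, here’s roughly what it looks like as a shell session. Treat this as a sketch rather than gospel: the ISO filename and mount point are illustrative, and as noted above, it needs to run somewhere with an old enough OpenSSL (the original CentOS 7 image worked for me).

  mkdir /mnt/huu
  mount -o loop ucs-c22m3-huu.iso /mnt/huu     # loop-mount the HUU ISO (filename illustrative)
  cp /mnt/huu/GETFW/getfw .                    # copy the extractor out of the ISO
  umount /mnt/huu
  ./getfw -b -s ucs-c22m3-huu.iso -d .         # extract the BIOS capsule as bios.cap
  mv bios.cap recovery.cap                     # rename, then copy to a DOS-formatted flash drive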

Preparing to boot the system

This is the easiest part, in most cases, but there are a couple of things you may have to modify in the Integrated Management Controller (IMC) and the LSI WebBIOS interface.

Set your boot order. I usually go USB first (so I don’t have to catch the F6 prompt) followed by the PCIe RAID card. The RAID card will only show up if supported and bootable drives are installed though. This can be changed on the fly if you like, but I prefer to do it up front.

Check your RAID controller settings. Follow the BIOS screen instructions for entering WebBIOS (the text interface for configuring the RAID card), and make sure your disks are presented as virtual drives. I had plugged in a UCS drive and a random SSD, and only the UCS drive (a 73GB SAS drive) showed up. It did not appear in the F6 Boot Order menu, though, as it was not set bootable in WebBIOS. A few key taps fixed this, and the drive appeared. Again, you can change the boot order after installing, but why not do it first?

Moving forward with VMware installation

This is the easy part, more or less. I went to VMware’s site and grabbed the Cisco custom ISO (which should have current drivers and configurations for Cisco components, especially the RAID controller and network cards). You can also install with the standard vSphere installer if you like.

I burned the 344 MB ISO to a flash drive, finding again that Etcher didn’t like it (complaining that it was not a bootable ISO) but Rufus did. With a Rufus-burned 8GB drive (choose “yes” to replace menu.c32, by the way), I was able to install the vSphere system and bring it up.

On the first install attempt, I did see this message for about a second, and had no drives show up.

Turns out this error warns you that log files are not stored permanently when booting from a USB installation drive, and it was unrelated to the missing drives (which didn’t show up because I originally had an unconfigured SSD and no configured drives installed; see the previous section to resolve this).
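As an aside, the same impermanence applies if you end up running ESXi itself from a USB or SD device down the road. VMware’s documented fix is to point the scratch location at persistent storage. A minimal sketch from the ESXi shell, assuming SSH is enabled and a datastore named datastore1 (both assumptions; adjust for your environment):

  mkdir /vmfs/volumes/datastore1/.locker       # persistent home for scratch and logs
  vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker
  reboot                                       # the new scratch location takes effect after a reboot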

But when I had the hard drive configured, the install went smoothly.

It is somewhat funny that I’m working with 48GB of RAM and only 60ish GB of storage at the moment, but from here I was able to copy over my OS installation ISOs (8GB over powerline networking made it an overnight job) and bring up my first VM on the new system.
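For the curious, that copy was nothing fancy. With SSH enabled on the host, a straight scp into a datastore works; the hostname and datastore path here are illustrative:

  scp *.iso root@esxi-c22:/vmfs/volumes/datastore1/iso/   # from the machine holding the ISOs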

So where do we go from here?

For now, the initial goal of confirming that vSphere will install neatly on a C22 M3 with the 9220-8i RAID controller has been accomplished.

Next up, adding some more storage (maybe SSD if I can find something that will work), maybe bumping the RAM up a bit, and doing something useful with the box. It only draws 80-100 watts under light use, so I’m okay with it being a 24/7 machine, and it’s quiet and in the garage so it shouldn’t scare the neighbors.

If you’re looking to turn up an older Cisco UCS server in your home lab, get familiar with the software center on Cisco.com, as well as the Cisco Community site. There’s lots of useful information out there, as well as on Reddit’s /r/homelab.

Have you rescued old UCS servers for your homelab? Share your thoughts or questions below, or join the conversation on Facebook and Twitter.


Alice in Storageland, or, a guest blog at MapR’s site

‘I could tell you my adventures—beginning from this morning,’ said Alice a little timidly: ‘but it’s no use going back to yesterday, because I was a different person then.’

–Lewis Carroll, “Alice’s Adventures in Wonderland”

I was invited to guest-blog on MapR’s site recently, in preparation for a webcast I’m doing next week with their VP of Partner Strategy, Bill Peterson. MapR is known for a highly technical blog, but I’ve learned and shown that even technical things can be a bit entertaining now and then.

So, after a turn of phrase that brought Lewis Carroll to mind, you can go see a couple of Alice references and, in a strange sort of way, how they fit my evolution into storage administration; not entirely unlike my evolution into business intelligence, big data, and most of the other stuff I’ve ever made my living at.

Visit the posting, “It’s no use going back to yesterday’s storage platform for tomorrow’s applications,” on MapR’s blog site, and if you’d like to come through the looking-glass with Bill and me on Wednesday, January 25, 2017, register with the links on that page.

As an aside, I promise that Bill is not the one mentioned in “The Rabbit Sends a Little Bill.”


Photo credit: Public domain image from 1890, per Wikimedia Commons

Disclosure: I work for Cisco; these blogs (rsts11 and rsts11travel) are independent and generally unrelated to my day job. However, in this case, the linked blog post as well as the referenced webinar are part of my day job. The humor is my own, for which I am solely responsible, and not at all sorry. 

Links updated March 20, 2017, due to MapR blog site maintenance.

Introducing (and Expanding) the Asigra Cloud Backup Connector Appliance (from the Asigra Partner Summit)

As some of you know, I’m starting a new job soon, working with software vendors to integrate their products with Cisco platforms. While it’s not my day job yet, I’ve been pondering some less-explored options to look into once I get settled in.

This week I’m at the Asigra Partner Summit in Toronto, with my blogger/technologist hats on. I was a bit surprised to run into a Cisco 2900 ISR (Integrated Services Router) with a UCS E-Series blade module in it, in the hands-on-labs area of the Summit. For me, at least, it’s the unicorn of Cisco UCS; I’ve seen an E-Series system twice now, and once was in the Cisco booth at Cisco Live this year.

What’s this Cisco UCS E-Series all about?

The Cisco UCS E-Series blade gives you a single Xeon E5 processor, three DIMM slots, one or two 2.5″ form factor drives, a PCIe slot, and the manageability of standalone UCS servers without the infrastructure overhead that would be cost- and space-prohibitive in a single- or dual-node B-Series or C-Series deployment. It does not integrate with UCSM, although you can run multiple blades in an ISR. It’s an intriguing platform for remote office/branch office (ROBO) environments, with the capability to integrate your routing/switching/firewall/network services with your utility server needs, including backup and recovery.

But what’s it doing at the Asigra Partner Summit?

As it turns out, this “Asigra Cloud Backup Connector Appliance” deploys the Asigra Cloud Backup software on the ISR and E-Series platform. It makes sense, and while I wish I’d thought of it sooner (or they’d thought of it later), it is a pretty cool idea.

You can use the appliance as a standalone device, running Asigra DS-Client and DS-System software to collect and store your backups on internal storage. You can also use it as an aggregator or data collector running DS-Client, which would send the data to your DS-System server elsewhere (perhaps a standalone server on-site, or a datacenter or hosted vault).

The one catch is that you’re a bit limited on internal storage. Cisco has certified 1TB SATA and 900GB 10K SAS drives for the E140DP blade, which means you’re capped at 2TB raw in the server. Asigra has incorporated deduplication in their backup software for over 20 years, so depending on your data you’ll probably see 8-10TB (or more) of effective capacity, but you may still hit some limits.
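Back of the envelope: 2TB raw at a conservative 4-5x dedupe-and-compression ratio works out to that 8-10TB of protected data, and a fleet of near-identical desktop images could stretch the ratio further.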

How do we get around this capacity limit?

If you want to use your Cloud Backup Connector Appliance as a standalone service, I see two possible paths, but each has its drawbacks.

First, since the drive bays are standard 2.5″ SATA form factor, you could install your own aftermarket 1.5TB or 2TB drives, doubling your capacity to 3-4TB raw. This means you’re managing your own disks, though, and it could complicate Cisco support (although if you’re tearing into the gear, you probably already know this and understand the risks).

Second, since you have a PCIe slot in the server, I could imagine either installing a PCIe flash card (such as the 3.2TB Fusion-io “Atomic” ioMemory SX300 card just announced last week) or a SATA/SAS storage controller connected to some sort of external array.

There are two downsides to this second option. Cisco has not announced certification of anything but a quad-port Gigabit Ethernet or single-port 10-Gigabit Ethernet controller in the PCIe slot (so you’re blazing your own trail if you swap them out; they should work, but…). And if you put storage in that slot, you can no longer expand networking, and will be limited to two internal (chassis) ports and two external (RJ45) ports for Gigabit Ethernet networking. Oh, and a third concern is that you lose the encapsulation factor, with your storage hanging off of the server rather than living inside it.

As I ponder the pitfalls of the PCIe expansion option, I find myself wishing for a dual-Ethernet/SAS card similar to what Sun used to sell for Ethernet and SCSI back in the day. I think HP had a single-port combo as well. Alas, both of those are antiquated, and PCI-X instead of PCIe. You could use FCoE from the 10-Gigabit Ethernet card if you have that infrastructure in place, but that might be beyond branch-office scale as well.

So what are you saying, Robert?

I may be overengineering this. I’ve done that before. Dual 10-Gigabit in my home lab, for example.

For a branch office with ~20 desktops of 500GB each, a pile of mobile devices, and a server or two, with judicious backup policies, you’re in good shape with the standard configuration. Remember, you’re deduplicating the OS and common files, compressing the backed-up data, and leaving the door open to expanding your Asigra deployment as your branch offices grow.

And if you choose to, you can run a hypervisor on your E-Series server, with Asigra DS-Client/DS-Server VMs as well as your own servers, to the limits of the hardware (6-core CPU, 48GB RAM). The system can boot from SD card, leaving the internal disk entirely for functional storage and VM data stores.

Where do we go from here?

Even with the 2TB raw disk limitation (which will probably be addressed eventually by Cisco), you have a very functional and featureful option for small offices, remote offices, and even distributed campus backup and recovery aggregation.

You get all the benefits of Asigra’s software solution, including agentless backup of servers and desktops, mobile device support, dedupe and compression, FIPS 140-2 certified encryption at rest and in flight, and Asigra’s R2A (Recovery and Restore Assurance) for ongoing validation of your backed-up data.

And you get the benefits of Cisco’s ISR and E-Series platforms for your networking services and server implementation. You can purchase pre-installed systems through an Asigra Service Provider, or if you already own an ISR with an E-Series server, your Service Provider can install and license Asigra software on your existing gear.

Disclosure:

I am attending the Asigra Partner Summit at Asigra’s invitation, as an independent blogger, and the company has paid for my travel and lodging to attend. I have not received any compensation for participating, nor has Asigra requested or required any particular coverage or content. Anything related on rsts11.com or in my Twitter feed reflects my own thoughts and my own motivation.

Also, while I am a Cisco UCS fanboy and soon to be a Cisco employee, any comments, observations, and opinions on UCS are my own, based on my personal experience as well as publicly available information from Cisco and other vendors. I do not speak for Cisco nor should any of my off-label ideas be taken to imply Cisco approval or even awareness of said musings.

How do you solve a problem like Invicta? PernixData and external high-performance cache

PernixData and unconventional flash caching

We spent a captivating two hours at PernixData in San Jose on Wednesday. For more general and detailed info on the conversations and related announcements, check out this post by PernixData’s Frank Denneman on their official blog, and also check out Duncan Epping’s post on Yellow Bricks.

At a very high and imprecise level: PernixData’s FVP came out last year to provide a caching layer (using flash storage, whether a PCIe card or SSD) injected at the vmkernel level on VMware hypervisors. One big development this week was the option to use RAM in place of (or in addition to) flash as the caching layer, but that is unrelated to my thoughts below.

One odd question arose during our conversation with Satyam Vaghani, CTO and co-founder of PernixData. Justin Warren, another delegate, asked the seemingly simple question of whether you could use external flash as cache for a cluster (or clusters) using PernixData’s FVP. Satyam’s answer was a somewhat surprising “yes.”

I thought (once Justin mentioned it) that this was an obvious idea, albeit somewhat niche. Having worked to get scheduled downtime for a hundred servers on several occasions in the past year, I could imagine why I might not want to (or be able to) shut down 100 hypervisor blades to install flash into them. Instead, I could put a pile of flash into one or more centrally accessible, high-speed, relatively low-latency (compared to spinning disk) hosts, or perhaps bring in something like Fusion-io’s Ion Accelerator platform.

I took a bit of ribbing from a couple of other delegates, who didn’t see any situation where this would be useful. You always have plenty of extra spare hypervisor capacity, and flash that can go into those servers, and time and human resources to handle the upgrades, right? If so, I mildly envy you.

So what’s this about Invicta?

Cisco’s UCS Invicta platform (the evolution of WHIPTAIL) is a flash block storage platform based on a Cisco UCS C240-M3S rackmount server with 24 consumer-grade MLC SSD drives. Today its official placement is as a standalone device, managed by Cisco UCS Director, serving FC to UCS servers. The party line is that using it with any other platform or infrastructure is off-label.

I’ve watched a couple of presentations on the Invicta play. It hasn’t yet been clear how Cisco sees it playing against similar products in the market (e.g., Fusion-io’s Ion Accelerator). When I asked on a couple of occasions during public presentations, the comparison was reduced to Fusion-io ioScale/ioDrive PCIe cards, which is neither a fair nor an applicable comparison; you wouldn’t compare Coho Data arrays to single-SSD enclosures. So for a month or so I’ve been stuck with the logical progression:

  1. Flash is fast
  2. ???
  3. Buy UCS and Invicta

Last month, word came out that Cisco was selling Invicta arrays against Pure Storage and EMC XtremIO for heterogeneous environments, which also seems similar to the market for Ion Accelerator. Maybe I called it in the air. Who knows? The platform finally makes sense in the present, though.

Two great tastes that taste great together?

Wednesday afternoon I started putting the pieces together. Today you can serve up an Invicta appliance as block storage, and probably (I haven’t validated this) access it from a host or hosts running PernixData’s FVP. You’re dealing with either FC or possibly iSCSI. It should serve as well as the competing flash appliances.

But when Cisco gets Invicta integrated into the UCS infrastructure, hopefully with native support for iSCSI and FCoE traffic, you’ll be talking about 10-Gigabit connections within the Fabric Interconnect for cache access. You’ll benefit from the built-in redundancy, virtual interface mapping and pinning, and control from UCS Manager/UCS Central. You’re keeping your cache within a rack or pod. And if you need to expand the cache, you won’t need to open up or take down any of your servers; you’d be able to put in another Invicta system, map it in, and use it just as the first one is being used.

If you’re not in a Cisco UCS environment, it looks like you could still use Invicta arrays, or Fusion-io, or other pure flash players (even something like a whitebox or channel partner Nexenta array, at least for proof-of-concept).

So where do we go from here?

The pure UCS integration for Invicta is obviously on the long-term roadmap, and hopefully the business units involved see the benefits of true integration at the FI level and move that forward soon.

I’m hoping to get my hands on a trial of FVP, one way or another, and possibly build a small flash appliance in my lab as well as putting some SSDs in my C6100 hypervisor boxes.

It would be interesting to compare the benefits of internal vs. external flash integration over a conventional 10GbE (non-converged) network. This could provide some insight into a mid-market bolt-on solution, and give some further enlightenment on when and why you might take this option over internal flash. I know that I won’t be able to put a PCIe flash card into my C6100s unless I give up 10GbE (one PCIe slot per server, darn), although with FVP’s newly-announced network compression, that might be viable.

What are your thoughts on external server-side cache? Do you think something like this would be useful in an environment you’ve worked with? Feel free to chime in on the comments section below.

This is a post related to Storage Field Day 5, the independent influencer event being held in Silicon Valley April 23-25, 2014. As a delegate to SFD5, I was chosen by the Tech Field Day community, and my travel and expenses are covered by Gestalt IT. I am not required to write about any sponsoring vendor, nor is my content reviewed. No compensation has been or will be received for this or other Tech Field Day posts. I am a Cisco Champion, but all Cisco information above is public knowledge and was received in public channels.