Looking back on InteropITX 2017 – the good, the bad, and the future

My fifth Interop conference is in the books now. Let’s take a look back and see how it turned out, and where I think it will go next year. See disclosures at the end if you’re into that sort of thing.

Ch-ch-ch-changes…

The event scaled down this year, moving down the strip to the MGM Grand Conference Center after several years at Mandalay Bay. With the introduction of a 30-member advisory board from industry and community to support the content tracks, Interop moved toward a stronger content focus than I’d perceived in past events.

The metrics provided by Meghan Reilly (Interop general manager) and Susan Fogarty (head of content) showed some interesting dynamics in this year’s attendance.

The most represented companies had 6-7 attendees each, as I recall from the opening callouts, with an average of about 2 people per company. More than half of the attendees were experiencing Interop for the first time, and nearly two thirds were management as opposed to practitioners.

The focus on IT leadership, from the keynotes to the leadership and professional development track for sessions, was definitely front and center.

How about that content?

Keynotes brought some of the big names and interesting stories to InteropITX. There wasn’t always a direct correlation, but there was some interesting context to be experienced, from Cisco’s Susie Wee talking about code and programmability in an application world (and getting the audience to do live API calls from their phones), to Kevin Mandia of FireEye talking about real-world security postures and threat landscapes. Andrew McAfee brought the acronym of the year to the stage, noting that decisions in companies are often made not by the right person, but by the HiPPO (the Highest Paid Person’s Opinion).

With five active tracks, there was content for everyone in the breakouts this year as well. Some tracks will need larger rooms next year (like the Packet Pushers Future Of Networking, which seemed to demand software-defined seating when I tried to get in) and others may need some heavier recruiting.

Attendees can access the presentations they missed (check your Interop emails), and some presentations may have been posted separately by the presenters (i.e. to Slideshare or their own web properties) for general access. Alas, or perhaps luckily, the sessions were not recorded, so if you haven’t heard Stephen Foskett’s storage joke, you’ll have to find him in person to experience it.

Panic at the Expo?

But the traditional draw of Interop, its expo floor (now called the Business Hall), was still noteworthy. With over a hundred exhibitors, from large IT organizations like VMware to startups and niche suppliers, you could see almost anything there (except wireless technology, as @wirelessnerd will tell you about here). American Express OPEN was there again as well, and while they couldn’t help with fixing Amex’s limited response to Chase Sapphire Reserve (read more about that on rsts11travel if you like), they were there to help business owners get charge card applications and swag processed.

The mega-theatre booths of past years were gone, and this year’s largest booths were 30×30 for VMware and Cylance among others.

Some of the big infrastructure names were scaled way back (like Cisco, with a 10×10 along with a Viptela 10×10 and a Meraki presence at the NBASE-T Alliance booth) or absent (like Dell, whose only presence was in an OEM appliance reference, and HPE, which seemed to be completely absent).

These two noteworthy changes to the expo scene were probably good for the ecosystem as a whole, with a caveat. With a more leveled playing field in terms of scale and scope, a wider range of exhibitors were able to get noticed, and it seemed that the booth theatre model and the predatory scanner tactics were mostly sidelined in favor of paying attention to people who were genuinely interested.

The caveat, and a definite downside to the loss of the big names, was that Interop was one of the last shows that gave you a chance to see what the “Monsters of IT Infrastructure” were doing, side by side, in a relatively neutral environment. For this year at least, VMworld is probably as close as you will get to the big picture.

Some of this may have to do with the conference ecosystem itself; Dell EMC World was the previous week in Las Vegas, with HPE Discover the first full week of June and Cisco Live US the last full week of June. These events often occupy speakers and exhibition staffs for weeks if not months beforehand, and the big players also had events like Strata Hadoop World in London to cope with as well. (See Stephen Foskett’s Enterprise IT Calendar for a sense of the schedule.)

Will the “Monsters of IT” come back next year?

I’d like to see them return, as fresh interest and opportunity is a good way to sustain growth, but I have a feeling the shift toward their owned-and-operated events, and away from the few (one?) remaining general IT infrastructure events, is likely to continue. They may just field speakers for the content tracks and assume that people will come to them anyway.

Meanwhile, smaller players will continue to grow. While they appear to just be nipping at the heels of the big players, they’re building a base and a reputation in the community, and they don’t need to beat the Cisco/Dell/HPE scale vendors to succeed. So maybe everyone wins.

But what about InteropNet?

The earliest memory I have of Interop, from my 2013 visit, was finding a pair of Nortel Passport (later Avaya ERS) 8600 routing switches in the InteropNet network. InteropNet was a demonstration platform that brought together a wide range of vendors across routing and switching, wireless, and software layers (monitoring and management in particular), and it was noticeably absent this year as well.

Part of this may be due to the smaller size of the Business Hall, but part is also due to the cost (time and money at least) of setting up and operating the multivendor environment. The absence of most of the enterprise network hardware vendors may also have played into it, although I don’t know if that was a cause or an effect. As fascinating as Extremo the Monkey was, I don’t think an all-Extreme Networks InteropNet would have really demonstrated interoperability that well.

I didn’t talk to any of the network vendors who weren’t there, but some of the software layer vendors were unabashedly disappointed by the loss of InteropNet. It’s one thing to show a video recording or demo over VPN back to a lab somewhere, but it’s a much more convincing story to show how your product or service would react to a real world environment that your prospective customer is a part of, at that moment.

There were a number of OEM/ODM type network (and server) manufacturers, as well as software-defined networking companies like Cumulus Networks and 128 Technology, but I think at least one big name would have to be there to make InteropNet work. Two or three would make it even better.

One interesting thought to make InteropNet more interesting and practical would be for a hardware refurbisher or reseller to bring in gear from the big names and set it up. Whether it’s ServerMonkey or another vendor of that class, or even a broad spectrum integrator like Redapt, it would be a good way to show a less-than-bleeding-edge production-grade environment that might appeal more to the half of the attendees whose companies are smaller than 1000 people. It would be a great opportunity for companies like that to showcase their consulting and services offerings as well.

Looking into the rsts11 crystal ball…

I don’t remember any mention of venue for next year, but I would guess some rooms and locations would be tweaked to optimize MGM Grand for InteropITX 2018. It’s very convenient for economical rooms and minimal leaving-the-hotel-complex requirements for attendees.

The new tracks structure worked, for the most part, although I expect adjustment and evolution in the content. Don’t be surprised if more hands-on sessions come around. Even though wireless tech was in short supply in the Business Hall, it was very popular in the breakouts.

I’m not expecting the Monsters of IT to have a resurgence in 2018, although it might be a good thing if they did. More security, management and automation, and some surprising new startups, are more likely to find their way into the Business Hall.

Where do we go from here?

I was asked at Interop for suggestions on how to make InteropNet more practical next year. I had some ideas above, but I could use some help. Do you feel that it was an unfortunate omission, or were you more inclined toward “I wouldn’t say I was missing it, Bob”?

We’ll have some more coverage in the next couple of weeks, including another update on NBase-T network technology (which made a much more substantial showing in terms of available-to-buy-today offerings this year), so stay tuned to our “interop” tag for the latest.

And of course, while it’s too early for me to apply for media credentials, it’s not too early to start thinking about InteropITX 2018.

Registration isn’t quite ready yet, but you can sign up to be notified (and get updates on submitting to present next year as well!). Click above or visit interop.com to join the notification list today!

Disclosure: I attend InteropITX as independent media, unrelated to and unaffiliated with my day job. Neither UBM/InteropITX nor any vendor covered have influence over or responsibility for any of my coverage.

What verse are we on? The fifth! Back at Interop ITX Las Vegas

I’m back in Las Vegas for my fourth time this MLife season, and my fifth time at Interop (now Interop ITX). And it’s a little bit different this year. [Disclosures below]

Quick takes:

The most obvious change is the venue; they announced at the end of Interop 2016 that the event would move to MGM Grand’s Conference Center, one Las Vegas block down the Strip from its previous home at Mandalay Bay. This means a smaller, more focused event, as MGM has a smaller facility than Mandalay, but it likely also means more affordable accommodations at the event hotels. (I would have enjoyed an extra Amex FHR stay at Delano, but Signature at MGM is good enough.)

Some staff changes have happened, particularly Meghan Reilly taking the reins of the event from Jennifer “JJ” Jessup, who moved on to a different company and role after last year’s event. JJ and the team encouraged me to stay involved with the event even after going to the Dark Side, and I’m grateful for her influence over the past few years. But I haven’t seen any fallout from the transition yet. The staff keeps things going, even with the traditional Monday hiccups on food and beverage logistics.

There also seems to be more of a focus on the educational content as opposed to the expo floor. Well over a dozen in-depth events will fill each of Monday and Tuesday, with prominent names from various corners of the IT ecosystem. The “Business Hall” is still there, and will have about a hundred exhibitors according to the Interop website, but people have noticed many of the big names of past years scaled back or passing on the event altogether. I’ve also seen some of my perennial favorites sit this one out.

I would say both of these items are good, for various reasons. While it was beneficial to have the Monsters of IT(tm) on the floor pitching their latest wares, I would expect this year to allow more of a focus on new, more agile, more adaptable players in the market. And with what seems (to me at least) to be a stronger focus on content vs exhibitors, the event becomes even more of a unique, substantially community-driven, substantially vendor-independent tech conference.

It’s true that if you want to see Cisco, Dell, and HP side by side, you’re mostly out of luck unless you find a third party proprietary conference (like VMworld or SAP Sapphire), but I expect that increased exposure to the new and rising players will have a positive effect on some of the larger companies. As each of the giants realizes they can’t differentiate based on their own true believers alone–and to be honest, that’s the core of each vendor’s own conference–perhaps they’ll come back to the table.

It’s also true that, if you are looking for more general IT and technology coverage than the USENIX events offer, especially around the business and culture side of IT, Interop ITX is pretty much the only game left in town.

Where do we go from here?

I’ll be heading into some content today and tomorrow, in between working on some other slides and writing. If you’re brave, follow me on Twitter at @gallifreyan for realtime observations, or if you’re attending Interop ITX, follow me on the app.

Disclosure: I attend Interop as independent media, on personal vacation time, not under the auspices of my day job. Tech Field Day generously brought me here my first two years, but for the past three years inclusive, I have attended on my own dime (although Interop does provide media attendees with lunch and coffee as well as full access to the conference). Any opinions in my coverage of the event are mine alone, and have not seen prior review by anyone involved in the event.

Further disclosure: autocorrect is being religious as I write this on my iPad. JJ’s last name became Jesus quite often, and apparently Apple wants Interop to have a stronger focus on convent. I’ll have nun of that, thank you.

Alice in Storageland, or, a guest blog at MapR’s site

‘I could tell you my adventures—beginning from this morning,’ said Alice a little timidly: ‘but it’s no use going back to yesterday, because I was a different person then.’

–Lewis Carroll, “Alice’s Adventures in Wonderland”

I was invited to guest-blog on MapR’s site recently, in preparation for a webcast I’m doing next week with their VP of Partner Strategy, Bill Peterson. MapR is known for a highly technical blog, but I’ve learned and shown that even technical things can be a bit entertaining now and then.

So, after a turn of phrase that brought Lewis Carroll to mind, you can go see a couple of Alice references and, in a strange sort of way, how they fit my evolution into storage administration–not entirely unlike my evolution into business intelligence and big data and most of the other stuff I’ve ever made my living at.

Visit the posting, “It’s no use going back to yesterday’s storage platform for tomorrow’s applications,” on MapR’s blog site, and if you’d like to come through the looking-glass with Bill and me on Wednesday, January 25, 2017, register with the links on that page.

As an aside, I promise that Bill is not the one mentioned in “The Rabbit Sends a Little Bill.”

 

Photo credit: Public domain image from 1890, per Wikimedia Commons

Disclosure: I work for Cisco; these blogs (rsts11 and rsts11travel) are independent and generally unrelated to my day job. However, in this case, the linked blog post as well as the referenced webinar are part of my day job. The humor is my own, for which I am solely responsible, and not at all sorry. 

Links updated March 20, 2017, due to MapR blog site maintenance.

What a long, strange year it’s been… Year one at Cisco

I’m writing this post on June 23, 2015, from a hotel in Boston. On June 23, 2014, I walked into building 9 on the Cisco campus in San Jose, taking my first job in almost 20 years with no hands-on sysadmin responsibilities. I’ll admit, it was terrifying in a way.

Tell me more, tell me more…

I had just come home a month earlier from Cisco Live 2014 in San Francisco. When I got on the train to go home that Thursday afternoon in May, I couldn’t have told you that it would be my last sponsored visit with Tech Field Day, or my last trade show as a regular customer. But when I woke up the next morning to a voicemail from my soon-to-be manager at Cisco, I made the decision promptly and prepared to hang up my oncall pager.

In the year between last June 23 and this June 23, I seem to have built a personal brand as a big data safari tour guide, complete with the safari hat you see in my profiles around the Internet. I’ve presented to internal sales engineering teams, my VP’s leadership team, partners and customers, vendor theatre audiences at Strata+Hadoop World and Cisco Live, as well as keynoting three Big Data Everywhere events. And in the highest honor so far, I was chosen to give a breakout session at Cisco Live earlier this month in San Diego.

I’ve brought context, proportion, and no small amount of humor to the topic of big data at Cisco, as well as sharing my experience with systems management and real-world Cisco UCS deployment, and while I’ve still got work to do, it’s gone fairly well so far. I’ve had customers say “oh, I’ve read your blog, we’d like to talk to you” and “if you’ve got the hat with you, could you put it on?” I’ve been told that VPs are noticing what I do in a positive sense. And once again I’m pretty well known for my coffee addiction as well.

There have been a couple of downsides… seeing as I’ve gone over to the dark side (and still can’t find the cookies), I can’t be a Tech Field Day delegate anymore. I also lost Cisco Champion (although I’m still a Champion Emeritus and a supporter of the program whenever I can be) and PernixPro (for reasons I’m not 100% sure of) status. And of course, the free Disney parks admissions went away very quickly. But the benefits of the change definitely outweigh the downsides; I still get invited to the TFD parties, and I can buy my park hopper passes when I need them.

So where do we go from here?

When this trip is done, I’ll be home for about two months, and will be focusing on some of the more hands-on technical stuff I’ve postponed, with the help of a couple of spare electrical circuits for my home lab. I have a couple of speaking engagements likely on the horizon, and probably some booth babe duty as well.

I’ll also be catching up on my Interop coverage from last month… I feel bad about neglecting a couple of those interviews but a couple of work obligations came up and ate most of May. I still have that citizen-analyst role to play from time to time, even though I don’t have mouse ears to take off to play that role anymore.

But for now, I want to thank everyone who’s made this year of incredible growth possible, from the bosses who (perhaps unintentionally) convinced me to prove that my message had an audience, to friends at Cisco who convinced me that there might be a place for me here, to the leaders and colleagues and partners who continue to remind me regularly that what I have to say matters and helps people both inside and outside Cisco.

I’ll leave you with what was an unexpected cap on the end of year one… I gave my “What could possibly go wrong? Tales from the Trenches of Big Data” talk a third time at Big Data Everywhere in Boston this morning. A reporter from CRN, the channel marketing website, was in the front row taping and taking notes… and my “plan for failure” message resonated enough to get mentioned on CRN today.

I may not be a vice president, but I’m still doing work I love, with people I admire and respect (and who often reciprocate), and who knows, I may end up in your neighborhood soon using 20th century pop lyrics and terrible puns to make sense of big data. See you real soon….

Cisco UCS for beginners – an end-user’s overview

Update: At the time I wrote this post (February 2014), I was not a Cisco employee. Since then (as of June 2014) I have gone to work for Cisco. This shouldn’t change anything about the post, and it is still just me and not an official publication, but since the original disclaimer below is not currently accurate, I thought I would clarify that.

I’ve been working on a series of posts about upgrading an integrated UCS environment, and realized about halfway through that a summary/overview would make sense as a starting point.

I recommend a refreshing beverage, as this is longer than I’d expected it to be.

I will note up front that this does not represent the official presentation of UCS by Cisco, and will have errors and omissions. It does reflect my understanding and positioning of the platform, based on two years and change of immersive experience. It is also focused on C-Series (rack-mount servers), not B-Series (blade servers and chassis), as I have been 100% in the C-series side of the platform, although I try to share a reasonable level of detail that’s applicable to both. And I expect it will provide a good starting point to understanding the Unified Computing System from Cisco.

Unified Computing System – Wait, What?
UCS, or Unified Computing System, is Cisco’s foray into the server market, with integrated network, storage, management, and of course server platforms. As a server admin primarily, I think of it as a utility computing platform, similar to the utility storage concept that 3PAR introduced in the early 2000s. You have a management infrastructure that simplifies structured deployment, monitoring, and operation of your servers, reducing the number of inflection points (when deployed properly) to coordinate firmware, provisioning, hardware maintenance, and server identity.
UCS includes two types of servers. The original rollout in 2009 included a blade server platform, generally known as B-Series or Chassis servers. I would guess that 9 out of 10 people you talk to about UCS think B-Series blades when you say UCS. Converged networking happens inside the blade chassis on an I/O Module, or IOM, also known as a Fabric Extender, or FEX. Local storage lives on the blades if needed, with up to 4 2.5″ drives available on full-width blades (2 drives on half-width), and a mezzanine card slot for a converged network adapter and/or a solid state device.
At some point along the way, it seems customers wanted more storage than a blade provides, and more I/O expansion capacity, so Cisco rolled out a rack-mount product line, the C-Series “pizza box” servers, which provided familiar PCI-e slots, no less than twice the drive bays (8 2.5″ or 4 3.5″ on the lowest storage density C200/C220 models), and an access convergence layer outside the server in the form of a Fabric Extender, or FEX, a Nexus 2200-series switch.
Both platforms are designed to go upstream to a Fabric Interconnect, or FI, in the form of a UCS 6100 or 6200 series device. The FI is the UCS environment’s egress point; all servers (blade and/or rack-mount) in a single UCS domain or “pod” will connect to each other and the outside world through the FI. Storage networking to FCoE and iSCSI storage devices happens at this level, as does conventional Ethernet uplink.

So far it sounds pretty normal. Isn’t it?

You can use Cisco UCS C-series rack-mount servers independently without a FI, in the same way you might use a Dell PowerEdge R-series or HP ProLiant DL-series server. They work in standalone mode with a robust integrated management controller (CIMC) that is analogous to iDRAC or iLO, and they present as industry standard servers. The fully-featured CIMC functionality is included in the server (no add-on licensing, even for virtual media), and there’s even a potent XML API for standalone management.
Many of the largest deployments of Cisco UCS C-Series servers work this way, and in the early days of my deployment, it was actually the only option (so we had standalone servers running bare metal OSes managed on a per-server basis). And for storage-dense environments, this method does have its charm.
The real power of the UCS environment, however, comes out when you put the servers under UCS Manager, or UCSM. This is what’s called an “integrated” environment, as opposed to a “standalone” environment where you manage through the individual CIMC on each server.
UCSM lives inside the Fabric Interconnect, and is at its core a database of system elements and states called the Data Management Engine, or DME. The DME uses Application Gateways to talk to the managed physical aspects of the system–server baseboard (think IPMI), supported controllers (CNAs and disk controllers), I/O subsystem (IOM/FEX), and the FI itself.
UCSM is both this management infrastructure, and the common Java GUI used to interact with its XML API. While many people do use the UCSM Java layer to monitor and manage the platform, you can use a CLI (via ssh to the FI), or write your own API clients. There are also standard offerings to use PowerShell on Windows or a Python shell on UNIX to manage via the API.
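To give a flavor of what "write your own API clients" means, here is a minimal sketch of the login-then-query pattern the XML API uses. The method and attribute names (aaaLogin, configResolveClass, classId) follow Cisco's documented UCS XML API, but the credentials, cookie, and responses below are canned illustrations so the sketch runs offline; a real client would POST these bodies to https://<fi>/nuova and parse the live responses. Verify details against your UCSM version.

```python
import xml.etree.ElementTree as ET

# Step 1: log in. A real client POSTs this body to https://<fi>/nuova;
# the response carries a session cookie used on every later call.
login_request = '<aaaLogin inName="admin" inPassword="password" />'

# A trimmed example of what UCSM might return (canned so this runs offline).
login_response = (
    '<aaaLogin cookie="" response="yes" '
    'outCookie="1234567890/abcdef" outVersion="2.2(1b)" />'
)
cookie = ET.fromstring(login_response).get("outCookie")

# Step 2: resolve every object of a class. classId values come from the
# UCS object model; computeRackUnit covers C-Series rack servers.
query_request = (
    f'<configResolveClass cookie="{cookie}" '
    'inHierarchical="false" classId="computeRackUnit" />'
)

# Again, a canned example response instead of a live FI.
query_response = """
<configResolveClass cookie="1234567890/abcdef" response="yes" classId="computeRackUnit">
  <outConfigs>
    <computeRackUnit dn="sys/rack-unit-1" model="UCSC-C220-M3S" totalMemory="98304"/>
    <computeRackUnit dn="sys/rack-unit-2" model="UCSC-C240-M3L" totalMemory="262144"/>
  </outConfigs>
</configResolveClass>
"""
servers = [
    (unit.get("dn"), unit.get("model"))
    for unit in ET.fromstring(query_response).iter("computeRackUnit")
]
```

The same request/response shapes are what the PowerShell and Python tool offerings wrap for you; the value of rolling your own is usually scripted inventory and bulk changes.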

What’s this profile stuff all about?

A key part of UCS’s benefit are the concepts of policies, profiles, and templates.
A policy is a standard definition of one aspect of a server. For example, there are BIOS policies (defining how the BIOS is set up, including C-state handling and power management), firmware policies (setting a package of firmware levels for system BIOS, CIMC, and supported I/O controllers), and disk configuration policies (providing initial RAID configuration for storage).
A Service Profile (SP) contains all the policies and data points that define a “server” in the role sense. If you remember Sun servers with the configuration smart card, that card (when implemented) would contain the profile for that server. In UCS-land, this would include BIOS, firmware, disk configuration, network identity (MAC addresses, VLANs, WWNs, etc) and other specific information that gives a server instance its identity. If you don’t have local storage, and you had to swap out a server for another piece of bare metal and have it come up as the previous server, the profile has all the information that makes that happen.
A Service Profile Template provides a pattern for creating service profiles as needed, providing consistency across server provisioning and redeployment.
There are also templates for things like network interfaces (vNIC, vHBA, and iSCSI templates) which become elements of a Service Profile or a SP Template. You might have a basic profile that covers, say, your web server design. You could have separate SP templates for Production (prod VLANs, SAN configuration) and Test (QA VLANs, local disk boot), sharing the same base hardware policies.
And there are server pools, which define a class of servers based on various characteristics (i.e. all 96GB dual socket servers, or all 1U servers with 8 local disks, or all servers you manually add to the pool). You can then associate that pool with a SP template, so that when a matching server is discovered in your UCS environment, it gets assigned to an appropriate template and can be automatically provisioned on power-up.
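The relationship between policies, templates, profiles, and pools is easier to see in miniature. This is a toy model, not real UCSM objects; every name here is illustrative, and the pool predicate stands in for the qualification rules UCSM applies when it discovers hardware.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policies:
    """A bundle of standard definitions, shared across templates."""
    bios: str
    firmware: str
    disk: str

@dataclass
class ServiceProfileTemplate:
    name: str
    policies: Policies
    vlan: str

    def instantiate(self, instance: int) -> dict:
        """Stamp out a concrete service profile from this template."""
        return {"name": f"{self.name}-{instance}",
                "policies": self.policies,
                "vlan": self.vlan}

# Shared base policies, with separate Production and Test templates
# (the web server example from the post).
base = Policies(bios="perf-bios", firmware="2.2.1b", disk="raid1-boot")
prod = ServiceProfileTemplate("web-prod", base, vlan="prod-100")
test = ServiceProfileTemplate("web-test", base, vlan="qa-200")

# A server pool: a predicate deciding which discovered servers qualify
# (e.g. all 96GB dual-socket servers).
def in_pool(server: dict) -> bool:
    return server["memory_gb"] == 96 and server["sockets"] == 2

discovered = {"dn": "sys/rack-unit-3", "memory_gb": 96, "sockets": 2}
if in_pool(discovered):
    profile = prod.instantiate(1)  # auto-provision on power-up
```

The point of the design is visible even at this scale: the hardware-independent identity lives in the template, so replacing the physical box only changes which server the profile is associated with.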
There are a lot more features you can take advantage of, from logging and alerting to call-home support features, to almost-one-click firmware upgrades across a domain, but that’s beyond the scope of this post.

I hear you can only have 160 servers though.

This is true, in a sense, much like you can only have 4 people in a car (but you can have multiple cars). A single UCS Manager can handle 160 servers between B-Series and C-Series. This is probably a dense five datacenter racks’ worth of servers, or 20 blade chassis, or some mix thereof (i.e. 10 chassis of 8 B-Series blades each, plus 80 rack-mount C-Series servers). But that’s not as bad a limitation as some vendors make it out to be.
You can address the XML API on multiple UCS Manager instances. A management tool might check inventory on all of your UCSM domains to find the element (server, policy, profile) that you want to manage, and then act on it by talking to that specific UCSM domain. Devops powers activate? This will get confusing if you create policies/profiles/templates at different times (i.e. while you’re waiting for your tools team to write a management tool).
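The "check inventory on all of your UCSM domains, then act on the right one" idea above can be sketched in a few lines. The inventories here are canned dicts standing in for what a real tool would pull from each domain's XML API; the domain names and distinguished names are made up.

```python
from typing import Optional

# Canned per-domain inventories; a real tool would populate these by
# querying each UCSM instance's XML API.
domains = {
    "ucsm-sjc": {"sys/rack-unit-1", "sys/rack-unit-2"},
    "ucsm-rtp": {"sys/chassis-1/blade-3"},
}

def find_domain(dn: str) -> Optional[str]:
    """Return the UCSM domain that holds the given distinguished name."""
    for domain, inventory in domains.items():
        if dn in inventory:
            return domain
    return None

# Locate the element first, then direct the management action at that domain.
target = find_domain("sys/chassis-1/blade-3")
```

This is also where the confusion mentioned above bites: if the "same" policy or template was created separately in each domain at different times, a cross-domain tool has to reconcile them by name and hope the contents match.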

But there’s something easier.

UCS Central is a Cisco-provided layer above the UCSM instances, that provides you with central management of all aspects of the UCS Manager across multiple domains. It’s a “write once, apply everywhere” model of policies and templates, that allows central monitoring and management of your environment across domains and datacenters.
UCS Central is an add-on product that may incur additional charges, especially if you have more than five UCS domains to manage. Support is not included with the base product. But when you get anywhere close to that scale, it may well be worth it. Oh, and in case you didn’t see this coming, there’s an XML API to UCS Central as well.

I don’t have a six figure budget to try this out. What can I do?

I’m glad you asked. Cisco makes a free “Platform Emulator” available. It’s a VM commonly referred to as UCSPE, downloadable for free from Cisco and run under the virtualization platform of your choice (including VMware Player, Fusion, Workstation, or others). 
Chris Wahl has a video demonstrating the download process and a series introducing the Cisco UCS Platform Emulator here on Youtube. You can get the actual downloads at Cisco’s Communities Site and bring the emulator up on your own computer.
The UCSPE should let you get a feel for how UCSM and server management works, and as of the 2.2 release lets you try out firmware updates as well (with some slightly dehydrated versions of the firmware packages).
It obviously won’t let you run OSes on the emulated servers, and it’s not a replacement for an actual UCS server environment, but it will get you started.
If you have access to a real UCS environment, you can back up that physical environment’s config and load it into the UCSPE system. This will let you experiment with real world configurations (including scripting/tools development) without taking your production environment down.

Is Cisco UCS the right solution to everything?


Grumpy cat says “No.” And I just heard my Cisco friends’ hearts drop. But hear me out, folks.
To be completely honest, the sweet spot for UCS is a utility computing design. If you have standard server designs that are fairly homogeneous, this is a very good fit. If your environment is based around some combination of Ethernet, iSCSI, and FCoE, you’re covered. If your snowflake servers are running under a standard virtualization platform, you’re probably covered as well.
On the other hand, if you build a 12GB server here, a 27.5GB server there, a 66GB server with FCoTR and a USB ballerina over there, it’s not a good fit. If you really need to run 32-bit operating systems on bare metal, you’re also going to run up against some challenges. Official driver support is limited to 21st Century 64-bit operating systems.
If you have a requirement for enormous local storage (more than, say, 24-48TB of disk), there are some better choices as well; the largest currently available UCS server holds either 12 3.5″ or 24 2.5″ drives. If you need a wide range of varied network and storage adapters beyond what’s supported under UCS (direct attach fibre channel, OC3/OC12 cards, modems, etc.), you might consider another platform that’s more generic.
Service profiles let you replace a server without reconfiguring your environment, but if every server is different, you’re not going to be able to use service profiles effectively. You can, of course, run UCS C-Series systems in standalone mode, with bare metal OSes or hypervisors, and they’ll work fine (with the 32-bit OS caveat above), and many companies do this in substantial volume, but you will lose some (not all) of the differentiation between Cisco UCS and other platforms.

Disclaimers:

I’ve worked with Cisco UCS as part of my day job for about two years. I don’t work for Cisco, and I’m not posting this as a representative of my employer or of Cisco. Any errors, omissions, confusion, or mislaid plans of mice and men gone astray are mine alone.

More details:

Images other than Grumpy Cat above borrowed under belief of fair use from the Cisco UCS Manager Architecture paper, the Understanding Cisco Unified Computing System Service Profiles paper, and the fine work of Chris Wahl of WahlNetwork.com.