How do you solve a problem like Invicta? PernixData and external high performance cache

PernixData and unconventional flash caching

We spent a captivating two hours at PernixData in San Jose on Wednesday. For more general and detailed info on the conversations and related announcements, check out this post by PernixData’s Frank Denneman on their official blog, as well as Duncan Epping’s post on Yellow Bricks.

At a very high (and admittedly imprecise) level, PernixData’s FVP came out last year to provide a caching layer, using flash storage (whether PCIe or SSD), injected at the vmkernel level on VMware hypervisors. One big development this week was the option to use RAM in place of (or in addition to) flash as the caching layer, but that’s unrelated to my thoughts below.
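
To make “a caching layer injected at the vmkernel level” a bit more concrete, here is a minimal conceptual sketch in Python of a host-side read/write-through cache. To be clear, this is not PernixData’s code or API; the class and method names are invented for illustration, and FVP does this work inside the hypervisor’s I/O path with far more sophistication (clustered acceleration, write-back with replication, and so on).

# Conceptual sketch only -- not FVP's implementation. A fast local tier
# (flash or RAM) absorbs reads it has seen before; misses and writes still
# touch the backing datastore, which is where the array latency lives.

class DictDatastore:
    """Toy backing store standing in for an FC/iSCSI LUN."""
    def __init__(self):
        self.blocks = {}
    def read_block(self, addr):
        return self.blocks.get(addr, b"\x00" * 4096)
    def write_block(self, addr, data):
        self.blocks[addr] = data

class HostSideCache:
    def __init__(self, backing_datastore, capacity_blocks):
        self.fast = {}                      # stand-in for a local flash/RAM tier
        self.backing = backing_datastore    # anything with read_block/write_block
        self.capacity = capacity_blocks

    def read(self, block_addr):
        if block_addr in self.fast:                  # hit: local-device latency
            return self.fast[block_addr]
        data = self.backing.read_block(block_addr)   # miss: pay the array's latency
        self._admit(block_addr, data)
        return data

    def write(self, block_addr, data):
        # Write-through: acknowledge only after the backing store has the data,
        # but keep a local copy so subsequent reads of this block are fast.
        self.backing.write_block(block_addr, data)
        self._admit(block_addr, data)

    def _admit(self, block_addr, data):
        if len(self.fast) >= self.capacity:
            self.fast.pop(next(iter(self.fast)))     # naive eviction for brevity
        self.fast[block_addr] = data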

One odd question arose during our conversation with Satyam Vaghani, CTO and co-founder of PernixData. Justin Warren, another delegate, asked the seemingly simple question of whether you could use external flash as cache for a cluster (or clusters) using PernixData’s FVP. Satyam’s answer was a somewhat surprising “yes.”

Once Justin mentioned it, this struck me as an obvious idea, albeit a somewhat niche one. Having worked to get scheduled downtime for a hundred servers on several occasions in the past year, I could imagine why I might not want to (or be able to) shut down 100 hypervisor blades to install flash into them. If I could instead put a pile of flash into one or more centrally accessible, high-speed, relatively low-latency (compared to spinning disk) hosts, or perhaps bring in something like Fusion-io’s Ion Accelerator platform, I could get the acceleration benefit without ever cracking open the hypervisors themselves.

I took a bit of ribbing from a couple of other delegates who didn’t see any situation where this would be useful. You always have plenty of spare hypervisor capacity, flash that can go into those servers, and the time and human resources to handle the upgrades, right? If so, I mildly envy you.

So what’s this about Invicta?

Cisco’s UCS Invicta platform (the evolution of WHIPTAIL) is a flash block storage appliance based on a Cisco UCS C240-M3S rackmount server with 24 consumer-grade MLC SSDs. Today its official placement is as a standalone device, managed by Cisco UCS Director, serving FC block storage to UCS servers. The party line is that using it with any other platform or infrastructure is off-label.

I’ve watched a couple of presentations on the Invicta play, and it hasn’t yet been clear how Cisco sees it competing against similar products in the market (e.g. Fusion-io’s Ion Accelerator). When I asked on a couple of occasions during public presentations, the comparison was reduced to Fusion-io ioScale/ioDrive PCIe cards, which is neither a fair nor an applicable comparison; you wouldn’t compare Coho Data arrays to single-SSD enclosures. So for a month or so I’ve been stuck with the logical progression:

  1. Flash is fast
  2. ???
  3. Buy UCS and Invicta

Last month, word came out that Cisco was selling Invicta arrays against Pure Storage and EMC XtremIO for heterogeneous environments, which also sounds a lot like the market for Ion Accelerator. Maybe I called it in the air; who knows? Either way, the platform’s positioning finally made sense.

Two great tastes that taste great together?

Wednesday afternoon I started putting the pieces together. Today you can serve up an Invicta appliance as block storage and probably (I haven’t validated this) access it from a host or hosts running PernixData’s FVP. You’d be dealing with either FC or possibly iSCSI, and it should serve that role about as well as the competing flash appliances do.

But when Cisco gets Invicta integrated into the UCS infrastructure, hopefully with native support for iSCSI and FCoE traffic, you’ll be talking about 10 gigabit connections within the Fabric Interconnect for cache access. You’ll benefit from the built-in redundancy, virtual interface mapping and pinning, and control from UCS Manager/UCS Central, and you’ll keep your cache within a rack or pod. If you need to expand the cache, you won’t need to open up any of your servers or take them down; you’d just put another Invicta system in, map it in, and use it the same way the first one is being used.

If you’re not in a Cisco UCS environment, it looks like you could still use Invicta arrays, or Fusion-io, or other pure flash players (even something like a whitebox or channel partner Nexenta array, at least for proof-of-concept).

So where do we go from here?

The pure UCS integration for Invicta is obviously on the long-term roadmap, and hopefully the business units involved see the benefits of true integration at the FI level and move that forward soon.

I’m hoping to get my hands on a trial of FVP, one way or another, and possibly build a small flash appliance in my lab as well as putting some SSDs in my C6100 hypervisor boxes.

It would be interesting to compare the benefits of internal vs. external flash integration over a conventional 10GbE (non-converged) network. That could provide some insight into a mid-market bolt-on solution and shed further light on when and why you might take this option over internal flash. I know I won’t be able to put a PCIe flash card into my C6100s unless I give up 10GbE (one PCIe slot per server, darn), although with FVP’s newly announced network compression, that trade-off might be viable.
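
As a rough way to frame that comparison before I have real numbers, here’s a back-of-envelope sketch. Every figure in it is an assumption (a typical-ish local flash read latency, a guess at the round-trip cost of a 10GbE hop, and the wire time to move a block at 10 Gb/s); the point is only to show the shape of the math I’d want to validate in the lab.

# Back-of-envelope only: every number below is an assumed placeholder,
# not a measurement, meant to frame the internal-vs-external comparison.

LOCAL_FLASH_READ_US = 100.0   # assumed: rough MLC SSD / PCIe flash read latency
NETWORK_HOP_RTT_US = 75.0     # assumed: 10GbE switch hop plus host stack round trip
LINK_GBPS = 10.0

def wire_time_us(block_bytes, gbps=LINK_GBPS):
    """Serialization time for one block on the wire, in microseconds."""
    return block_bytes * 8 / (gbps * 1e3)

for block in (4096, 65536):
    external = LOCAL_FLASH_READ_US + NETWORK_HOP_RTT_US + wire_time_us(block)
    print(f"{block // 1024:>3} KiB read: local ~{LOCAL_FLASH_READ_US:.0f} us, "
          f"external ~{external:.0f} us "
          f"(+{external - LOCAL_FLASH_READ_US:.0f} us for the network hop)")

If those guesses are even roughly right, the network hop adds most of another local-read’s worth of latency to a small-block read while still staying well under spinning-disk territory, which is exactly the trade-off I’d like to measure.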

What are your thoughts on external server-side cache? Do you think something like this would be useful in an environment you’ve worked with? Feel free to chime in on the comments section below.

This is a post related to Storage Field Day 5, the independent influencer event being held in Silicon Valley April 23-25, 2014. As a delegate to SFD5, I was chosen by the Tech Field Day community, and my travel and expenses are covered by Gestalt IT. I am not required to write about any sponsoring vendor, nor is my content reviewed. No compensation has been or will be received for this or any other Tech Field Day post. I am a Cisco Champion, but all Cisco information in this post is public knowledge and was received through public channels.

7 thoughts on “How do you solve a problem like Invicta? PernixData and external high performance cache”

  1. It sounds interesting. I viewed that PernixData presentation, and I agree with Satyam’s statement that you cannot dispute physics. There is no application out there that depends on raw IOPS rather than low latency. A flash appliance, even in the tightly integrated UCS stack, is still a hop away from the compute. You cannot get better than having flash in the server.
    For situations where it’s impossible to get that flash in the server, maybe a flash array coupled with FVP could make sense. It was pretty interesting that Satyam showed FVP’s ability to intelligently compress network I/O resulting in IOPS and latency similar to a local-only hit.
    It does seem like a large price to pay, though, for an all-flash array to be used as FVP cache rather than placing SSD or PCIe flash in a server.

    • Thanks for the feedback.

      I could see cases where you’re doing large/heavy transactions and the latency is only an initial cost, and I could also see cases where you need acceleration for a transitional period. I totally admit it’s expensive, but so is retrofitting 100 production servers in an already-overwhelmed environment, and that’s not even taking into account the possibility that you’re already using a network mezzanine card or that your one PCIe slot is taken (in a blade or a 1U server).

      I’m probably going to do some very low grade comparisons in my lab in the next couple of months, with some hacky imitations of AFAs.
