How do you solve a problem like Invicta? PernixData and external high performance cache

PernixData and unconventional flash caching

We spent a captivating two hours at PernixData in San Jose Wednesday. For more general and detailed info on the conversations and related announcements, check out this post by PernixData’s Frank Denneman on their official blog, and also check out Duncan Epping’s post on Yellow Bricks.

At a very high and imprecise level, PernixData’s FVP came out last year to provide a caching layer (using flash storage, whether PCI-E or SSD) injected at the vmkernel level on VMware hypervisors. One big development this week was the option to use RAM in place of (or in addition to) flash as a caching layer, but this is unrelated to my thoughts below.

One odd question arose during our conversation with Satyam Vaghani, CTO and co-founder of PernixData. Justin Warren, another delegate, asked the seemingly simple question of whether you could use external flash as cache for a cluster (or clusters) using PernixData’s FVP. Satyam’s answer was a somewhat surprising “yes.”

I thought (once Justin mentioned it) that this was an obvious idea, albeit somewhat niche. Having worked to get scheduled downtime for a hundred servers on several occasions in the past year, I could imagine why I might not want to (or be able to) shut down 100 hypervisor blades to install flash into them. Instead, I could put a pile of flash into one or more centrally accessible, high-speed, relatively low-latency (compared to spinning disk) hosts, or perhaps bring in something like Fusion-io’s Ion Accelerator platform.

I took a bit of ribbing from a couple of other delegates, who didn’t see any situation where this would be useful. You always have plenty of extra spare hypervisor capacity, and flash that can go into those servers, and time and human resources to handle the upgrades, right? If so, I mildly envy you.

So what’s this about Invicta?

Cisco’s UCS Invicta platform (the evolution of WHIPTAIL) is a flash block storage platform based on a Cisco UCS C240-M3S rackmount server with 24 consumer-grade MLC SSD drives. Today its official placement is as a standalone device, managed by Cisco UCS Director, serving FC to UCS servers. The party line is that using it with any other platform or infrastructure is off-label.

I’ve watched a couple of presentations on the Invicta play. It hasn’t yet been clear how Cisco sees it playing against similar products in the market (e.g. Fusion-io’s Ion Accelerator). When I asked on a couple of occasions in public presentations, the comparison was reduced to Fusion-io ioScale/ioDrive PCIe cards, which is neither a fair, nor an applicable, comparison. You wouldn’t compare Coho Data arrays to single-SSD enclosures. So for a month or so I’ve been stuck with the logical progression:

  1. Flash is fast
  2. ???
  3. Buy UCS and Invicta

Last month, word came out that Cisco was selling Invicta arrays against Pure Storage and EMC XtremIO, for heterogeneous environments, which also seems similar to the market for Ion Accelerator. Maybe I called it in the air. Who knows? The platform finally made sense in the present though.

Two great tastes that taste great together?

Wednesday afternoon I started putting the pieces together. Today you can serve up an Invicta appliance as block storage, and probably (I haven’t validated this) access it from a host or hosts running PernixData’s FVP. You’re either dealing with FC or possibly iSCSI. It will serve as well as the competing flash appliances.

But when Cisco gets Invicta integrated into the UCS infrastructure, hopefully with native support for iSCSI and FCoE traffic, you’ll be talking about 10 gigabit connections within the Fabric Interconnect for cache access. You’ll be benefiting from the built-in redundancy, virtual interface mapping and pinning, and control from UCS Manager/UCS Central. You’re keeping your cache within a rack or pod. And if you need to expand the cache you won’t need to open up any of your servers or take them down. You’d be able to put another Invicta system in, map it in, and use it just as the first one is being used.

If you’re not in a Cisco UCS environment, it looks like you could still use Invicta arrays, or Fusion-io, or other pure flash players (even something like a whitebox or channel partner Nexenta array, at least for proof-of-concept).

So where do we go from here?

The pure UCS integration for Invicta is obviously on the long-term roadmap, and hopefully the business units involved see the benefits of true integration at the FI level and move that forward soon.

I’m hoping to get my hands on a trial of FVP, one way or another, and possibly build a small flash appliance in my lab as well as putting some SSDs in my C6100 hypervisor boxes.

It would be interesting to compare the benefits of the internal vs external flash integration, with a conventional 10GBE (non-converged) network. This could provide some insight into a mid-market bolt-on solution, and give some further enlightenment on when and why you might take this option over internal flash. I know that I won’t be able to put a PCIe flash card into my C6100s, unless I give up 10GBE (one PCIe slot per server, darn). Although with FVP’s newly-announced network compression, that might be viable.

What are your thoughts on external server-side cache? Do you think something like this would be useful in an environment you’ve worked with? Feel free to chime in on the comments section below.

This is a post related to Storage Field Day 5, the independent influencer event being held in Silicon Valley April 23-25, 2014. As a delegate to SFD5, I was chosen by the Tech Field Day community and my travel and expenses are covered by Gestalt IT. I am not required to write about any sponsoring vendor, nor is my content reviewed. No compensation has been or will be received for this or any other Tech Field Day post. I am a Cisco Champion, but all Cisco information in this post is public knowledge and was received through public channels.

Taking POHO to Interop 2014 – Three Roads To Take

I’m looking forward to returning to Interop Las Vegas in under two weeks. Where has the winter gone? I know, I’m in Northern California, I can’t complain much about the weather.


Click above for conference details, or visit this link for a free expo and keynote pass.

There are three aspects of Interop that I’m looking forward to.

First, I’m looking forward to meeting some Twitterverse friends, and maybe a Twitter-averse friend or two, as well as contacts I’ve made at my conferences last year. I will be dropping in on the Interop HQ and Social Media Command Center to see how the UBM team handles social media on-site. As my friends at @CiscoLive and VMworld know, I find the social media aspect of a conference to be as important as the formal content. Networking and getting advice and answers as you go makes the event more efficient and useful, and it’s always good to say hi to the folks who make it happen. I also hear there are collectible pins, and those of you who know where I work know we’re known for our pins, among other things.

Watch the hashtags #Interop and #CloudConnect and follow @interop for the latest news from the events.

Second, I’ll be trying to take a bootcamp or two at the Cloud Connect Summit and come up to speed on some technologies that are newish to me. There’s an AWS Boot Camp presented by Bernard Golden (alas, it’s not hands-on, so I’m not sure I’d call it a boot camp), and an OpenStack Boot Camp that looks promising as well. These may end up just being focus opportunities, or I may change my plans, but they look interesting. And as a guy who’s mostly running bare metal big data on a daily basis, it’ll be good to get some exposure to the virtual side of things outside of VMware.

Third, while I’m attending with my press hat and not my mouse ears, I do work in a sizable technology environment, so I’ll be checking out some larger technology options that may not find their way into my lab but may find their way into my day job.

Highlights in the enterprise space for me (alphabetically): Arista Networks, Cisco, Juniper Networks.

Fourth, I’ll be joining the Tech Field Day Roundtables again this year. HP Networking will be presenting at this event, and they tie in with POHO below as well. Also presenting will be a company rather dear to my heart in a strange way: Avaya. At the turn of the century, I worked for the Ethernet Products Group (or whatever we were called that quarter) at Nortel Networks, and my team’s flagship product was the Nortel Passport 8600 routing switch. Imagine my surprise when I ran across a slightly different color of 8600 (with much newer line cards) at the Interop network last year, now known as the Avaya Ethernet Routing Switch 8600. A couple of my Rapid City/Bay Networks/Nortel Networks coworkers are still at Avaya, or were until fairly recently… so it’s sort of a family thing for me.

If you can’t make it to the roundtables, we usually live-stream the presentations, or have them posted afterward. Check them out, and track #RILV14 and #TechFieldDay on Twitter for the latest news.

And last, but not least… there’s POHO. The Psycho Overkill Home Office, a gateway to big business functionality on a small business budget, is a topic near and dear to my blog, my budget, and my two home labs. I will be stopping by to speak with several vendors at Interop whose products intersect with the burgeoning (and occasionally bludgeoning) home lab market and the smaller side of the SMB world (I’m taking to calling it the one-comma-budget side of SMB).

Some of the POHO highlights that I’m seeing so far (in alphabetic order) include Chenbro Micom, Cradlepoint, Linksys (now part of Belkin), Memphis Electronic (think 16GB SODIMMs), Monoprice, Opengear, Shuttle Computer Group, Synology, and Xi3.

There are a lot of other names on the exhibitor list who will appeal to anyone, and if you’re going to be there with an exhibitor who you think would be of interest to my POHO audience, feel free to get in touch (I’m on the media list, or contact me through this blog).

And if you noticed that I went down five roads instead of three, give yourself a pat on the back. I should’ve seen that coming.

Is Licensing Sexy? Asigra Might Think So, And So Might You

We were pleased to welcome Eran Farajun and Asigra back to Tech Field Day with a presentation at the VMworld US 2013 Tech Field Day Roundtables. I’ve also seen them present a differently-focused talk with live demo at Storage Field Day 2 in November 2012.

Disclosure: As a delegate to the Tech Field Day Roundtables at VMworld US 2013, I received support for my attendance at VMworld US. I received no compensation from sponsors of the Roundtables, nor Tech Field Day/Gestalt IT, nor were they promised any coverage in return for my attendance. All comments and opinions are my own thoughts and of my own motivation.

Asigra Who?

Asigra has exclusively developed backup and recovery technology for over 25 years. Let that sink in for a moment. Most of the companies I’ve worked for haven’t been in business for 25 years, and most companies change horses if not streams along the way.

But Asigra continues to grow, and evolve their products, a quarter of a century into the journey. They introduced agentless backup, deduplication (in 1993), FIPS140-2 certification in a cloud backup platform, and a number of other firsts in the market.

One reason you may never have heard of Asigra is that they don’t sell direct to the end user. They work through their service provider and partner network to aggregate access and expertise close to the end user. Of course the company backs their products and their partners, but you get the value add of the partner’s network of support personnel as well. And you might never know it was Asigra under the hood.

So what’s Asigra’s take on licensing?

In 1992, Asigra moved to a capacity-based licensing model, one that many of us are familiar with today. You pay a license fee one way or another based on the amount of data that is pushed to the backup infrastructure. This has been seen in various flavors, sometimes volume-based, sometimes slot-based or device capped. Restores are effectively free, but it’s likely that you rarely use them.

Think in terms of PTO or Vacation days (backup) and Sick Days (recovery). You probably have a certain amount of each, and while PTO may roll over if you don’t use it, those 19 sick days you didn’t use last year went away. Imagine if you could get something for the recovery days you didn’t have to use. Asigra thought about this (although not with the same analogy) and made it happen.

Introducing Recovery License Model

So in 2013, Asigra changed to what they call RLM, or Recovery License Model. You pay part of your licensing for backups, and part for recoveries. There are safety valves on both extremes, so that if you do one backup and have to restore it all shortly thereafter, you’re not screwed (not by licensing, at least–but have a chat with your server/software vendor). And if you have a perfect environment and never need to restore, Asigra and your reseller/partner can still make a living.

Your licensing costs are initially figured on the past 6 months’ deduped restore capacity. (After the first two 6-month periods, you are apportioned based on the past 12 months.) If you restored 25% of your backups, you pay 50 cents per gigabyte per month (list price). If you restored 5% or less of your backups, you’re paying 17 cents per gigabyte per month.

You don’t get fined for failed backups of any sort. Hardware failure, software failure, or some combination–it doesn’t count against you. You also get a waiver for the largest recovery event–so if your storage infrastructure melts into the ground like a big ol’ glowing gopher, you can focus on recovering to new hardware, not appeasing your finance department.

For those of you testing your backup/restore for disaster recovery purposes (that’s all of you, right?), you can schedule a DR drill at 7 cents per gigabyte per month for that recovery’s usage. Once again, it’s deduped capacity, so backing up 1000 VDI desktops doesn’t mean 1000 times 3GB of Windows binaries/DLLs. And your drill’s data expires at the end of the 6 month window, so don’t count on fire drills as permanent backups.
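To make the tiers concrete, here’s a back-of-the-envelope sketch of the RLM math using only the list prices quoted above (17 cents per GB/month at 5% or less restored, 50 cents at 25%, 7 cents for DR drills). The tiers between 5% and 25% weren’t spelled out in the presentation, so the interpolation in the middle is purely my illustration, not Asigra’s actual rate card.

```python
# Sketch of Asigra's Recovery License Model (RLM) arithmetic, based on the
# list-price points mentioned in the talk. Intermediate tiers are NOT
# published here; the linear interpolation below is an assumption.

def monthly_rlm_cost(deduped_gb, restore_ratio):
    """Estimate monthly license cost (USD) for a deduped capacity and a
    trailing restore ratio (restored capacity / backed-up capacity)."""
    if restore_ratio <= 0.05:
        rate = 0.17                      # 5% or less restored: $0.17/GB/mo
    elif restore_ratio >= 0.25:
        rate = 0.50                      # 25% restored: $0.50/GB/mo
    else:
        # Not covered in the presentation; interpolate for illustration only.
        rate = 0.17 + (0.50 - 0.17) * (restore_ratio - 0.05) / 0.20
    return deduped_gb * rate

def dr_drill_cost(deduped_gb):
    """A scheduled DR drill bills that recovery's deduped usage at $0.07/GB/mo."""
    return deduped_gb * 0.07

print(round(monthly_rlm_cost(10_000, 0.04), 2))  # light restorer: 1700.0
print(round(monthly_rlm_cost(10_000, 0.25), 2))  # heavy restorer: 5000.0
print(round(dr_drill_cost(2_000), 2))            # one DR drill:   140.0
```

The interesting design point is the incentive: the less you restore (relative to what you protect), the cheaper your backups get, with the waivers above keeping a catastrophic recovery from blowing up the bill.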

So where do we go from here?

I know a couple of my fellow delegates were disappointed with the focus on Asigra’s licensing innovations, and that there wasn’t more talk of erasure codes and app-centric backups, but they’re probably not the ones writing the checks for software licensing for enterprises. 

Is this the sexiest thing you’ve seen in tech this quarter? Maybe not. I’d point toward PernixData and Infinio for that distinction, in all honesty. But Asigra’s RLM is yet another in a series of innovations from what might be the most innovative DR/BC company you’d never heard of before.

Asigra estimates immediate savings of 40%, and long term savings of over 60% by separating backup and recovery costs.

As an aside, Asigra’s latest software version, 12.2 (released earlier in 2013), backs up Google Apps as well as traditional on-site applications and datastores. Support for Office 365 backups is coming soon.


How do you download storage performance? A look at Infinio Accelerator

Many of you joined us (virtually) at Tech Field Day 9 back in June for the world premiere presentation of Infinio and their “downloadable storage performance” for VMware environments.

In the month and a half since we met Infinio, I’ve been planning to write about their presentation and their product. It’s an interesting technology, and something I can see being useful in small and large environments, but I hadn’t gotten around to piling the thoughts into the blog.

I did find that I was bringing them up in conversation more often than I do most Tech Field Day presenters (with the possible exception of Xangati). Whether I was talking to the CEO of a network optimization startup here in Silly Valley, or a sales VP for a well-established storage virtualization player at the Nth Symposium, or a couple of others, I found myself saying the same things. “Have you heard of Infinio? They just made a splash onto the scene at Tech Field Day 9. You should check them out.”

What is an Infinio?


Peter Smith, Director of Product for Infinio, introducing the “Infinio Way” of deploying the Accelerator

Infinio is a two year old, 30ish-person startup whose Accelerator product is designed to be an easy drop-in to your VMware environment. They’re focusing on making the product easy to try (including substantial engineering focus on the installation process), simple and affordable to buy, and visibly useful to your environment as soon as possible.

CEO Arun Agarwal talked up the focus on the installation process, but even more interesting was his focus on the trial and sales model. This seemed important at the time, but as time passed, I really appreciated the idea more.

Just this past week, I downloaded a “free” VM from a much larger company, only to be told in a pushy followup email that I need to provide a phone number and mailing address and get trial licenses and talk to a sales guy on the phone to do anything with the “free” VM. It was annoying enough to get to this point, and I’m disinclined to actually buy and use that product.

I want a company to provide (1) enough information on their website for me to understand the product, (2) a hands-off model for acquiring and trying out the product (even if it’s at 2am on a Saturday because I can’t sleep and I’ve got a hundred servers sitting idle in a datacenter to play with), (3) smart and non-pushy people to help me with understanding, evaluating, and maybe buying the product if I do decide to move forward–when and if I need them, and not the other way around, and (4) a product that really solves the problem.

Infinio plans to provide all these things. You can download the trial without giving a lot of information (or any, as I recall), and you can buy your licenses with a credit card on the site. This would be a refreshing model, and I’m optimistic about their being able to do it.

So what are they doing?

I was wondering that too… and seeing the phrase “downloadable storage performance” a week or so before the visit, I was dubious.

The Infinio Accelerator is a virtual NAS (NFS v3) accelerator for VMware. It sits between the vmkernel interface and the storage server on each host, providing a shared, deduplicated caching layer to improve performance across your systems. It also works transparently to both storage and server, so you don’t change your storage settings or ACLs (great for those of us who have siloed storage, networking, and virtualization management teams, and all the efficiencies they provide).

And possibly most impressive of all, you don’t have to reboot anything to install or remove the product.

The management console allows you to toggle acceleration on each datastore, and more importantly, monitor the performance and benefit you’re getting from the accelerator. They call out improvements in response time, request offload, and saved bandwidth to storage.

Let’s make this happen


It does make a difference.

Peter Smith demonstrated the Infinio Accelerator for us live, from downloading the installer from the Infinio home page (coming soon) to seeing it make a difference. The process, with questions and distractions included, came in around half an hour.

You download a ~28MB installer, and the installer will pull down about a CD’s worth of VM templates (the Accelerator and the management VM) while you go through the configuration process. (You can apparently work around this download if you need to for network/security reasons–this would be a good opportunity to enlist those smart and non-pushy people mentioned above.)

After the relatively brief installation (faster than checking for updates on a fresh Windows 7 installation, not including downloading and installing all 150 of them, mind you), Peter brought up a workload test with several parallel Linux kernel builds in 8 VMs, demonstrating a 4x speedup with the Accelerator in place even with the memory per VM halved to make room for the Accelerator.


vTARDIS, MacPro flavor

An aside about making room for Infinio: The accelerator will eat 8GB of RAM, 2 vCPUs, and 15GB of local disk space on each hypervisor host you’re accelerating. It will also use 4GB RAM, 2 vCPUs, and 20GB of storage for the management VM, on one of your hosts. So if your virtualization lab is running on your 8GB laptop, you’re gonna have a bad time, but a quad-core lab system with 32GB of RAM should be practical for testing. A typical production hypervisor host (128GB or more) will probably not notice the loss.
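Those figures lend themselves to a quick sanity check before you commit a lab to it. This sketch just tallies the per-host numbers quoted above (8GB RAM and 2 vCPUs per accelerated host, plus a 4GB management VM on one host); it’s my arithmetic, not an Infinio sizing tool.

```python
# Back-of-the-envelope RAM check for the Infinio Accelerator footprint,
# using the per-host figures quoted above. Illustrative only; disk
# (15GB per host, 20GB for the management VM) is ignored here.

ACCEL_RAM_GB = 8   # accelerator VM on each accelerated host
MGMT_RAM_GB = 4    # single management VM, on one host in the cluster

def accel_overhead_gb(num_hosts):
    """Total cluster RAM consumed by accelerator VMs plus the one mgmt VM."""
    return num_hosts * ACCEL_RAM_GB + MGMT_RAM_GB

def has_headroom(host_ram_gb, is_mgmt_host=False):
    """Does a host have anything left after the accelerator (and mgmt VM)?"""
    need = ACCEL_RAM_GB + (MGMT_RAM_GB if is_mgmt_host else 0)
    return host_ram_gb > need

print(accel_overhead_gb(4))                    # 36 GB across a 4-host cluster
print(has_headroom(8))                         # False: an 8GB laptop is all overhead
print(has_headroom(32, is_mgmt_host=True))     # True: the 32GB lab box works
```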

And a further aside about the demo system. As a big fan of Simon Gallagher’s vTARDIS concept of nesting hypervisors, I was pleased to see that the Mac Pro the Infinio folks rolled in for the demo was effectively a vTARDIS in itself. This is a pretty cool way to protect your live demo from the randomness of Internet and VPN connectivity, and from the very real risk that someone back at the home office will turn your lab into a demo for someone else, if your product lends itself to being demonstrated this way.

Some future-proofing considerations

The team at Infinio were very open to the suggestions that came up during the talk.

They have a “brag bar” that offers the chance to tweet your resource savings, but they understood why some companies might not want that option to be there. Some of us work (or have worked) in environments where releasing infrastructure and performance info without running the gauntlet of PR and legal teams could get us punished and/or fired.

They took suggestions of external integration and external access to the product’s information too, from being able to monitor and report on the Accelerator’s performance in another dashboard, to being able to work with the Accelerator from vCops. And they’re working on multi-hypervisor (read: Hyper-V) support and acceleration of block storage. Just takes enough beer and prioritization, we were told. 

So where do we go from here?

Infinio is releasing a more public beta of the Accelerator at VMworld in San Francisco in just a couple of weeks. Stop by and see them if you’re at VMworld, or watch their website for more details about the easy-to-use trial. You can sign up to be notified about the beta release, or just watch for more details near the end of August.

The pricing will be per-socket, with 1 year of support included[1], and hopefully it will be practical for smaller environments as well as large ones. We will see pricing when the product goes to GA later this year.

I’m planning to get the beta to try out in my new lab environment, so stay tuned for news on that when it happens.

And if you’re one of the lucky ones to get a ticket for #CXIParty, you can thank the folks from Infinio there for sponsoring this event as well. And I may see you there.

Disclosure: Infinio were among the presenters/sponsors of Tech Field Day 9, to which I was a delegate in June 2013. While they and other sponsors provided for my travel and other expenses to attend TFD9, there was no assumption or requirement that I write about them, nor was any compensation offered or received in return for this or any other coverage of TFD9 sponsors/presenters.

Some other write-ups from TFD9 Delegates (if I missed yours, let me know and I’ll be happy to add it):

[1] Update: When we talked with Infinio in June, they planned to include 3 years of support with the initial purchase. They are now planning to include 1 year with renewals beyond that being a separate item. This should make the initial purchase more economical, and make budgeting easier as well.

Rough cut: HP Moonshot and CEO Meg Whitman at Nth Symposium 2013

I gotta say the withdrawal symptoms from daily Disneyland visits are getting milder, but I’m home from a week in Anaheim for HP Storage Tech Day and Nth Generation’s 13th Symposium. If you didn’t see it, my preview was posted last month here on rsts11.

I’ll have some more detailed thoughts, including at least one topic that I hadn’t really expected to provoke so much thought, in the next few days. But I wanted to touch on two of the highlights from the Symposium while they’re fresh in my mind.

Disclaimer: Travel to HP Storage Tech Day/Nth Generation Symposium was paid for by HP; however, no monetary compensation was expected or received for the content written in this blog.

Quick Overview of Nth Symposium

Nth Symposium is an annual partner and customer summit held by Nth Generation, the leading HP channel partner in southern California. They’ve done this thirteen times now, bringing customer technologists and executives together with HP and partner representatives for a very productive event. It’s free to qualified IT professionals, so I’d suggest checking it out next year if you are in the area.

Two of the three Nth Symposium keynotes were by execs I’ve worked for before. I was farther down the org chart from (now HP CEO) Meg Whitman when I was at a division of eBay in 2006, but she gave the executive welcome at my new hire orientation. I reported to a VP at 3PAR who reported directly to (now HP Storage VP/GM) David Scott back in 2001. I knew both would be very impressive speakers for a keynote.

HP CEO Meg Whitman

HP CEO Meg Whitman (not channeling Clint Eastwood, don’t worry)

In a definite score for Nth Generation, they convinced Meg Whitman, president and CEO of HP, to give the headline keynote at this year’s symposium.

Whitman’s ability to know on a detailed level, communicate, and see the path forward for a hugely disparate business that probably seems like it’s going that-a-way at full speed in every direction is impressive.

The high level overview of the company’s direction, and the “New Style of IT,”  was to be expected, but her willingness and ability to field unstaged questions from the audience and respond to them in an honest and aware way was what really impressed me.

“Don’t be shy, remember, I ran for public office.”
–Meg Whitman

The three questions I remember involved cross-border ordering and SKU simplification (so that you can easily order the same model for delivery to multiple countries), support cohesiveness and contactability (and responsibility), and the morass that is the HP website.

Fellow blogger John Obeto was set up for a question when Meg called out Nigeria as one of the countries that would not see SKU simplification this year. But she acknowledged that the complexity was counterproductive, and that the company is already working to solve the problems for multinational customers.

Another attendee mentioned the challenges of finding the right contact for support, especially (as I recall) when multiple product lines are involved, or when your contacts at HP leave the company. Having had my HP account manager leave after my first order a couple of jobs ago, and having had her replacements actively and effectively lose my followup business in the months that followed, I know what a pain this can be.

Meg acknowledged the problem as a significant one, suggested using a partner or VAR as an aggregator for contacts within HP (since VARs would have more access to experts and resources within the HP organization), and concluded by offering her personal email address and committing to help until other paths are finalized.

But back to John, who came up to the microphone to decry the exclusion of Nigeria from the 2013 SKU project, and to mention something that probably everyone who has tried to use the HP site for anything but B2C e-commerce already knows: it is pretty difficult to navigate. Meg once again acknowledged the problem (see a pattern here?) and said that they were working on the business-to-business (B2B) and business-to-consumer (B2C) sites separately. One is already under a substantial reorganization, and the other will follow as soon as practical.

In general, I got the sense of Meg Whitman as a CEO being not entirely unlike the (parody) President Jimmy Carter’s fireside chat from Saturday Night Live in the late 70s. I wouldn’t ask her about acid experiences, but it seemed if you asked her about something even several levels down in the chain that was affecting customers, she’d know what was going on and be able to respond to it (or be willing to take the question on and find an answer).

The Moonshot heard round the world

On the topic of Moonshot, Paul Santeler put further time into the discussion in the talk that followed Meg’s keynote, but as I recall Meg also made the bullet point that the HP website now runs on Moonshot rather than a huge farm of servers.

To be specific, they’re using about 720 watts of power to run the whole site. Think about that… as she suggested, you probably use more power on lighting in your home than they do to run a large enterprise web site with support, e-commerce, marketing, and all sorts of other content. (Unless you’ve gone green–I think I’m at a bit under 700W between all the lighting in my home thanks to CFL bulbs, but steady-state power rating for these power supplies is 653W so they win.)

Moonshot is a sub-5U chassis that contains up to 45 server “cartridges” running the Intel Atom S1260 at 2GHz. The cartridge is a bit larger than a Kindle Fire and sports an 8GB ECC DIMM, dual gigabit Ethernet (through a central switching module pair), and a single 2.5″ laptop-style hard drive that can be 500GB or 1TB of 7200rpm spinning disk, or a 200GB MLC SSD.

The 45G switching modules live in the center of the chassis, and the two 6SFP uplink modules give 6 1GBE/10GBE uplinks each via SFP+ connectors. Standard configuration gives you one switch module and one uplink module; the redundancy option is a custom configuration. A 40GBE module is coming soon. The systems are managed via iLO Chassis Management, and multiple systems can be daisy-chained.

If you’d seen the SeaMicro systems circa 2009-2010, the Moonshot will seem like at least an evolutionary development from that concept. The first times I spoke with SeaMicro about their 10U chassis, I asked about a smaller system, around 4U, with fewer than the stock 64 systems. Moonshot gives nearly the capacity of that 10U system, 40% more system RAM, dedicated per-system storage, a third the footprint, and a lower power draw.

There are other cartridges coming, including an 8-core 32GB cartridge (good for thin virtualization) and a DSP-targeted cartridge (voice processing and so forth, running on ARM), so it shouldn’t be a one-trick pony platform. It won’t replace all rackmount and conventional blade servers, but hyperscale is likely to fill a few niches and simplify management and scalability.

So where do we go from here?

I’ve been a fan of 3PAR’s “Utility Storage” platform since I joined the company in 2001. (They’re now buzzwording around Polymorphic Storage, which is also cool.)

One thing I asked about often during my time on Technology Drive in 2001-2002 was a smaller starting point for the InServ platform. With the E and F series, they made some steps in that direction, and I bought an E200 for high performance storage at Trulia a few years ago. But with their new 7200 model, they go even farther into the realm of possibility with a starting list price around $25k.

I’ll be bringing you some details on their platform and enhancements in the next week. I’ll also be looking at the comparison between utility computing platforms from HP and Cisco, a topic that was featured in one of the second tier keynotes.

Stay tuned, and wish me luck on the recovery from convention plague if you don’t mind.