These 3 hot new trends in storage will blow your mind! Okay, maybe not quite. (2/2)

I’ve attended a couple of Tech Field Day events, and watched/participated remotely (in both senses of the word) in a few more, and each event seems to embody themes and trends in the field covered. Storage Field Day 5 was no exception.

I found a few undercurrents in this event’s presentations, and three of them are worth calling out, both to thank the vendors who are following them and to give the next generation of product startups a hint to keep them in mind.

This post is the second of a series of two, for your manageable reading pleasure. The first post is here.

Be sure to check out the full event page, with links to presenters and videos of their presentations, at http://techfieldday.com/event/sfd5/

3. The Progressive Effect: Naming Names Is Great, Calling Names Not So Much

Back at the turn of the century, it was common for vendors to focus on their competition in an unhealthy way. As an example, Auspex (remember them?) told me that their competitor’s Gigabit Ethernet offering was superfluous, and that the competitor would be out of business within months. I’ll go out on a limb and say this was a stupid thing to say to a company whose product was a wire-speed Gigabit Ethernet routing switch, and, well, you saw how quickly NetApp went out of business, right?

At Storage Field Day 5, a couple of vendors presented competitive/comparative analyses of their market segment. This showed a strong awareness of the technology they were touting, an understanding of the choices and tradeoffs that have to be made, and a sense of why each vendor may have made the choices it did.

Beyond that, this kind of analysis can acknowledge the best use for each product, even if it’s the competition’s product. I’ll call this the Progressive Effect, after the insurance company that shows you competitors’ pricing even when it’s a better deal. If you think your product is perfect for every customer use case, you don’t know your product or your customers very well.

Once again, Diablo Technologies did a comparison specifically naming the obvious competitor (Fusion-io), and it was clear that this was a forward-looking comparison: you can order a hundred Fusion-io cards today and put them into current industry-standard servers, whereas ULLtraDIMMs won’t work with most of the servers in your datacenter just yet. But these are products that are likely to be compared in the foreseeable future, so it was useful context, and use cases for both platforms were called out.

SolidFire’s CEO Dave Wright really rocked this topic, though, tearing apart (in more of an iFixit manner than an Auspex manner) three hyperconverged solutions, including his own, showing the details and decisions and where each one makes sense. I suspect most storage company CEOs wouldn’t get into that deep a dive on their own product, much less the competition’s, so it was an impressive experience worth checking out if you haven’t already.

There were some rumblings in the Twittersphere about how knowing your competitor and not hiding them behind “Competitor A” or the like was invoking fear, uncertainty, and doubt (FUD). And while it is a conservative, and acceptable, option not to name a competitor if you have a lot of them–Veeam chose this path in their comparisons, for example–that doesn’t mean that it’s automatically deceptive to give a fair and informed comparison within your competitive market.

If Dave Wright had gone in front of the delegates and told us how bad all the competitors were and why they couldn’t do anything right, we probably would’ve caught up on our email backlogs faster, or asked him to change horses even in mid-stream. If he had dodged or danced around questions about his own company’s platform, some (most?) of us would have been disappointed. Luckily, neither of those happened.

But as it stands, he dug into the tech in an even-handed way, definitely adding value to the presentation and giving some insights that not all of us would have had beforehand. In fact, more than one delegate felt that SolidFire’s comparison gave us the best available information on one particular competitor’s product in that space.


This is a post related to Storage Field Day 5, the independent influencer event being held in Silicon Valley April 23-25, 2014. As a delegate to SFD5, I am chosen by the Tech Field Day community and my travel and expenses are covered by Gestalt IT. I am not required to write about any sponsoring vendor, nor is my content reviewed. No compensation has been or will be received for this or any other Tech Field Day post.


These 3 hot new trends in storage will blow your mind! Okay, maybe not quite. (1/2)

I’ve attended a couple of Tech Field Day events, and watched/participated remotely (in both senses of the word) in a few more, and each event seems to embody themes and trends in the field covered. Storage Field Day 5 was no exception.

I found a few undercurrents in this event’s presentations, and three of them are worth calling out, both to thank the vendors who are following them and to give the next generation of product startups a hint to keep them in mind.

This post is the first of a series of two, for your manageable reading pleasure. Part two is now available here.

Be sure to check out the full event page, with links to presenters and videos of their presentations, at http://techfieldday.com/event/sfd5/

1. Predictability and Sustainability Are The Right Metrics

There are three kinds of falsehoods in tech marketing: lies, damned lies, and benchmarks. Many (most?) vendors will pitch their best-case, perfect-environment, most advantageous results as a reason to choose them. But as with Teavana’s in-store tasting controversy, when you get the stuff home and try to reproduce the advertised effects, you end up with weak tea. My friend Howard Marks wrote about this recently in relation to VMware’s 2-million-IOPS VSAN benchmark.

At SFD5, we had a couple of presenters stress not best-case, least-realistic results, but predictable and reproducible ones. Most applications aren’t going to benefit much from a high burst rate paired with tepid average performance, whether it’s on the server hardware, the storage back end, or the network. But consistent quality of service (QoS) and a reliable set of expectations that can be met (and maybe exceeded) will lead to satisfied customers and successful implementations.
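
To make the distinction concrete, here’s a minimal sketch (my own illustration, not any vendor’s tooling) of how a burst number and a sustained, percentile-based number can tell very different stories about the same run:

```python
# Summarize a benchmark run with both the "hero" number and the sustainable one.
import math
import statistics

def summarize(iops_samples, latency_ms_samples):
    burst_iops = max(iops_samples)                    # the number on the slide
    sustained_iops = statistics.median(iops_samples)  # what you'd plan capacity around
    p99_index = max(0, math.ceil(0.99 * len(latency_ms_samples)) - 1)
    return {
        "burst_iops": burst_iops,
        "sustained_iops": sustained_iops,
        "avg_latency_ms": round(statistics.mean(latency_ms_samples), 2),
        "p99_latency_ms": sorted(latency_ms_samples)[p99_index],
    }

# A workload that bursts high early, then settles much lower.
iops = [250_000, 240_000, 90_000, 88_000, 87_000, 89_000, 86_000, 88_000]
latency = [0.4, 0.5, 2.1, 2.3, 2.2, 2.4, 9.8, 2.2]
print(summarize(iops, latency))
```

The point isn’t the arithmetic; it’s that the sustained and tail figures are the ones your users will actually live with.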

One example of this was Diablo Technologies, the folks behind Memory Channel Storage, implemented by SanDisk as ULLtraDIMM. In comparing the performance of the MCS flash implementation against a PCIe storage option (Fusion-io’s product, to be precise), they showed performance and I/O results across a range of measurements, and rather than pitching the best results, they touted the sustainable results you’d expect to see regularly with the product.

SanDisk themselves referred to some configuration options under the hood, not generally available to end users, to trade some lifespan for daily duty cycles. Since these products are not yet mass market on the level of a consumer-grade 2.5″ SSD, it makes sense to make that a support/integration option rather than having users open up a Magician-like tool to tweak ULLtraDIMMs themselves.

Another example was SolidFire, who also advocated setting expectations at what would be sustainable. They refer to “guaranteed performance,” which comes down to QoS and sane configuration, with linear scalability as the cluster grows feeding into that same predictability.
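
As an illustration only (this is not SolidFire’s API, just a sketch of the general idea), per-volume QoS usually boils down to a floor, a sustained ceiling, and a short-lived burst allowance that a scheduler can actually enforce:

```python
# Illustrative sketch of per-volume QoS settings; names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class VolumeQoS:
    min_iops: int    # floor the platform promises even under contention
    max_iops: int    # sustained ceiling for this volume
    burst_iops: int  # short-term ceiling, available while burst credits last

    def allowed_iops(self, demanded: int, burst_credits: float) -> int:
        """Clamp what a volume may do in this scheduling interval."""
        ceiling = self.burst_iops if burst_credits > 0 else self.max_iops
        return min(demanded, ceiling)

vol = VolumeQoS(min_iops=1_000, max_iops=5_000, burst_iops=10_000)
print(vol.allowed_iops(demanded=8_000, burst_credits=2.5))  # 8000 while bursting
print(vol.allowed_iops(demanded=8_000, burst_credits=0.0))  # 5000 once credits run out
```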

2. Your Three Control Channels Should Be Equivalent

There are generally three ways to control a product, whether it’s a software appliance, a hardware platform, or something else. You have a command-line interface (CLI), a graphical user interface (GUI) of some sort (often either a web front end or an applet/installed application), and an API for automated access (XML, REST, SOAP, sendmail.cf).

I will assert that a good product will have all three of these: CLI, GUI, and API. A truly mature product will have full feature parity among the three: any operation you can execute against the product from one of them can be done with identical effect from the other two.
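
One way to get there, sketched below purely as an illustration (every name here is hypothetical), is to route all three channels through the same operations layer, so a feature can’t ship in one channel and not the others:

```python
# Hypothetical sketch: CLI, API, and (by extension) GUI all call one operations layer.
import argparse
import json

def create_volume(name: str, size_gb: int) -> dict:
    """Single source of truth for the operation, regardless of entry point."""
    # Real provisioning logic would live here.
    return {"name": name, "size_gb": size_gb, "status": "created"}

def cli_entry(argv=None):
    parser = argparse.ArgumentParser(prog="stor")
    parser.add_argument("name")
    parser.add_argument("--size-gb", type=int, required=True)
    args = parser.parse_args(argv)
    print(json.dumps(create_volume(args.name, args.size_gb)))

def api_entry(request_body: str) -> str:
    """REST-style handler; a web GUI would call this same endpoint."""
    payload = json.loads(request_body)
    return json.dumps(create_volume(payload["name"], payload["size_gb"]))

if __name__ == "__main__":
    cli_entry(["vol01", "--size-gb", "100"])
    print(api_entry('{"name": "vol02", "size_gb": 200}'))
```

The mechanics matter less than the principle: if every channel is a thin wrapper over the same layer, parity stops being a separate work item.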

This seems to be a stronger trend than it was a couple of years ago. At my first Tech Field Day events, as I recall, there were still people who felt a CLI was an afterthought and an API could be limited. When you’re trying to get your product out the door before your competitor locks you out of the market, that could be defensible, much as putting off documentation until after the product shipped was once defended.

But today, nobody should consider a product ready to ship until it has full management channel equality. And as I recall, most of the vendors we met with who have a manageable product (I’m giving SanDisk and Diablo Tech a pass on this one for obvious reasons) were closer to the “of course we have that” stance than to the “why would we need that?” attitude that used to be de rigueur in the industry.

Once again, this is part one of two on trends observed at Storage Field Day 5. Part 2 is now available at this link.

This is a post related to Storage Field Day 5, the independent influencer event being held in Silicon Valley April 23-25, 2014. As a delegate to SFD5, I am chosen by the Tech Field Day community and my travel and expenses are covered by Gestalt IT. I am not required to write about any sponsoring vendor, nor is my content reviewed. No compensation has been or will be received for this or any other Tech Field Day post. 

How do you solve a problem like Invicta? PernixData and external high-performance cache

PernixData and unconventional flash caching

We spent a captivating two hours at PernixData in San Jose Wednesday. For more general and detailed info on the conversations and related announcements, check out this post by PernixData’s Frank Denneman on their official blog, and also check out Duncan Epping’s post on Yellow Bricks.

At a very high and imprecise level, PernixData’s FVP came out last year to provide a caching layer (using flash storage, whether PCIe or SSD) injected at the vmkernel level on VMware hypervisors. One big development this week was the option to use RAM in place of (or in addition to) flash as a caching layer, but that is unrelated to my thoughts below.

One odd question arose during our conversation with Satyam Vaghani, CTO and co-founder of PernixData. Justin Warren, another delegate, asked the seemingly simple question of whether you could use external flash as cache for a cluster (or clusters) using PernixData’s FVP. Satyam’s answer was a somewhat surprising “yes.”

I thought (once Justin mentioned it) that this was an obvious if somewhat niche idea. Having worked to get scheduled downtime for a hundred servers on several occasions in the past year, I can imagine why I might not want to (or be able to) shut down 100 hypervisor blades to install flash into them. Instead, I could put a pile of flash into one or more centrally accessible, high-speed, relatively low-latency (compared to spinning disk) hosts, or perhaps bring in something like Fusion-io’s Ion Accelerator platform.

I took a bit of ribbing from a couple of other delegates, who didn’t see any situation where this would be useful. You always have plenty of spare hypervisor capacity, and flash that can go into those servers, and the time and human resources to handle the upgrades, right? If so, I mildly envy you.

So what’s this about Invicta?

Cisco’s UCS Invicta platform (the evolution of WHIPTAIL) is a flash block storage platform based on a Cisco UCS C240-M3S rackmount server with 24 consumer-grade MLC SSD drives. Today its official placement is as a standalone device, managed by Cisco UCS Director, serving FC to UCS servers. The party line is that using it with any other platform or infrastructure is off-label.

I’ve watched a couple of presentations on the Invicta play, and it hasn’t yet been clear how Cisco sees it playing against similar products in the market (e.g., Fusion-io’s Ion Accelerator). When I asked on a couple of occasions in public presentations, the comparison was reduced to Fusion-io ioScale/ioDrive PCIe cards, which is neither a fair nor an applicable comparison. You wouldn’t compare Coho Data arrays to single-SSD enclosures. So for a month or so I was stuck with the logical progression:

  1. Flash is fast
  2. ???
  3. Buy UCS and Invicta

Last month, word came out that Cisco was selling Invicta arrays against Pure Storage and EMC XtremIO, for heterogeneous environments, which also seems similar to the market for Ion Accelerator. Maybe I called it in the air. Who knows? The platform finally made sense in the present though.

Two great tastes that taste great together?

Wednesday afternoon I started putting the pieces together. Today you can serve up an Invicta appliance as block storage and probably (I haven’t validated this) access it from a host or hosts running PernixData’s FVP. You’d be dealing with either FC or possibly iSCSI. It will serve as well as the competing flash appliances.

But when Cisco gets Invicta integrated into the UCS infrastructure, hopefully with native support for iSCSI and FCoE traffic, you’ll be talking about 10 gigabit connections within the Fabric Interconnect for cache access. You’ll be benefiting from the built-in redundancy, virtual interface mapping and pinning, and control from UCS Manager/UCS Central. You’re keeping your cache within a rack or pod. And if you need to expand the cache you won’t need to open up any of your servers or take them down. You’d be able to put another Invicta system in, map it in, and use it just as the first one is being used.

If you’re not in a Cisco UCS environment, it looks like you could still use Invicta arrays, or Fusion-io, or other pure flash players (even something like a whitebox or channel partner Nexenta array, at least for proof-of-concept).

So where do we go from here?

The pure UCS integration for Invicta is obviously on the long-term roadmap, and hopefully the business units involved see the benefits of true integration at the FI level and move that forward soon.

I’m hoping to get my hands on a trial of FVP, one way or another, and possibly build a small flash appliance in my lab as well as putting some SSDs in my C6100 hypervisor boxes.

It would be interesting to compare the benefits of internal vs. external flash integration over a conventional 10GbE (non-converged) network. This could provide some insight into a mid-market bolt-on solution, and give some further enlightenment on when and why you might take this option over internal flash. I know that I won’t be able to put a PCIe flash card into my C6100s unless I give up 10GbE (one PCIe slot per server, darn), although with FVP’s newly announced network compression, that might be viable.

What are your thoughts on external server-side cache? Do you think something like this would be useful in an environment you’ve worked with? Feel free to chime in on the comments section below.

This is a post related to Storage Field Day 5, the independent influencer event being held in Silicon Valley April 23-25, 2014. As a delegate to SFD5, I am chosen by the Tech Field Day community and my travel and expenses are covered by Gestalt IT. I am not required to write about any sponsoring vendor, nor is my content reviewed. No compensation has been or will be received for this or any other Tech Field Day post. I am a Cisco Champion, but all Cisco information in this post is public knowledge and was received in public channels.

Some upcoming events worth a look

I haven’t been to my datacenter in over six months. I have a feeling the front desk folks at the Westin Casuarina are missing me by now. But I’m still on the move. Hopefully I’ll see some of you at one of the following events in the near future. 

VMworld US 2013 & Tech Field Day Roundtables at VMworld

This year’s VMworld is in San Francisco, just a 90-180 minute commute (each way) from where I live in Silicon Valley. Thanks to the gracious support of Gestalt IT’s Tech Field Day and the Tech Field Day Roundtable at VMworld sponsors, I’ll be camping in San Francisco and making the most of the opportunities during the week. 

Along with a dozen and a half other Tech Field Day delegates, I’ll be meeting with our friends from Asigra, Commvault, Infinio, and SimpliVity. I’ve been to TFD sessions with all but SimpliVity, but I’ve met Gabriel Chapman (@bacon_is_king) at the SV VMUG, so they’re not strangers to me either (even if their “cube” is actually not cubical).

In addition to the vExpert and VMware customer events, I’ll also be visiting friends from past Tech Field Day meetings, including Scale Computing, Nutanix, Zerto, Pure Storage, and Tintri. If I’ve missed anyone, feel free to touch base. 

Software Defined Data Center Symposium

Gestalt IT is hosting a full-day SDDC symposium at Techmart in Santa Clara, a mere 10-15 minute commute for me. There’s still room to join us on Tuesday, September 10th, for a day of discussions about SDDC topics, featuring Greg Ferro, Jim Duffy, Ivan Pepelnjak, and several leading vendors in the field. The event will set you back a mere $25, and that includes lunch.

The Cloudera Sessions

This one actually has nothing to do with Gestalt IT, but if you’re deep into Hadoop, and Cloudera’s particular flavor of it, it’s definitely worth a visit. Cloudera hosts The Cloudera Sessions in cities around the United States, and I’ll be attending the San Francisco event on September 11th.

Several Cloudera technologists, from the system engineering manager to the co-founder/CTO will be talking about where the company is going and where Hadoop is going in the foreseeable future. This event will set you back $149, but if you are a current Cloudera customer, check with your account manager to see if you can get a discount. 

BayLISA At Joyent

The October 17 meeting of BayLISA, the Silicon Valley and San Francisco Bay Area’s oldest system administration group, will be held in San Francisco at the headquarters of one of the most prominent Solaris technology companies, Joyent. We’re looking forward to hearing from Brendan Gregg about his new book, Systems Performance: Enterprise and the Cloud, as well as getting an update on Joyent’s Manta storage service.

Attendance is free, but space is limited. RSVP at the BayLISA Meetup site if you’re interested. 

IEEE Computer Society’s Rock Stars Of Big Data

As much as I hate the use of the term “rock stars” (since that’s not necessarily a compliment or a good thing), this event looks interesting. I’m not sure how useful it will be for technologists, but it’s worth a look. IEEE Computer Society is hosting their Rock Stars Of Big Data event at the Computer History Museum in Mountain View on October 29th. It will set you back $239 as an IEEECS member, or $299 without membership. Group discounts are available for registration of 3 or more people on one ticket. 

Mickey’s Not So Scary Halloween Party

Everyone deserves a bit of a break, and big data can wear a technologist out… If you’re planning to be at the Magic Kingdom between September 10 and November 1, you should check out Mickey’s Not So Scary Halloween Party. I went two years ago and it was pretty enjoyable. I do work for the Mouse, but I don’t get any benefit if you go. So I highly recommend it.


How do you download storage performance? A look at Infinio Accelerator

Many of you joined us (virtually) at Tech Field Day 9 back in June for the world premiere presentation of Infinio and their “downloadable storage performance” for VMware environments.

In the month and a half since we met Infinio, I’ve been planning to write about their presentation and their product. It’s an interesting technology, and something I can see being useful in small and large environments, but I hadn’t gotten around to piling the thoughts into the blog.

I did find that I was bringing them up in conversation more often than I do most Tech Field Day presenters (with the possible exception of Xangati). Whether I was talking to the CEO of a network optimization startup here in Silly Valley, or a sales VP for a well-established storage virtualization player at the Nth Symposium, or a couple of others, I found myself saying the same things. “Have you heard of Infinio? They just made a splash onto the scene at Tech Field Day 9. You should check them out.”

What is an Infinio?


Peter Smith, Director of Product for Infinio, introducing the “Infinio Way” of deploying the Accelerator

Infinio is a two-year-old, 30-ish-person startup whose Accelerator product is designed to be an easy drop-in for your VMware environment. They’re focusing on making the product easy to try (including substantial engineering focus on the installation process), simple and affordable to buy, and visibly useful to your environment as soon as possible.

CEO Arun Agarwal talked up the focus on the installation process, but even more interesting was his focus on the trial and sales model. This seemed important at the time, but as time passed, I really appreciated the idea more.

Just this past week, I downloaded a “free” VM from a much larger company, only to be told in a pushy followup email that I need to provide a phone number and mailing address and get trial licenses and talk to a sales guy on the phone to do anything with the “free” VM. It was annoying enough to get to this point, and I’m disinclined to actually buy and use that product.

I want a company to provide (1) enough information on their website for me to understand the product, (2) a hands-off model for acquiring and trying out the product (even if it’s at 2am on a Saturday because I can’t sleep and I’ve got a hundred servers sitting idle in a datacenter to play with), (3) smart and non-pushy people to help me with understanding, evaluating, and maybe buying the product if I do decide to move forward–when and if I need them, and not the other way around, and (4) a product that really solves the problem.

Infinio plans to provide all these things. You can download the trial without giving a lot of information (or any, as I recall), and you can buy your licenses with a credit card on the site. This would be a refreshing model, and I’m optimistic about their being able to do it.

So what are they doing?

I was wondering that too… and seeing the phrase “downloadable storage performance” a week or so before the visit, I was dubious.

The Infinio Accelerator is a virtual NAS (NFS v3) accelerator for VMware. It sits between the vmkernel interface and the storage server on each host, providing a shared, deduplicated caching layer to improve performance across your systems. It also works transparently to both storage and server, so you don’t change your storage settings or ACLs (great for those of us who have siloed storage, networking, and virtualization management teams, and all the efficiencies they provide).

And possibly most impressive of all, you don’t have to reboot anything to install or remove the product.

The management console allows you to toggle acceleration on each datastore, and more importantly, monitor the performance and benefit you’re getting from the accelerator. They call out improvements in response time, request offload, and saved bandwidth to storage.
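
To give a feel for what a “shared, deduplicated caching layer” means in practice, here’s a rough sketch of the general idea (my own illustration, not Infinio’s implementation): cached blocks are keyed by a content hash, so identical blocks, like the same guest OS bits across many VMs, are stored once.

```python
# Sketch of a content-addressed (deduplicated) read cache; not Infinio's code.
import hashlib

class DedupReadCache:
    def __init__(self):
        self.block_by_hash = {}   # content hash -> block data, stored once
        self.hash_by_addr = {}    # (datastore, offset) -> content hash

    def read(self, addr, fetch_from_backend):
        """Return the block at addr, going to the backing NFS store only on a miss."""
        h = self.hash_by_addr.get(addr)
        if h is not None and h in self.block_by_hash:
            return self.block_by_hash[h]              # cache hit, no backend I/O
        block = fetch_from_backend(addr)              # cache miss
        h = hashlib.sha1(block).hexdigest()
        self.block_by_hash.setdefault(h, block)       # dedup: one copy per unique block
        self.hash_by_addr[addr] = h
        return block

cache = DedupReadCache()
backend = lambda addr: b"guest-os-block"              # stand-in for the NFS datastore
cache.read(("ds1", 0), backend)                       # first read fills the cache
print(cache.read(("ds1", 0), backend))                # second read is served from cache
```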

Let’s make this happen


It does make a difference.

Peter Smith demonstrated the Infinio Accelerator for us live, from downloading the installer from the Infinio home page (coming soon) to seeing it make a difference. The process, questions and distractions included, came in at around half an hour.

You download a ~28MB installer, and the installer will pull down about a CD’s worth of VM templates (the Accelerator and the management VM) while you go through the configuration process. (You can apparently work around this download if you need to for network/security reasons–this would be a good opportunity to enlist those smart and non-pushy people mentioned above.)

After the relatively brief installation (faster than checking for updates on a fresh Windows 7 installation, not including downloading and installing all 150 of them, mind you), Peter brought up a workload test with several parallel Linux kernel builds in 8 VMs, demonstrating a 4x speedup with the Accelerator in place even with the memory per VM halved to make room for the Accelerator.


vTARDIS, MacPro flavor

An aside about making room for Infinio: The accelerator will eat 8GB of RAM, 2 vCPUs, and 15GB of local disk space on each hypervisor host you’re accelerating. It will also use 4GB RAM, 2 vCPUs, and 20GB of storage for the management VM, on one of your hosts. So if your virtualization lab is running on your 8GB laptop, you’re gonna have a bad time, but a quad-core lab system with 32GB of RAM should be practical for testing. A typical production hypervisor host (128GB or more) will probably not notice the loss.
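
As a quick back-of-the-envelope check of those numbers (the per-host accelerator and management VM figures quoted above; the lab host specs below are just hypothetical examples):

```python
# Rough capacity math using the footprint quoted above:
# accelerator per host: 8 GB RAM / 2 vCPUs; management VM: 4 GB / 2 vCPUs on one host.
def remaining_resources(host_ram_gb, host_vcpus, runs_mgmt_vm=False):
    ram = host_ram_gb - 8 - (4 if runs_mgmt_vm else 0)
    vcpus = host_vcpus - 2 - (2 if runs_mgmt_vm else 0)
    return ram, vcpus

print(remaining_resources(32, 8, runs_mgmt_vm=True))   # 32 GB lab box -> (20 GB, 4 vCPUs) left
print(remaining_resources(128, 32))                     # bigger production host -> (120 GB, 30 vCPUs) left
```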

And a further aside about the demo system: as a big fan of Simon Gallagher’s vTARDIS concept of nesting hypervisors, I was pleased to see that the Mac Pro the Infinio folks rolled in for the demo was effectively a vTARDIS in itself. This is a pretty cool way to protect your live demo from the randomness of Internet and VPN connectivity, and from the very real risk that someone will turn your lab back at the home office into a demo for someone else, if your product lends itself to being demonstrated this way.

Some future-proofing considerations

The team at Infinio were very open to the suggestions that came up during the talk.

They have a “brag bar” that offers the chance to tweet your resource savings, but they understood why some companies might not want that option to be there. Some of us work (or have worked) in environments where releasing infrastructure and performance info without running the gauntlet of PR and legal teams could get us punished and/or fired.

They took suggestions about external integration and external access to the product’s information too, from being able to monitor and report on the Accelerator’s performance in another dashboard, to being able to work with the Accelerator from vCOps. And they’re working on multi-hypervisor (read: Hyper-V) support and acceleration of block storage. It just takes enough beer and prioritization, we were told.

So where do we go from here?

Infinio is releasing a more public beta of the Accelerator at VMworld in San Francisco in just a couple of weeks. Stop by and see them if you’re at VMworld, or watch their website for details about the easy-to-use trial. You can sign up to be notified about the beta release, or just check back near the end of August.

The pricing will be per-socket, with 1 year of support included[1], and hopefully it will be practical for smaller environments as well as large ones. We will see pricing when the product goes to GA later this year.

I’m planning to get the beta to try out in my new lab environment, so stay tuned for news on that when it happens.

And if you’re one of the lucky ones to get a ticket for #CXIParty, you can thank the folks from Infinio for sponsoring that event as well. I may see you there.

Disclosure: Infinio were among the presenters/sponsors of Tech Field Day 9, to which I was a delegate in June 2013. While they and other sponsors provided for my travel and other expenses to attend TFD9, there was no assumption or requirement that I write about them, nor was any compensation offered or received in return for this or any other coverage of TFD9 sponsors/presenters.

Some other write-ups from TFD9 Delegates (if I missed yours, let me know and I’ll be happy to add it):

[1] Update: When we talked with Infinio in June, they planned to include 3 years of support with the initial purchase. They are now planning to include 1 year with renewals beyond that being a separate item. This should make the initial purchase more economical, and make budgeting easier as well.