Live from Interop 2016: Wireless Big Data, #interop4things, and hats #rsts11 #interop #bigdata

For the fourth year, I’m spending a week’s vacation in Las Vegas attending Interop. What started with Stephen Foskett inviting me to participate in a Tech Field Day Roundtable at Interop 2013 has become a tradition, thanks to the courtesy of Interop PR. I’ve experienced all four hotels in the Mandalay strip, learned the secret identity of airline chicken, and met some great people with great projects and products and the occasional interesting vending machine.

YOU CAN’T SEE MY HAT

My coworkers were in large part confused that I would take vacation time, come to Las Vegas mostly on my own dime[1], and act like I don’t have a day job. When I did things like this during my employment with Disney, I’d “leave my ears at home,” as I did for the Tech Field Day events. Cisco doesn’t have iconic ears, and I don’t have a bridge hat (as Teren Bryson suggested yesterday). But I still leave work behind.

You’re not going to be able to leave your experience and expertise behind, of course, and there are enough folks out there who know who I work for, but my work phone, work laptop, work business cards, and work identity[2] are left behind for the duration of an event like this.


These 3 hot new trends in storage will blow your mind! Okay, maybe not quite. (2/2)

I’ve attended a couple of Tech Field Day events, and watched/participated remotely (in both senses of the word) in a few more, and each event seems to embody themes and trends in the field covered. Storage Field Day 5 was no exception.

I found a few undercurrents in this event’s presentations, and three of these are worth calling out, both to thank those who are following them and to give the next generation of product startups a hint to keep them in mind.

This post is the second of a series of two, for your manageable reading pleasure. The first post is here.

Be sure to check out the full event page, with links to presenters and videos of their presentations, at http://techfieldday.com/event/sfd5/

3. The Progressive Effect: Naming Names Is Great, Calling Names Not So Much

Back at the turn of the century, it was common for vendors to focus on their competition in an unhealthy way. As an example, Auspex (remember them?) told me that their competitor’s offering of Gigabit Ethernet was superfluous, and that the competitor would be out of business within months. I’ll go out on a limb and say this was a stupid thing to say to a company whose product was a wire-speed Gigabit Ethernet routing switch, and, well, you see how quickly NetApp went out of business, right?

At Storage Field Day 5, a couple of vendors presented competitive/comparative analysis of their market segment. This showed a strong awareness of the technology they were touting, understanding of what choices and tradeoffs have to be made, and why each vendor may have made the choices they did.

Beyond that, it can acknowledge the best use for each product, even if it’s the competition’s product. I’ll call this the Progressive Effect, after the insurance company who shows you the competitor’s pricing even if it’s a better deal. If you think your product is perfect for every customer use case, you don’t know your product or the customer very well.

Once again, Diablo Technologies did a comparison specifically naming the obvious competitor (Fusion-io), and it was clearly a forward-looking comparison: you can order a hundred Fusion-io cards today and put them into current industry-standard servers, while ULLtraDIMMs won’t work in most of the servers in your datacenter just yet. But these are products that are likely to be compared in the foreseeable future, so it was useful that context and use cases for both platforms were called out.

Solidfire’s CEO Dave Wright really rocked this topic though, tearing apart (in more of an iFixit manner than an Auspex manner) three hyperconverged solutions including his own, showing the details and decisions and where each one makes sense. I suspect most storage company CEOs wouldn’t get into that deep of a dive on their own product, much less the competition, so it was an impressive experience worth checking out if you haven’t already.

There were some rumblings in the Twittersphere about how naming your competitor, rather than hiding them behind “Competitor A” or the like, was invoking fear, uncertainty, and doubt (FUD). And while it is a conservative, and acceptable, option not to name a competitor if you have a lot of them–Veeam chose this path in their comparisons, for example–that doesn’t mean it’s automatically deceptive to give a fair and informed comparison within your competitive market.

If Dave Wright had gone in front of the delegates and told us how bad all the competitors were and why they couldn’t do anything right, we probably would’ve caught up on our email backlogs faster, or asked him to change horses even in mid-stream. If he had dodged or danced around questions about his own company’s platform, some (most?) of us would have been disappointed. Luckily, neither of those happened.

But as it stands, he dug into the tech in an even-handed way, definitely adding value to the presentation and giving some insights that not all of us would have had beforehand. In fact, more than one delegate felt that Solidfire’s comparison gave us the best available information on one particular competitor’s product in that space.

This is a post related to Storage Field Day 5, the independent influencer event being held in Silicon Valley April 23-25, 2014. As a delegate to SFD5, I am chosen by the Tech Field Day community and my travel and expenses are covered by Gestalt IT. I am not required to write about any sponsoring vendor, nor is my content reviewed. No compensation has been or will be received for this or any other Tech Field Day post.

These 3 hot new trends in storage will blow your mind! Okay, maybe not quite. (1/2)

I’ve attended a couple of Tech Field Day events, and watched/participated remotely (in both senses of the word) in a few more, and each event seems to embody themes and trends in the field covered. Storage Field Day 5 was no exception.

I found a few undercurrents in this event’s presentations, and three of these are worth calling out, both to thank those who are following them and to give the next generation of product startups a hint to keep them in mind.

This post is one of a series of two, for your manageable reading pleasure. Part two is now available here.

Be sure to check out the full event page, with links to presenters and videos of their presentations, at http://techfieldday.com/event/sfd5/

1. Predictability and Sustainability Are The Right Metrics

There are three kinds of falsehoods in tech marketing: lies, damned lies, and benchmarks. Many (most?) vendors will pitch their best-case, perfect-environment, most advantageous results as a reason to choose them. But as with Teavana’s in-store tasting controversy, when you get the stuff home and try to reproduce the advertised effects, you end up with weak tea. My friend Howard Marks wrote about this recently in relation to VMware’s 2-million IOPS VSAN benchmark.

At SFD5, we had a couple of presenters stress not best-case (and least realistic) results, but predictable and reproducible ones. Most applications aren’t going to benefit much from a high burst rate paired with tepid average performance, whether the bottleneck is server hardware, the storage back end, or the network. But consistent quality of service (QoS) and a reliable set of expectations that can be met (and maybe exceeded) will lead to satisfied customers and successful implementations.
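As a rough sketch of why sustained numbers tell you more than a benchmark headline, consider summarizing an IOPS time series. The numbers below are invented for illustration, not measurements of any product:

```python
import statistics

def summarize_iops(samples):
    """Summarize an IOPS time series: peak looks great in a datasheet,
    but median and near-worst-case numbers predict real-world behavior."""
    samples = sorted(samples)
    n = len(samples)
    return {
        "peak": samples[-1] if n else 0,          # the benchmark headline
        "median": statistics.median(samples),     # typical steady state
        # roughly the 5th percentile: what you see on a bad interval
        "p05": samples[max(0, int(n * 0.05) - 1)] if n else 0,
    }

# A bursty device: brief spikes, mediocre steady state
bursty = [20000] * 5 + [4000] * 95
# A consistent device: no headline number, but dependable
steady = [6000] * 100

print(summarize_iops(bursty))  # peak dwarfs the median
print(summarize_iops(steady))  # peak, median, and p05 all agree
```

The bursty device "wins" on peak IOPS, but the steady device is the one whose QoS promises a customer can actually plan around.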

One example of this was with Diablo Technologies, the folks behind Memory Channel Storage implemented by Sandisk as ULLtraDIMM. In comparing the performance of the MCS flash implementation against a PCIe storage option (Fusion-io’s product, to be precise), they showed performance and I/O results across a range of measurements, and rather than pitching the best results, they touted the sustainable results that you’d expect to see regularly with the product.

Sandisk themselves referred to some configuration options under the hood, not generally available to end users, to trade some lifespan for daily duty cycles. Since these products are not yet mass market on the level of a consumer grade 2.5″ SSD, it makes sense to make that a support/integration option rather than just having users open up a Magician-like product to tweak ULLtraDIMMs themselves.

Another example was Solidfire, who also advocated setting expectations to what would be sustainable. They refer to “guaranteed performance,” which comes down to QoS and sane configuration, with linear scalability as the cluster grows rounding out that predictability story.

2. Your Three Control Channels Should Be Equivalent

There are generally three ways to control a product, whether it’s a software appliance, a hardware platform, or more. You have a command-line interface (CLI), a graphical user interface (GUI) of some sort–often either a web front-end or an applet/installed application, and an API for automated access (XML, REST, SOAP, sendmail.cf).

I will assert that a good product will have all three of these: CLI, GUI, API. A truly mature product will have full feature equity between the three. Any operation you can execute against the product from one of them can be done with identical effectiveness from the other two.
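A hypothetical sketch of what that channel equity looks like structurally: the CLI and API handlers are thin adapters over a single shared operations layer, so neither channel can lag the other. All names here (VolumeOps, the routes, the CLI verbs) are invented for illustration, not any vendor's actual interface:

```python
import json

class VolumeOps:
    """Single source of truth for management operations (hypothetical)."""
    def __init__(self):
        self.volumes = {}

    def create_volume(self, name, size_gb):
        self.volumes[name] = {"name": name, "size_gb": size_gb}
        return self.volumes[name]

def cli_handler(argv, ops):
    # e.g. argv = ["volume", "create", "vol1", "100"]
    if argv[:2] == ["volume", "create"]:
        return ops.create_volume(argv[2], int(argv[3]))
    raise SystemExit("unknown command")

def api_handler(method, path, body, ops):
    # e.g. POST /volumes {"name": "vol2", "size_gb": 200}
    if method == "POST" and path == "/volumes":
        req = json.loads(body)
        return ops.create_volume(req["name"], req["size_gb"])
    raise ValueError("unknown route")

ops = VolumeOps()
cli_handler(["volume", "create", "vol1", "100"], ops)
api_handler("POST", "/volumes", '{"name": "vol2", "size_gb": 200}', ops)
print(sorted(ops.volumes))  # both channels landed in the same state
```

When every channel dispatches to the same core, "can the API do what the CLI does?" stops being a question; a GUI built on the same layer inherits the same parity.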

This seems to be a stronger trend than it was a couple of years ago. At my first Tech Field Day events, as I recall, there were still people who felt a CLI was an afterthought and an API could be limited. When you’re trying to get your product out the door before your competitor locks you out of the market, that could be defensible, much as putting off documentation until after the product shipped was once defended.

But today, nobody should consider a product ready to ship until it has full management channel equality. And as I recall, most of the vendors we met with who have a manageable product (I’m giving Sandisk and Diablo Tech a pass on this one for obvious reasons) were closer to the “of course we have that” stance than the “why would we need that” that used to be de rigueur in the industry.

Once again, this is part one of two on trends observed at Storage Field Day 5. Part 2 is now available at this link.

This is a post related to Storage Field Day 5, the independent influencer event being held in Silicon Valley April 23-25, 2014. As a delegate to SFD5, I am chosen by the Tech Field Day community and my travel and expenses are covered by Gestalt IT. I am not required to write about any sponsoring vendor, nor is my content reviewed. No compensation has been or will be received for this or any other Tech Field Day post. 

How do you solve a problem like Invicta? PernixData and external high performance cache

PernixData and unconventional flash caching

We spent a captivating two hours at PernixData in San Jose Wednesday. For more general and detailed info on the conversations and related announcements, check out this post by PernixData’s Frank Denneman on their official blog, and also check out Duncan Epping’s post on Yellow Bricks.

At a very high and imprecise level, PernixData’s FVP came out last year to provide a caching layer (using flash storage, whether PCI-E or SSD) injected at the vmkernel level on VMware hypervisors. One big development this week was the option to use RAM in place of (or in addition to) flash as a caching layer, but this is unrelated to my thoughts below.

One odd question arose during our conversation with Satyam Vaghani, CTO and co-founder of PernixData. Justin Warren, another delegate, asked the seemingly simple question of whether you could use external flash as cache for a cluster (or clusters) using PernixData’s FVP. Satyam’s answer was a somewhat surprising “yes.”

I thought (once Justin mentioned it) that this was an obvious idea, albeit a somewhat niche one. Having worked to get scheduled downtime for a hundred servers on several occasions in the past year, I could imagine why I might not want to (or be able to) shut down 100 hypervisor blades to install flash into them. Instead, I could put a pile of flash into one or more centrally accessible, high-speed, relatively low-latency (compared to spinning disk) hosts, or perhaps bring in something like Fusion-io’s Ion Accelerator platform.

I took a bit of ribbing from a couple of other delegates, who didn’t see any situation where this would be useful. You always have plenty of extra spare hypervisor capacity, and flash that can go into those servers, and time and human resources to handle the upgrades, right? If so, I mildly envy you.

So what’s this about Invicta?

Cisco’s UCS Invicta platform (the evolution of WHIPTAIL) is a flash block storage platform based on a Cisco UCS C240-M3S rackmount server with 24 consumer-grade MLC SSDs. Today its official placement is as a standalone device, managed by Cisco UCS Director, serving FC to UCS servers. The party line is that using it with any other platform or infrastructure is off-label.

I’ve watched a couple of presentations on the Invicta play, and it hasn’t yet been clear how Cisco sees it playing against similar products in the market (i.e. Fusion-io’s Ion Accelerator). When I asked on a couple of occasions in public presentations, the comparison was reduced to Fusion-io ioScale/ioDrive PCIe cards, which is neither a fair nor an applicable comparison; you wouldn’t compare Coho Data arrays to single-SSD enclosures. So for a month or so I’ve been stuck with this logical progression:

  1. Flash is fast
  2. ???
  3. Buy UCS and Invicta

Last month, word came out that Cisco was selling Invicta arrays against Pure Storage and EMC XtremIO, for heterogeneous environments, which also seems similar to the market for Ion Accelerator. Maybe I called it in the air. Who knows? The platform finally made sense in the present though.

Two great tastes that taste great together?

Wednesday afternoon I started putting the pieces together. Today you can serve up an Invicta appliance as block storage, and probably (I haven’t validated this) access it from a host or hosts running PernixData’s FVP. You’re either dealing with FC or possibly iSCSI. It will serve as well as the competing flash appliances.

But when Cisco gets Invicta integrated into the UCS infrastructure, hopefully with native support for iSCSI and FCoE traffic, you’ll be talking about 10 gigabit connections within the Fabric Interconnect for cache access. You’ll be benefiting from the built-in redundancy, virtual interface mapping and pinning, and control from UCS Manager/UCS Central. You’re keeping your cache within a rack or pod. And if you need to expand the cache you won’t need to open up any of your servers or take them down. You’d be able to put another Invicta system in, map it in, and use it just as the first one is being used.

If you’re not in a Cisco UCS environment, it looks like you could still use Invicta arrays, or Fusion-io, or other pure flash players (even something like a whitebox or channel partner Nexenta array, at least for proof-of-concept).

So where do we go from here?

The pure UCS integration for Invicta is obviously on the long-term roadmap, and hopefully the business units involved see the benefits of true integration at the FI level and move that forward soon.

I’m hoping to get my hands on a trial of FVP, one way or another, and possibly build a small flash appliance in my lab as well as putting some SSDs in my C6100 hypervisor boxes.

It would be interesting to compare the benefits of internal vs. external flash integration over a conventional 10GbE (non-converged) network. This could provide some insight into a mid-market bolt-on solution, and give some further enlightenment on when and why you might take this option over internal flash. I know that I won’t be able to put a PCIe flash card into my C6100s unless I give up 10GbE (one PCIe slot per server, darn), although with FVP’s newly announced network compression, that might be viable.
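Some back-of-envelope arithmetic suggests why the external option can still be worthwhile. The latencies below are assumed round numbers for illustration, not measurements of any product:

```python
# Illustrative model (assumed numbers): what does a network hop cost an
# external flash cache relative to internal flash and spinning disk?

def effective_read_us(media_us, network_rtt_us=0):
    """Total read latency: media service time plus any network round trip."""
    return media_us + network_rtt_us

internal_flash = effective_read_us(media_us=100)             # local PCIe/SSD read
external_flash = effective_read_us(media_us=100,
                                   network_rtt_us=50)        # + one 10GbE hop
spinning_disk = effective_read_us(media_us=5000)             # ~5 ms seek + rotate

print(internal_flash, external_flash, spinning_disk)
```

Under these assumptions the network hop adds real overhead, but external flash still lands more than an order of magnitude ahead of spinning disk, which is why the niche can make sense when you can’t open up a hundred servers.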

What are your thoughts on external server-side cache? Do you think something like this would be useful in an environment you’ve worked with? Feel free to chime in on the comments section below.

This is a post related to Storage Field Day 5, the independent influencer event being held in Silicon Valley April 23-25, 2014. As a delegate to SFD5, I am chosen by the Tech Field Day community and my travel and expenses are covered by Gestalt IT. I am not required to write about any sponsoring vendor, nor is my content reviewed. No compensation has been or will be received for this or any other Tech Field Day post. I am a Cisco Champion, but all Cisco information in this post is public knowledge and was received in public channels.

Taking POHO to Interop 2014 – Three Roads To Take

I’m looking forward to returning to Interop Las Vegas in under two weeks. Where has the winter gone? I know, I’m in Northern California, I can’t complain much about the weather.

interop-2014-banner

Click above for conference details, or visit this link for a free expo and keynote pass.

There are three aspects of Interop that I’m looking forward to.

First, I’m looking forward to meeting some Twitterverse friends, and maybe a Twitter-averse friend or two, as well as contacts I’ve made at my conferences last year. I will be dropping in on the Interop HQ and Social Media Command Center to see how the UBM team handles social media on-site. As my friends at @CiscoLive and VMworld know, I find the social media aspect of a conference to be as important as the formal content. Networking and getting advice and answers as you go makes the event more efficient and useful, and it’s always good to say hi to the folks who make it happen. I also hear there are collectible pins, and those of you who know where I work know we’re known for our pins, among other things.

Watch the hashtags #Interop and #CloudConnect and follow @interop for the latest news from the events.

Second, I’ll be trying to take a bootcamp or two at the Cloud Connect Summit and come up to speed on some technologies that are newish to me. There’s an AWS Boot Camp presented by Bernard Golden (alas, it’s not hands-on, so I’m not sure I’d call it a boot camp), and an OpenStack Boot Camp that looks promising as well. These may end up just being focus opportunities, or I may change my plans, but they look interesting. And as a guy who’s mostly running bare metal big data on a daily basis, it’ll be good to get some exposure to the virtual side of things outside of VMware.

Third, while I’m attending with my press hat and not my mouse ears, I do work in a sizable technology environment, so I’ll be checking out some larger technology options that may not find their way into my lab but may find their way into my day job.

Highlights in the enterprise space for me (alphabetically): Arista Networks, Cisco, Juniper Networks.

Fourth, I’ll be joining the Tech Field Day Roundtables again this year. HP Networking will be presenting at this event, and they tie in with POHO below as well. Also presenting will be a company rather dear to my heart in a strange way: Avaya. At the turn of the century, I worked for the Ethernet Products Group (or whatever we were called that quarter) at Nortel Networks, and my team’s flagship product was the Nortel Passport 8600 routing switch. Imagine my surprise when I ran across a slightly different color of 8600 (with much newer line cards) at the Interop network last year, now known as the Avaya Ethernet Routing Switch 8600. A couple of my Rapid City/Bay Networks/Nortel Networks coworkers are still at Avaya, or were until fairly recently… so it’s sort of a family thing for me.

If you can’t make it to the roundtables, we usually live-stream the presentations, or have them posted afterward, at TechFieldDay.com. Check it out and track #RILV14 and #TechFieldDay on Twitter for the latest news.

And last, but not least… there’s POHO. The Psycho Overkill Home Office, a gateway to big business functionality on a small business budget, is a topic near and dear to my blog, my budget, and my two home labs. I will be stopping by to speak with several vendors at Interop whose products intersect with the burgeoning (and occasionally bludgeoning) home lab market and the smaller side of the SMB world (I’m taking to calling it the one-comma-budget side of SMB).

Some of the POHO highlights that I’m seeing so far (in alphabetic order) include Chenbro Micom, Cradlepoint, Linksys (now part of Belkin), Memphis Electronic (think 16GB SODIMMs), Monoprice, Opengear, Shuttle Computer Group, Synology, and Xi3.

There are a lot of other names on the exhibitor list who will appeal to anyone, and if you’re going to be there with an exhibitor who you think would be of interest to my POHO audience, feel free to get in touch (I’m on the media list, or contact me through this blog).

And if you noticed that I went down five roads instead of three, give yourself a pat on the back. I should’ve seen that coming.