These 3 hot new trends in storage will blow your mind! Okay, maybe not quite. (2/2)

I’ve attended a couple of Tech Field Day events, and watched/participated remotely (in both senses of the word) in a few more, and each event seems to embody themes and trends in the field covered. Storage Field Day 5 was no exception.

I found a number of undercurrents in this event’s presentations, and three of them are worth calling out, both to thank the vendors who are following them and to hint to the next generation of product startups that they should keep them in mind.

This post is the second in a two-part series, for your manageable reading pleasure. The first post is here.

Be sure to check out the full event page, with links to presenters and videos of their presentations, at http://techfieldday.com/event/sfd5/

3. The Progressive Effect: Naming Names Is Great, Calling Names Not So Much

Back at the turn of the century, it was common for vendors to focus on their competition in an unhealthy way. As an example, Auspex (remember them?) told me that their competitor’s Gigabit Ethernet offering was superfluous, and that the competitor would be out of business within months. I’ll go out on a limb and say this was a stupid thing to tell a company whose product was a wire-speed Gigabit Ethernet routing switch, and, well, you saw how quickly NetApp went out of business, right?

At Storage Field Day 5, a couple of vendors presented competitive/comparative analysis of their market segment. This showed strong awareness of the technology they were touting, an understanding of the choices and tradeoffs that have to be made, and a sense of why each vendor may have made the choices they did.

Beyond that, this kind of analysis can acknowledge the best use for each product, even if it’s the competition’s product. I’ll call this the Progressive Effect, after the insurance company that shows you competitors’ pricing even when it’s a better deal. If you think your product is perfect for every customer use case, you don’t know your product or your customers very well.

Once again, Diablo Technologies did a comparison specifically naming the obvious competitor (Fusion-io), and it was clear that this was a forward-looking comparison: you can order a hundred Fusion-io cards today and put them into current industry standard servers, whereas ULLtraDIMMs won’t work in most of the servers in your datacenter just yet. But these are products that are likely to be compared in the foreseeable future, so the comparison provided useful context, and use cases for both platforms were called out.

SolidFire’s CEO Dave Wright really rocked this topic, though, tearing apart (in more of an iFixit manner than an Auspex manner) three hyperconverged solutions, including his own, showing the details, the decisions, and where each one makes sense. I suspect most storage company CEOs wouldn’t get into that deep a dive on their own product, much less the competition’s, so it was an impressive session worth checking out if you haven’t already.

There were some rumblings in the Twittersphere about how naming your competitors, rather than hiding them behind “Competitor A” or the like, amounted to invoking fear, uncertainty, and doubt (FUD). And while it is a conservative, and acceptable, option not to name a competitor if you have a lot of them (Veeam chose this path in their comparisons, for example), that doesn’t mean it’s automatically deceptive to give a fair and informed comparison within your competitive market.

If Dave Wright had gone in front of the delegates and told us how bad all the competitors were and why they couldn’t do anything right, we probably would’ve caught up on our email backlogs faster, or asked him to change horses even in mid-stream. If he had dodged or danced around questions about his own company’s platform, some (most?) of us would have been disappointed. Luckily, neither of those happened.

But as it stands, he dug into the tech in an even-handed way, definitely adding value to the presentation and giving some insights that not all of us would have had beforehand. In fact, more than one delegate felt that SolidFire’s comparison gave us the best available information on one particular competitor’s product in that space.

This is a post related to Storage Field Day 5, the independent influencer event being held in Silicon Valley April 23-25, 2014. As a delegate to SFD5, I was chosen by the Tech Field Day community, and my travel and expenses are covered by Gestalt IT. I am not required to write about any sponsoring vendor, nor is my content reviewed. No compensation has been or will be received for this or any other Tech Field Day post.

FirmwareGate and FCoEgate two months later

I was surprised last week at Interop to hear people still talking about both FCoEgate and HP FirmwareGate. It seems that in the absence of any clarity or resolution, both still bother many in the industry.

For those of you who missed the early February drama (and my relevant blog post):

FCoEgate

FCoEgate: An analyst firm called the Evaluator Group released a “seriously flawed” competitive comparison between an HP/Brocade/FC environment and a Cisco/FCoE environment. Several technical inquiries were answered with confusing responses suggesting that the testers didn’t really know what they were doing.

Several people I talked to at Interop mentioned that this was a perfectly understandable mistake for a newbie analyst, but experienced analysts should have known better. Brocade should have known better as well, but I believe they still stand by the story.

The take-home from this effort is that if you don’t know how to configure a product or technology, and you don’t know how it works, it may not perform optimally in comparison to the one you’re being paid to show off.

This one doesn’t affect me as much personally, but I’ll note that there doesn’t seem to have been a clear resolution of the flaws in this report. Brocade has no reason to pay the Evaluator Group to redo the comparison properly, and technologists worth their salt will see through the flawed version anyway (as many have). So we have to count on that latter part.

FirmwareGate

FirmwareGate: HP’s server division announced that, for the good of their “Customers For Life,” they would stop making server firmware freely available unless it was a “safety and security” update. How can you tell if an update counts as “safety and security”? Try to download it.

HP claimed repeatedly that this brings them in line with “industry best practices,” thus defining their “industry” as consisting exclusively of HP and Oracle. I don’t know any working technologists who would go along with that definition.

HP promised clarification on this, and defended their policy change by declaring industry standard x86/x64 servers to be equivalent to commercial operating system releases and Cisco routers.

They even had a conversation with my friend John Obeto, wherein they convinced him that nothing had changed. Ah, if only this were true. (It isn’t.)

But I had fleeting faith that maybe they’d fixed the problem. So I went to get the firmware update for a nearly two-year-old MicroServer N40L, which had a critical firmware bug keeping it from installing a couple of current OSes. It turns out the fix isn’t classified as “safety and security,” and my system apparently came with only a one-year warranty.

So if I want to run a current Windows OS, I either have to spend more on a support contract than I did on the server (if I can even still find one), or go with aftermarket, third-party, reverse-engineered firmware (which, unlike HP’s offerings, actually enhances functionality and adds value).

Or I can go with the option that I suspect I and many other hobbyists, home lab users, influencers, and recommenders will choose: simply purchase servers from companies that respect their customers.

What should HP be doing instead?

The “industry best practices” HP should be subscribing to include open access to industry standard server firmware that fixes bugs they shipped, not just vaguely defined “safety and security” updates, which is what every other industry standard server vendor except Oracle provides. That includes Dell, Cisco, Supermicro, Fujitsu, NEC, Lenovo/IBM, and probably a number of other smaller players.

As my friend Howard Marks noted, some of us would be satisfied with a software-only or firmware-only support contract. On-site hardware maintenance isn’t necessary, or even affordable, for many of us. Those of us who buy used servers would often be better off buying an extra server for parts, and most of us know how to replace a part or swap out a server, some of us even better than the vendor’s field engineers.

HP has been silent on this matter for over a month now, as far as I can tell. The “Master Technologists” from HP who won’t distinguish an MDS router from an x86 server have gone silent. And I suspect many of the “customers for life” whom the 30-year HP veteran graciously invites to keep buying support contracts will start looking around, unless there’s a critical feature in HP servers that they can’t live without.

So where do we go from here?

I can no longer advocate HP servers for people with budgets containing fewer than two commas, and even for those I’d suggest thinking about what’s next. There are comparable or better options out there from Dell, Cisco, Supermicro, Fujitsu, NEC, Lenovo, and, for the smaller lab form factors, Intel, Gigabyte, Shuttle, and others. (It’s also worth noting that most of those provide fully functional remote management without an extra license cost.)

If you do want to go with HP, or if you can’t replace your current homelab investment, there are ways to find firmware out there (as there have been in the past for Sun^wOracle Solaris). It took me about 15 minutes to find the newly locked-down MicroServer firmware, for example. It didn’t even require a torrent. I can’t advocate that path, as there may be legal, ethical, and safety concerns, but it might be better than going without, at least until you can replace your servers.

And I’ve replaced most of my HP servers in the lab with Dell servers. One more to go. If anyone wants to buy a couple of orphaned DL servers in Silicon Valley (maybe for parts), contact me.

If anyone else has seen any clarity or correction in the state of FCoEgate or FirmwareGate in the last month or so, let me know in the comments. I’d love to be wrong.

A Context For Cloud From Within And Without

Cloud Connect Summit is co-located with Interop this week in Las Vegas, Nevada. This is part of a series of highlights from my experience here. Disclaimers where applicable will follow the commentary. Check interop.com for presentation materials if available.

Update: Adrian Cockcroft’s slides are available at Powered by Battery.

I usually don’t give a lot of focus to keynotes, because I have conference-strophobia or something like that. A room with thousands of people in it is rather uncomfortable for me. And so are buzzwords.

However, Cloud Connect opened with one speaker I know and have spoken with before, another whose business I am familiar with, and a third guy whom I didn’t know, but had to assume either did something wrong at a past conference or is on par with the first two speakers.

Adrian Cockcroft probably needs no introduction.

Mark Thiele is a well-known figure in the datacenter and colocation world. He is currently executive VP and evangelist of datacenter technology for Switch, known for their SUPERNAP datacenters here in Las Vegas and elsewhere.

And the poor guy who got stuck between them… Chris Wolf is CTO Americas of VMware.

Okay, maybe Adrian deserves an introduction

It’s no surprise that Adrian Cockcroft focused on implementing and migrating to cloud. If you’ve seen him speak in the past 4 years it’s probably been about what Netflix was going to do, was doing, or has already done in their migration to an entirely-off-premises cloud-based solution (AWS). He’s now in the venture capital world with Battery Ventures, guiding other companies to do things similar to what Netflix did.

I first met Adrian at a BayLISA meeting in 2009. I’d been a fan since his Sun days; he wrote *the* Sun Performance and Tuning book in the ’90s, and you would be hard pressed to find a Solaris admin who hadn’t read it, along with Brian Wong’s Configuration and Capacity Planning book. In 2009, he talked about dynamically spinning up and down AWS instances for testing and scaling; it was an uncommon idea at the time, but nowadays few would imagine an environment that didn’t work that way (other than storage-heavy/archival environments). I had a long ad-hoc chat with him at the last free DevOps Days event in Sunnyvale, where he predicted the SSD offerings for AWS a couple of months before they happened.

As most of my readers already know, Netflix has had to build their own tools to handle, manage, and test their cloud infrastructure. With a goal of having no dependencies on any given host, service, availability zone, or (someday) provider, you have to think about things differently, and vendor-specific tools and generic open source products don’t always fit. The result is generally known as NetflixOSS, and it is available on GitHub and the usual places.

When Adrian asked who in the room was using Netflix’s OSS offerings, somewhere between a third and half of the attendees raised their hands. Fairly impressive for a movement that just four years ago brought responses of “there’s no way that could work, you’ll be back in datacenters in months.”

One key point he made was that if you’re deploying into a cloud environment, you want to be a small fish in a big pond, not a shark in a small pond. Netflix had to cope with the issues of being that shark for some time; if you are the largest user of a product you will likely have a higher number of issues that aren’t “oh we fixed that last year” but more “oh, that shouldn’t have happened.” Smaller fish tend to get the benefits of collective experience without having to be guinea pigs as much.

I’ve felt the pain of this in a couple of environments in the past, and I’m not even all that much of a bleeding edge implementer. It’s just that when you do something bigger than most people, the odds of adventure are in your favor.

The Good, The Bad, and The Ugly

The talk was called “The Good, The Bad, and The Ugly,” taking into consideration the big cloud announcements from Amazon’s AWS and Google Cloud Platform. There is plenty of coverage of these announcements elsewhere (I’ll link as I find other coverage of Monday’s comparison), but in short, there are improvements, glaring omissions, and a substantial lack of interoperability/exchange standards.

One item from the GB&U talk that I will call out is Microsoft Azure, which has graduated from “Other” to its own slide.

Azure’s greatest strength and greatest weakness is that it focuses almost entirely on the Windows platforms. Most companies, however, are apparently not moving *to* Windows but away from it, if they are making a substantial migration at all. Linux dominates large-scale virtual hosting, and to be a universal provider, an IaaS/PaaS platform has to handle the majority platform as well as the #2 platform.

The unicorn in the cloud room is likely to be interchangeability between cloud providers. There are solutions for resilience within Amazon or within Google platforms, but it’s not so easy to run workloads across providers without some major bandaids and crutches. So far.

Time for Q&A: SLAs and where Cloud still doesn’t fit

Two questions were presented in this section of the opening keynote.

The first question was about service level agreements (SLAs). SLAs are a tradition in hosted services, server platforms, network providers, and the like, but you don’t see them offered on cloud platforms very often. You might think there were guarantees, given the ruckus raised by single-availability-zone site owners during AWS outages over the past two or three years, but the key to making AWS (or any other platform) work is pretty much what Netflix has spent the last few years doing: making the service work around any outage it can.

This isn’t easy, or it would’ve been done years ago and we wouldn’t be talking about it. And my interpretation of Adrian’s response is that we shouldn’t expect to see cloud SLAs anytime soon. He noted that the underlying hardware is no less reliable than the servers you buy for your physical datacenter. And if you’re doing it right, you can lose servers, networks, entire time zones… and other than some degradation and loss of redundancy, your customers won’t notice.
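
To make that “work around any outage” point a bit more concrete, here is a minimal sketch of the design-for-failure pattern in Python. It is emphatically not Netflix’s actual tooling; the endpoint URLs and timeout values are hypothetical placeholders.

```python
# A minimal sketch (not Netflix's actual tooling) of the design-for-failure idea:
# try each regional endpoint in turn and degrade gracefully instead of hard-failing.
# The endpoint URLs and timeout below are hypothetical placeholders.

import urllib.error
import urllib.request

REGIONAL_ENDPOINTS = [
    "https://api.us-east.example.com/health",  # hypothetical primary region
    "https://api.us-west.example.com/health",  # hypothetical fallback region
    "https://api.eu-west.example.com/health",  # hypothetical last resort
]


def fetch_with_failover(endpoints, timeout=2.0):
    """Return the first successful response body, falling back across regions."""
    last_error = None
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # note the failure and try the next region
    raise RuntimeError(f"all endpoints failed; last error: {last_error}")


if __name__ == "__main__":
    try:
        print(fetch_with_failover(REGIONAL_ENDPOINTS))
    except RuntimeError as err:
        print(f"degraded service: {err}")
```

The same idea scales up from a single HTTP call to whole services and availability zones: keep a ranked list of places the work can run, expect any of them to fail, and make falling back the normal path rather than the exception.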

The second question was heralded, I believe, by Bernard Golden of enStratius Networks, roughly as follows:

I’ve taken to asking companies and tech advocates where their solutions don’t fit… because there is no universal business adapter (virtual or otherwise), and it’s important to have a sense of context and proportion when considering anything technological. If someone says their product fits everywhere, they don’t know their product or their environment (or both).

Adrian called out two cases where you may not be able to move to a public cloud: capacity/scale and compliance-sensitive environments.

Capacity and scale go back to the shark-in-a-small-pond conundrum. Companies on the scale of Google and Facebook don’t have the option to outsource a lot of their services, as there aren’t any providers able to handle that volume. But even a smaller company might find it impractical to move their data and processing environment outside their datacenter, depending on the amount and persistence of storage, along with other factors. If you’ve ever tried to move several petabytes even between datacenters, you’ll know the pain that arises in this situation (time, technological complexity, cost, or all three).
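
As a rough back-of-the-envelope illustration: one petabyte is 8 × 10^15 bits, so even a dedicated 10 Gbps link running at full line rate needs about 800,000 seconds, a little over nine days, to move it. Several petabytes at realistic link utilization quickly stretch into months, before you even account for verification and cutover.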

Compliance issues are a bit easier to deal with; only slightly, mind you. As Adrian mentioned, the industry is having to train auditors and regulators to understand cloud contexts, and as that process continues, people will find it easier to meet regulatory requirements (whether PCI, HIPAA, SOX 404, or others) using current-decade technological constructs.

So where do we go from here?

My take: Cloud may be ubiquitous, but it’s not perfect (anyone who tells you otherwise is trying to sell you something you don’t need). As regulatory settings catch up to technology, and as cloud service providers realize there’s room for more than one in the market, we’ll hopefully see more interoperability, consistent features across providers, and a world where performance and service are the differentiating factors.

Also, there is still technological life outside the cloud. And once again, anyone who tells you otherwise is trying to sell you a left-handed laser spanner. For the foreseeable future, even the cloud runs on hardware, and some workloads and data pipelines still warrant an on-premises solution. You can (and should) still apply the magic wands of automation and instrumentation to physical environments.

Disclaimers:

I am attending Interop on a media/blogger pass, thanks to the support of UBM and Tech Field Day. Other than the complimentary media pass, I am attending at my own expense and under my own auspices. No consideration has been provided by any speakers, sponsors, or vendors in return for coverage.

ABOUT INTEROP®

Interop® is the leading independent technology conference and expo series designed to inform and inspire the world’s IT community. Part of UBM Tech’s family of global brands, Interop® drives the adoption of technology, providing knowledge and insight to help IT and corporate decision-makers achieve business success. Through in-depth educational programs, workshops, real-world demonstrations and live technology implementations in its unique InteropNet program, Interop provides the forum for the most powerful innovations and solutions the industry has to offer. Interop Las Vegas is the flagship event held each spring, with Interop New York held each fall, and annual international events in India, London and Tokyo, all produced by UBM Tech and partners. For more information about these events visit www.interop.com.

DON’T PANIC: The new era of Gallifrey One?

Since a number of my blog followers are Doctor Who fans who might be interested, I’m doing a rare cross-post from #gallyhelp.

The Secret Guide To #gally1 (formerly #gally)

tl;dr: Gallifrey One 2015 tickets are all sold out. Ticket transfers open in October, at face value. There is no shortage of hotel rooms. Gallifrey One will not be expanding. And what’s with the kidneys?


Today at 10am Pacific time, Gallifrey One sold 3200 tickets to the 2015 convention in seventy-five minutes.

Think about that; I’ll give you a moment.

DISCLAIMER: While you’re pondering, I’ll remind you that this is an unofficial site not affiliated with Gallifrey One, and I’m just a guy who’s been to six Gallifrey One conventions and likes to try to help folks who are attending or want to attend. 

That’s 7/10 of a ticket every second on average. Let’s round up. One ticket a second, for a show…


Khaaaaaaaaan! And Cisco Live Scheduler coming soon!

[Image: Khan (2259)]

I know, I’m sure he’s never heard that before…

For those of you coming to Cisco Live US in San Francisco this May, prepare to hear from Sal Khan in the guest keynote on Thursday morning, May 22.

Khan is the founder of Khan Academy, one of the earliest and best-known MOOC (massive open online course) environments… wait a sec. Who put that picture over there?

[Photo: Sal Khan, 21st Century]

Click click click. That’s better.

Sal Khan wrote “The One World Schoolhouse: Education Reimagined,” a book on the use of technology to improve education, based on his personal history and the development of Khan Academy.

His presence at Cisco Live should give us a different perspective on the real-world application of technology, and underscore the importance of bridging the technology gap around the world.

So where do we go from here?

Have you already registered for Cisco Live US in San Francisco? If so, as of this Thursday, February 27, you can go into the session scheduler and start signing up for sessions and blocking out time for the keynotes (including Sal Khan, and probably John Chambers as well). Lots of people have already gotten in and reported happiness over the March 1-2 weekend.

If you’re a NetVet, you got early access to the scheduling functions; if not, you have another two days of read-only access before they open it up to the masses. You can become a NetVet after you’ve attended three Cisco Live conferences on full passes (IT Management or Full Conference track) within five years, so that’s something to look forward to.

[Image: Cisco Live, 2014-02-27]

If you’re not registered yet, hey, what are you waiting for? There are several options for registration, including the full passes ($2095 through Friday, March 14; $2295 from then until onsite), which give you access to just about everything, depending on whether you’re more focused on the IT Management or the general Full Conference path.

But if you don’t have the money, the learning credits, or the corporate backing to cover a full pass, there’s still hope. Cisco Live offers a $49 “Explorer” pass, which gives you access to the World of Solutions vendor expo and the daily keynotes, as well as the Social Media Lounge (confirmed!) and the Cisco Live onsite store, which offers books, gadgets, and Cisco memorabilia. If you have $595 to spend, go for the Explorer+ pass, which gives you the Explorer benefits plus access to two technical sessions.

Update 2014/02/27: The “Social Event Pass” has been brought to my attention as a good option as well. For $195, you get the receptions and Customer Appreciation Event/party (unlike Explorer/Explorer+), as well as the benefits of the $49 Explorer pass. You don’t get the breakout sessions, but those end up online anyway.

Update 2014/03/03: @CiscoLive on Twitter has advised that the early registration period that originally ended February 28 has been extended until March 14.

The site will be updated soon, but you can get in and save $200 for the next almost two weeks!

Protip: Check with your manager or HR/benefits team to see if your company might sponsor your attendance. If not, consider checking with a tax adviser to see if professional development expenses might be tax-deductible in your circumstances. 

You can read about my path to Cisco Live US 2014 in an earlier blog post if you like. A few other Cisco Live attendees have blogged about this year’s event as well. And if you have questions, feel free to ask in the comments below.

And as a disclaimer, if you click on the Cisco Live links above, I get entered in a contest for a free lab or technical session at the event. Other than that, I get no compensation or consideration for this post beyond the warm fuzzies of supporting an event and team I like.