Cloud Connect Summit is co-located with Interop this week in Las Vegas, Nevada. This is part of a series of highlights from my experience here. Disclaimers where applicable will follow the commentary. Check interop.com for presentation materials if available.
Update: Adrian Cockcroft’s slides are available at Powered by Battery.
I usually don’t give a lot of focus to keynotes, because I have conference-strophobia or something like that. A room with thousands of people in it is rather uncomfortable for me. And so are buzzwords.
However, Cloud Connect opened with one speaker I know and have spoken with before, another whose business I am familiar with, and a third guy whom I didn’t know, but had to assume had either done something wrong at a past conference or was on par with the first two speakers.
Adrian Cockcroft probably needs no introduction.
Mark Thiele is a well-known figure in the datacenter and colocation world. He is currently executive VP and evangelist of datacenter technology for Switch, known for their SUPERNAP datacenters here in Las Vegas and elsewhere.
And the poor guy who got stuck between them… Chris Wolf is CTO Americas of VMware.
Okay, maybe Adrian deserves an introduction
It’s no surprise that Adrian Cockcroft focused on implementing and migrating to cloud. If you’ve seen him speak in the past four years, it’s probably been about what Netflix was going to do, was doing, or had already done in its migration to an entirely off-premises, cloud-based solution (AWS). He’s now in the venture capital world with Battery Ventures, guiding other companies to do things similar to what Netflix did.
I first met Adrian at a BayLISA meeting in 2009; I’d been a fan since his Sun days. As the author of *the* Sun Performance and Tuning book in the 90s, he was required reading: you would be hard pressed to find a Solaris admin who hadn’t read it, along with Brian Wong’s Configuration and Capacity Planning book. At that 2009 meeting, he talked about dynamically spinning up and down AWS instances for testing and scaling, an uncommon idea at the time; nowadays few would imagine an environment that didn’t work that way (other than storage-heavy/archival environments). I also had a long ad-hoc chat with him at the last free DevOps Days event in Sunnyvale, where he predicted the SSD offerings for AWS a couple of months before they appeared.
As most of my readers already know, Netflix has had to build its own tools to handle, manage, and test its cloud infrastructure. With a goal of depending on no single host, service, availability zone, or (someday) provider, you have to think about things differently, and vendor-specific tools and generic open source products don’t always fit. The result is generally known as NetflixOSS, and it’s available on GitHub and the usual places.
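To make that “no single dependency” goal concrete, here’s a minimal sketch in the spirit of Netflix’s Chaos Monkey: deliberately kill instances and see whether anyone notices. Everything here is hypothetical and illustrative; `FakeCloudClient` is a stand-in for a real provider SDK, and none of these names come from NetflixOSS itself.

```python
import random


class FakeCloudClient:
    """Hypothetical in-memory stand-in for a real provider SDK."""

    def __init__(self, groups):
        # groups: {"group name": ["instance-id", ...]}
        self._groups = {g: list(ids) for g, ids in groups.items()}

    def list_instances(self, group):
        return list(self._groups.get(group, []))

    def terminate(self, instance_id):
        for ids in self._groups.values():
            if instance_id in ids:
                ids.remove(instance_id)


def chaos_round(client, group):
    """Kill one random instance in a group, Chaos Monkey-style.

    If the service visibly degrades afterward, you've found a hidden
    dependency on a single host, which is exactly what this kind of
    tooling is meant to flush out before a real outage does.
    """
    instances = client.list_instances(group)
    if len(instances) < 2:
        return None  # never kill the last copy of anything
    victim = random.choice(instances)
    client.terminate(victim)
    return victim


client = FakeCloudClient({"api": ["i-01", "i-02", "i-03"]})
print(chaos_round(client, "api"))  # e.g. 'i-02'
```

The real tooling is far more elaborate (scheduling, opt-outs, zone- and region-level variants), but the principle is that simple: if you can’t survive a deliberately killed instance, you won’t survive an accidental one.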
When Adrian asked who in the room was using Netflix’s OSS offerings, somewhere between a third and half of the attendees raised their hands. Fairly impressive for a movement that just four years ago brought responses of “there’s no way that could work, you’ll be back in datacenters in months.”
One key point he made was that if you’re deploying into a cloud environment, you want to be a small fish in a big pond, not a shark in a small pond. Netflix had to cope with the issues of being that shark for some time; if you are the largest user of a product, you will likely hit more issues of the “oh, that shouldn’t have happened” variety than the “oh, we fixed that last year” kind. Smaller fish tend to get the benefits of collective experience without having to be guinea pigs as much.
I’ve felt the pain of this in a couple of environments in the past, and I’m not even all that much of a bleeding-edge implementer. It’s just that when you do something at a larger scale than most, the odds of adventure are in your favor.
The Good, The Bad, and The Ugly
The talk was called “The Good, The Bad, and The Ugly,” taking into consideration the big cloud announcements from Amazon’s AWS and Google Cloud Platform. There is plenty of coverage of these announcements elsewhere (I’ll link as I find other coverage of Monday’s comparison), but in short, there are improvements, glaring omissions, and a substantial lack of interoperability/exchange standards.
One item from the GB&U talk that I will call out is Microsoft Azure, which has graduated from “Other” to its own slide.
Azure’s greatest strength and greatest weakness is that it focuses almost entirely on the Windows platform. Most companies, however, are apparently not moving *to* Windows but away from it, if they are making a substantial migration at all. Linux is the law of the land in large-scale virtual hosting, and to be a universal provider, an IaaS/PaaS platform has to handle the majority platform as well as the #2 platform.
The unicorn in the cloud room is likely to be interchangeability between cloud providers. There are solutions for resilience within Amazon or within Google platforms, but it’s not so easy to run workloads across providers without some major bandaids and crutches. So far.
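For illustration, here’s a rough sketch of the least-common-denominator adapter layer that projects like Apache Libcloud and jclouds aim to provide. All class and method names below are hypothetical, not from any real library, and the sketch also shows why portability stays hard: only the intersection of provider feature sets fits behind such an interface.

```python
from abc import ABC, abstractmethod


class ComputeProvider(ABC):
    """Hypothetical provider-agnostic compute interface.

    Only features every provider shares can live here; anything
    provider-specific (spot pricing, proprietary load balancers,
    custom networking) falls outside it, which is where cross-provider
    portability breaks down.
    """

    @abstractmethod
    def launch(self, image, size):
        """Start an instance; return its provider-assigned ID."""

    @abstractmethod
    def destroy(self, instance_id):
        """Terminate an instance by ID."""


def move_workload(workload, source, destination):
    """Naive lift-and-shift between two ComputeProvider backends.

    Real migrations also have to move state (storage, DNS, queues,
    monitoring), which is where the bandaids and crutches come in.
    """
    new_id = destination.launch(workload["image"], workload["size"])
    source.destroy(workload["instance_id"])
    return new_id
```

The compute intersection is the easy part; it’s the stateful services around it that keep the unicorn in its stall.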
Time for Q&A: SLAs and where Cloud still doesn’t fit
Two questions were presented in this section of the opening keynote.
The first question was about service level agreements (SLAs). SLAs are a tradition in hosted services, server platforms, network providers, and the like, but you don’t often see them offered on cloud platforms. You might think there were guarantees, judging by the ruckus raised by single-availability-zone site owners during AWS outages over the past 2-3 years, but the key to making AWS (or any other platform) work is pretty much what Netflix has spent the last few years doing: making the service work around any outage it can.
This isn’t easy, or it would’ve been done years ago and we wouldn’t be talking about it. And my interpretation of Adrian’s response is that we shouldn’t expect to see them anytime soon. He noted that the underlying hardware is no less reliable than the servers you buy for your physical datacenter. And if you’re doing it right, you can lose servers, networks, entire time zones… and other than some degradation and loss of redundancy, your customers won’t notice.
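As a toy illustration of “working around any outage,” here’s a standard-library-only Python sketch that fails over across zones. The endpoints are made up; a real deployment would discover them through something like NetflixOSS’s Eureka registry rather than a hardcoded list.

```python
import urllib.request

# Hypothetical endpoints for the same service in three zones.
ENDPOINTS = [
    "http://svc.us-east-1a.example.com/health",
    "http://svc.us-east-1b.example.com/health",
    "http://svc.us-west-2a.example.com/health",
]


def fetch_with_failover(endpoints, timeout=2):
    """Try each zone in turn; a dead zone just adds latency.

    From the customer's point of view, losing a zone shows up as
    slight degradation rather than an outage, so there's less need
    for an SLA on any individual server underneath.
    """
    last_error = None
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError as err:  # timeouts, refused connections, DNS
            last_error = err
    raise RuntimeError("all zones failed") from last_error
```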
The second question was heralded by Bernard Golden of enStratius Networks thus, I believe:
I’ve taken to asking companies and tech advocates where their solutions don’t fit… because there is no universal business adapter (virtual or otherwise), and it’s important to have a sense of context and proportion when considering anything technological. If someone says their product fits everywhere, they don’t know their product or their environment (or both).
Adrian called out two cases where you may not be able to move to a public cloud: capacity/scale, and compliance-sensitive environments.
Capacity and scale go back to the shark-in-a-small-pond conundrum. Companies on the scale of Google and Facebook don’t have the option to outsource a lot of their services, as there aren’t any providers able to handle that volume. But even a smaller company might find it impractical to move its data and processing environment outside its datacenter, depending on the amount and persistence of storage, along with other factors. If you’ve ever tried to move several petabytes even between datacenters, you’ll know the pain that arises: in time, technological complexity, cost, or all three.
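Some rough back-of-the-envelope numbers on the time side of that pain, assuming a dedicated 10 Gbps link at 80% sustained utilization (both assumptions mine, and generous ones):

```python
def transfer_days(petabytes, gbps, efficiency=0.8):
    """Idealized bulk-transfer time: ignores protocol overhead,
    retransmits, and everything else that makes real life worse."""
    bits = petabytes * 8 * 10**15                 # decimal PB -> bits
    seconds = bits / (gbps * 10**9 * efficiency)  # usable line rate
    return seconds / 86400

print(round(transfer_days(5, 10), 1))  # ~57.9 days for 5 PB at 10 Gbps
```

Nearly two months of sustained transfer for 5 PB, which is why “ship the disks” remains a serious answer at that scale.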
Compliance issues are a bit easier to deal with (only slightly, mind you). As Adrian mentioned, cloud providers are having to train auditors and regulators to understand cloud contexts, and as that process continues, people will find it easier to meet regulatory requirements (whether PCI, HIPAA, SOX 404, or others) using current-decade technological constructs.
So where do we go from here?
My take: Cloud may be ubiquitous, but it’s not perfect (anyone who tells you otherwise is trying to sell you something you don’t need). As regulatory settings catch up to technology, and as cloud service providers realize there’s room for more than one in the market, we’ll hopefully see more interoperability, consistent features across providers, and a world where performance and service are the differentiating factors.
Also, there is still technological life outside the cloud. And once again, anyone who tells you otherwise is trying to sell you a left-handed laser spanner. For the foreseeable future, even the cloud runs on hardware, and some workloads and data pipelines still warrant an on-premises solution. You can (and should) still apply the magic wands of automation and instrumentation to physical environments.
I am attending Interop on a media/blogger pass, thanks to the support of UBM and Tech Field Day. Other than the complimentary media pass, I am attending at my own expense and under my own auspices. No consideration has been provided by any speakers, sponsors, or vendors in return for coverage.
Interop® is the leading independent technology conference and expo series designed to inform and inspire the world’s IT community. Part of UBM Tech’s family of global brands, Interop® drives the adoption of technology, providing knowledge and insight to help IT and corporate decision-makers achieve business success. Through in-depth educational programs, workshops, real-world demonstrations and live technology implementations in its unique InteropNet program, Interop provides the forum for the most powerful innovations and solutions the industry has to offer. Interop Las Vegas is the flagship event held each spring, with Interop New York held each fall and annual international events in India, London and Tokyo, all produced by UBM Tech and partners. For more information about these events visit www.interop.com.