Portable power for your mobile devices, and more to come, from #rsts11travel soon!

As you’re heading into the weekend, you may be leaving home for a day or two, or thinking ahead to upcoming travel and remembering a dead phone or tablet that dented your day on a past trip.

People have the power!

Our rsts11travel blog has two posts you may want to check out to prepare for any of the above.

Part 1, the cable edition, helps you upgrade your charging adapter and cable collection to handle modern devices.

Part 2, the battery edition, helps you get away from the wall outlet, with battery packs that may get you seven or more full charges on your phone, or three full charges on your tablet.
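If you want to sanity-check claims like that yourself, the back-of-the-envelope math is simple. The pack capacity, device battery sizes, and conversion efficiency below are illustrative assumptions, not specs from the posts:

```python
# Back-of-the-envelope estimate of full charges from a battery pack.
# All figures are illustrative assumptions, not measured specs.
pack_mah = 26800      # a common high-capacity pack rating
efficiency = 0.80     # typical loss to voltage conversion and heat
phone_mah = 2700      # a typical phone battery
tablet_mah = 7300     # a typical tablet battery

usable_mah = pack_mah * efficiency
print(f"Phone charges:  {usable_mah / phone_mah:.1f}")   # ~7.9
print(f"Tablet charges: {usable_mah / tablet_mah:.1f}")  # ~2.9
```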

We have product recommendations for various categories, based on what we’ve bought and carried with us to road shows, conferences like Interop, Cisco Live, and Strata+Hadoop World, and vacation getaways. Depending on your shoulders, you might even choose some of these for everyday carry. We do.

So where do we go from here?

Coming up in the next two weeks, probably sooner, will be a two-parter on mobile Internet connection handling, with the starter part (Hootoo, Ravpower, and more) on rsts11travel and the advanced part (Cradlepoint, Meraki, and more) here on rsts11.

Probably a couple of weeks past that, the travel side will have a hands-on review of the Invizbox Go travel VPN/Tor router, and over here we’ll try an interesting method for connecting your Opengear Resilience Gateway. In the meantime, check out our friend John Herbert’s write-up on Opengear’s Remote Site Gateway (ACM7004-5).

Have a safe weekend, and we’ll see you on rsts11 and rsts11travel again soon.

Rolling Your Own NBase-T Network – NBase-T and the Modern Office part 2

We’re continuing our look at NBase-T after several visits with the NBase-T Alliance and some of their partners at Interop over the past couple of years. This post focuses on going from theory to practice: what’s on the shelf today for the SOHO/POHO deployers in the audience.

If you haven’t read the first part of this article, check out When speeds and feeds really matter from last week.

What is NBase-T and Why Do I Care?

As a quick refresher, NBase-T is a technology standard that allows a range of connectivity rates from 100 megabits to 10 gigabits, specifically introducing support for 2.5 and 5.0 gigabit rates, over Cat5e or Cat6 cabling. With this technology, most enterprises can grow beyond Gigabit Ethernet at typical building cable run distances without upgrading to Cat6A.
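To put rough numbers on that tradeoff, here’s a quick illustrative sketch. The distance figures are the commonly cited limits for each cable category, and the helper function is ours, not part of any NBase-T tooling:

```python
# Rough best-case BASE-T link rate for a given cable plant and run length.
# Distance figures are the commonly cited limits (e.g., 10GBASE-T reaches
# about 55 m on Cat6 but 100 m on Cat6A); this helper is illustrative only.
def max_rate_gbps(cable: str, run_m: int) -> float:
    """Return an approximate best-case link rate in Gb/s."""
    if run_m > 100:
        return 0.0  # past the 100 m structured-cabling limit
    if cable == "cat6a":
        return 10.0
    if cable == "cat6":
        return 10.0 if run_m <= 55 else 5.0
    if cable == "cat5e":
        return 2.5  # 2.5GBASE-T is specified for Cat5e at full 100 m runs
    return 1.0      # unknown plant: assume plain Gigabit Ethernet

print(max_rate_gbps("cat5e", 90))  # 2.5 -- beyond gigabit with no recabling
print(max_rate_gbps("cat6", 90))   # 5.0
```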

What’s on the market today for NBase-T?

As we discussed last time, it’s taken a while for NBase-T gear to become generally available. Wave 2 Wireless-AC has made it more of a need in many environments, but it wasn’t until mid-2016 that you could find a selection of system-level products to fulfill your 2.5/5 gigabit needs.

When speeds and feeds really matter – NBase-T and the Modern Office part 1

Welcome back to rsts11. With the conference season on pause for a bit, we’ll be catching up on some coverage from last fall. Look for fresh homelab posts, a couple of device reviews, and more. The who-I-work-for disclosure is at the end of the post. The second part of this topic is at Rolling Your Own NBase-T Network.

What is NBase-T and Why Do I Care?

Before I get into my story, let’s cover a couple of the basics.

NBase-T is a technology standard that allows faster-than-gigabit, but not necessarily 10-gigabit, connectivity over Cat5e or Cat6 cabling. The NBase-T Alliance website says “close to 100%” of enterprises run Cat5e or Cat6 as their cabling plant, so with this technology, most enterprises can grow beyond Gigabit Ethernet at typical building cable run distances without upgrading to Cat6A.

What planet are we on? (The Third) — the RSTS11 Interop preview

Greetings from Fabulous Las Vegas, Nevada. For the third year, with apologies to Men Without Hats, I’m back in the Mandalay Bay Convention Center for Interop. 

This week, I’m actually a man without hats as well. My Big Data Safari hat is in my home office, and my virtual Cisco ears are back at home as well, next to the VPN router that was powered down before I headed for the airport. (Alas, after moving from Disney to Cisco, I lost the theme park discounts and the epic mascot reference.)

What are you up to at Interop this year, Robert?

So why am I back at Interop, when a dozen conference calls a day could have been in the cards for me this week? 

My readers, my fans, and my groupie all know that I’ve been a fan of the Psycho Overkill Home Office (POHO) for quite a while, going back to when I had a 19-server, 5-architecture environment with a 3-vendor network in my spare bedroom. Today it’s about 12 servers, all x64 (Shuttle, Intel, Cisco, Supermicro, Dell, and maybe another secret brand or two), and technically a 5-vendor network, but the idea is similar enough.

And having built a couple of startups up from the under-the-desk model to a scalable, sustainable production-grade infrastructure, the overkill in my home office and labs has led to efficient and effective environments in my workplaces. 

This week I’m taking a break from my usual big data evangelism and the identity aspects of working for a huge multinational juggernaut. It’s a bit of a relief, to be honest; earlier this month I attended my first event in 10 months as a non-booth-babe, and now I’m getting to focus on my more traditional interests. 

What’s on the agenda this week?

I’m looking forward to return visits with the folks at Sandisk, Opengear, and Cradlepoint. Cradlepoint was my first interview at Interop 2013 two years ago, and I’ve been a customer on my own for many years; Opengear presented at Tech Field Day Extra at Cisco Live 2013; and I last talked with Sandisk at Storage Field Day 5 about a year ago, having also been a Fusion-io customer at a previous job.

I have a couple of other meeting requests out, so we may hear from a couple of other POHO/SOHO/ROBO/lab staples, and I’ll at least be dropping by their booths in the Interop Expo to see what’s new. 

While I’m only recording this week for notetaking convenience, I am starting to ponder what to do about the podcast I’ve been thinking about for a couple of years. So maybe I can pull in some interesting people from time to time… last night’s conversation over Burger Bar shakes with Chris Wahl and Howard Marks probably would have been fodder for several podcasts alone (and I don’t think any of us even had any alcohol!).

And seeing as a number of my friends are presenting this year, including Chris and Howard, I’ll be trying to make my way to their sessions (although there’s a LOT of overlap, and triple-booking isn’t uncommon… there’s a lot more than the Expo floor to experience at Interop, as always).

So where do we go from here?

If you’re at Interop, who are you looking forward to seeing/hearing/heckling/buying drinks for? (And if you’d like to meet up, catch me on Twitter at @gallifreyan.) If not, check out the exhibitor list at interop.com/lasvegas and let me know who you are curious about on that list. 

A Context For Cloud From Within And Without

Cloud Connect Summit is co-located with Interop this week in Las Vegas, Nevada. This is part of a series of highlights from my experience here. Disclaimers where applicable will follow the commentary. Check interop.com for presentation materials if available.

Update: Adrian Cockcroft’s slides are available at Powered by Battery.

I usually don’t give a lot of focus to keynotes, because I have conference-strophobia or something like that. A room with thousands of people in it is rather uncomfortable for me. And so are buzzwords.

However, Cloud Connect opened with one speaker I know and have spoken with before, another whose business I am familiar with, and a third guy who I didn’t know, but had to assume either did something wrong in a past conference, or is on par with the first two speakers.

Adrian Cockcroft probably needs no introduction.

Mark Thiele is a well-known figure in the datacenter and colocation world. He is currently executive VP and evangelist of datacenter technology for Switch, known for their SUPERNAP datacenters here in Las Vegas and elsewhere.

And the poor guy who got stuck between them… Chris Wolf is CTO Americas of VMware.

Okay, maybe Adrian deserves an introduction

It’s no surprise that Adrian Cockcroft focused on implementing and migrating to cloud. If you’ve seen him speak in the past 4 years it’s probably been about what Netflix was going to do, was doing, or has already done in their migration to an entirely-off-premises cloud-based solution (AWS). He’s now in the venture capital world with Battery Ventures, guiding other companies to do things similar to what Netflix did.

I first met Adrian at a BayLISA meeting in 2009. I’d been a fan since his Sun days: he wrote *the* Sun Performance and Tuning book in the 90s, and you would be hard-pressed to find a Solaris admin who hadn’t read it, along with Brian Wong’s Configuration and Capacity Planning book. In 2009, he talked about dynamically spinning up and down AWS instances for testing and scaling; it was an uncommon idea at the time, but nowadays few would imagine an environment that didn’t work that way (other than storage-heavy/archival environments). I had a long ad-hoc chat with him at the last free Devops Days event in Sunnyvale, where he predicted the SSD offerings for AWS a couple of months before they happened.

As most of my readers already know, Netflix has had to build their own tools to deploy, manage, and test their cloud infrastructure. With a goal of having no dependency on any given host, service, availability zone, or (someday) provider, you have to think about things differently, and vendor-specific tools and generic open source products don’t always fit. The result is generally known as NetflixOSS, and is available on GitHub and the usual places.
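The best-known of those tools is probably Chaos Monkey, which randomly terminates instances to prove the service survives losing any one of them. Here’s a minimal sketch of that idea using boto3; this is our illustration of the concept, not Netflix’s actual code, and the “chaos: optin” tag is a convention we made up:

```python
# Toy Chaos-Monkey-style killer: terminate one random opt-in instance to
# prove the service tolerates losing any host. A sketch of the concept,
# not Netflix's actual tool; the "chaos: optin" tag is our own convention.
import random

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_instances(
    Filters=[
        {"Name": "tag:chaos", "Values": ["optin"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)
candidates = [
    inst["InstanceId"]
    for reservation in resp["Reservations"]
    for inst in reservation["Instances"]
]

if candidates:
    victim = random.choice(candidates)
    ec2.terminate_instances(InstanceIds=[victim])
    print(f"Terminated {victim}; a resilient service shouldn't notice.")
```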

When Adrian asked who in the room was using Netflix’s OSS offerings, somewhere between a third and half of the attendees raised their hands. Fairly impressive for a movement that just four years ago brought responses of “there’s no way that could work, you’ll be back in datacenters in months.”

One key point he made was that if you’re deploying into a cloud environment, you want to be a small fish in a big pond, not a shark in a small pond. Netflix had to cope with the issues of being that shark for some time; if you are the largest user of a product you will likely have a higher number of issues that aren’t “oh we fixed that last year” but more “oh, that shouldn’t have happened.” Smaller fish tend to get the benefits of collective experience without having to be guinea pigs as much.

I’ve felt the pain of this in a couple of environments in the past, and I’m not even all that much of a bleeding edge implementer. It’s just that when you do something bigger than most people, the odds of adventure are in your favor.

The Good, The Bad, and The Ugly

The talk was called “The Good, The Bad, and The Ugly,” taking into consideration the big cloud announcements from Amazon’s AWS and Google Cloud Platform. There is plenty of coverage of these announcements elsewhere (I’ll link as I find other coverage of Monday’s comparison), but in short, there are improvements, glaring omissions, and a substantial lack of interoperability/exchange standards.

One item from the GB&U talk that I will call out is Microsoft Azure, which has graduated from “Other” to its own slide.

Azure’s greatest strength and greatest weakness is that it focuses almost entirely on the Windows platforms. Most companies, however, are apparently not moving *to* Windows but away from it, if they are making a substantial migration at all. Linux dominates large-scale virtual hosting, and to be a universal provider, an IaaS/PaaS platform has to handle the majority platform as well as the #2 platform.

The unicorn in the cloud room is likely to be interchangeability between cloud providers. There are solutions for resilience within Amazon or within Google platforms, but it’s not so easy to run workloads across providers without some major bandaids and crutches. So far.

Time for Q&A: SLAs and where Cloud still doesn’t fit

Two questions were presented in this section of the opening keynote.

The first question was about service level agreements (SLAs). A tradition in hosted services, server platforms, network providers, and the like, SLAs are rarely offered on cloud platforms. You might think there were guarantees, given the ruckus raised by single-availability-zone site owners during AWS outages over the past 2-3 years, but the key to making AWS (or other platforms) work is pretty much what Netflix has spent the last few years doing: making the service work around any outage it can.

This isn’t easy, or it would’ve been done years ago and we wouldn’t be talking about it. And my interpretation of Adrian’s response is that we shouldn’t expect to see them anytime soon. He noted that the underlying hardware is no less reliable than the servers you buy for your physical datacenter. And if you’re doing it right, you can lose servers, networks, entire time zones… and other than some degradation and loss of redundancy, your customers won’t notice.
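In practice, that means the client and service layers route around failures themselves rather than leaning on a contractual guarantee. A minimal sketch of the idea, with hypothetical endpoint URLs (Netflix’s real machinery is far more involved):

```python
# Sketch: a client that fails over across redundant endpoints instead of
# relying on an SLA. The endpoint URLs are hypothetical.
import urllib.request

ENDPOINTS = [
    "https://us-east.example.com/api/status",
    "https://us-west.example.com/api/status",
]

def fetch_with_failover(urls, timeout=2):
    """Try each redundant endpoint in turn; degrade rather than die."""
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError:
            continue  # that endpoint (or its zone) is down; try the next
    raise RuntimeError("all endpoints unavailable")

# data = fetch_with_failover(ENDPOINTS)
```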

The second question was heralded by Bernard Golden of enStratius Networks thusly, I believe:

I’ve taken to asking companies and tech advocates where their solutions don’t fit… because there is no universal business adapter (virtual or otherwise), and it’s important to have a sense of context and proportion when considering anything technological. If someone says their product fits everywhere, they don’t know their product or their environment (or either). 

Adrian called out two cases where you may not be able to move to a public cloud: Capacity/scale, and compliance-sensitive environments.

Capacity and scale go back to the shark-in-a-small-pond conundrum. Companies on the scale of Google and Facebook don’t have the option to outsource much of their services, as there aren’t any providers able to handle that volume. But even a smaller company might find it impractical to move its data and processing environment outside its datacenter, depending on the amount and persistence of storage, along with other factors. If you’ve ever tried to move several petabytes even between datacenters, you’ll know the pain that arises (time, technological complexity, cost, or all three).
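To put rough numbers on that pain (the data size and link speed below are illustrative, not from any particular migration):

```python
# How long does a bulk data move take at line rate? Illustrative only;
# real transfers rarely sustain the full link speed.
petabytes = 3
link_gbps = 10

bits = petabytes * 8 * 10**15             # 1 PB = 8e15 bits (decimal units)
days = bits / (link_gbps * 10**9) / 86400
print(f"{petabytes} PB at {link_gbps} Gb/s: {days:.1f} days")  # ~27.8 days
```

Nearly a month on a dedicated 10 Gb/s link, before you account for contention, verification, or the data that changed while you were copying.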

Compliance issues are a bit easier to deal with–only slightly, mind you. As Adrian mentioned, cloud providers are having to train auditors and regulators to understand cloud contexts, and as that process continues, people will find it easier to meet regulatory requirements (whether PCI, HIPAA, SOX 404, or others) using current-decade technological constructs.

So where do we go from here?

My take: Cloud may be ubiquitous, but it’s not perfect (anyone who tells you otherwise is trying to sell you something you don’t need). As regulatory settings catch up to technology, and as cloud service providers realize there’s room for more than one in the market, we’ll hopefully see more interoperability, consistent features across providers, and a world where performance and service are the differentiating factors.

Also, there is still technological life outside the cloud. And once again, anyone who tells you otherwise is trying to sell you a left-handed laser spanner. For the foreseeable future, even the cloud runs on hardware, and some workloads and data pipelines still warrant an on-premises solution. You can (and should) still apply the magic wands of automation and instrumentation to physical environments.

Disclaimers:

I am attending Interop on a media/blogger pass, thanks to the support of UBM and Tech Field Day. Other than the complimentary media pass, I am attending at my own expense and under my own auspices. No consideration has been provided by any speakers, sponsors, or vendors in return for coverage.

ABOUT INTEROP®

Interop® is the leading independent technology conference and expo series designed to inform and inspire the world’s IT community. Part of UBM Tech’s family of global brands, Interop® drives the adoption of technology, providing knowledge and insight to help IT and corporate decision-makers achieve business success. Through in-depth educational programs, workshops, real-world demonstrations and live technology implementations in its unique InteropNet program, Interop provides the forum for the most powerful innovations and solutions the industry has to offer. Interop Las Vegas is the flagship event held each spring, with Interop New York held each fall and annual international events in India, London and Tokyo, all produced by UBM Tech and partners. For more information about these events visit www.interop.com.