I’ve been planning to do some network testing and deploy some new storage for VMware vSphere in the home lab. My Synology NAS boxes (DS1513+ and DS1813+) have good performance but are limited to four 1GbE ports each, and my budget won’t allow a 10GbE-capable Synology this spring. [See below for a note on those $499 Synology systems on Amazon.]
Most of you know I don’t shy away from building (or refurbishing) my own computers. I used to draw the line at laptops, but in the last couple of years I’ve even rebuilt a few stripped-for-parts Dell and Toshiba laptops for the fun of it. Warped definition of “fun,” I’ll admit.
So when I saw a Facebook ad for a “cloud server” called “antsle,” I was curious but unconvinced. It was something like this:
The idea is you’re buying a compact, fanless, silent microserver that, in addition to some fault-tolerant hardware (mirrored SSD, ECC RAM), includes a proprietary user interface for managing and monitoring containers and virtual machines. You can cram up to 64GB of RAM in there, and while it only holds two internal drives, you can add more via USB 2.0 or USB 3.0, for up to 16TB of officially supported capacity. Not too bad, but I’ve been known to be cheap and/or resourceful, so I priced out a similar configuration assuming I’d build it myself.
PernixData and unconventional flash caching
We spent a captivating two hours at PernixData in San Jose Wednesday. For more general and detailed info on the conversations and related announcements, check out this post by PernixData’s Frank Dennenman on their official blog, and also check out Duncan Epping’s post on YellowBricks.
At a very high and imprecise level, PernixData’s FVP came out last year to provide a caching layer (using flash storage, whether PCIe or SSD) injected at the vmkernel level on VMware hypervisors. One big development this week was the option to use RAM in place of (or in addition to) flash as a caching layer, but this is unrelated to my thoughts below.
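To make the caching-layer idea concrete, here’s a minimal sketch of a read cache sitting in front of slower backing storage. This is purely illustrative of the general concept, not PernixData’s actual implementation; the class and block names are my own invention.

```python
# Illustrative sketch only -- NOT PernixData FVP's implementation.
# Shows the basic idea of a fast cache tier (flash or RAM) fronting
# a slow backing store (spinning disk), with LRU eviction.
from collections import OrderedDict

class ReadCache:
    """A tiny LRU read cache in front of a slow backing store."""
    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store   # dict-like: block id -> data
        self.cache = OrderedDict()     # stands in for the flash/RAM tier
        self.capacity = capacity
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)      # refresh LRU position
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]             # slow path: backing storage
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

disk = {n: f"data-{n}" for n in range(10)}
cache = ReadCache(disk)
for block in [0, 1, 0, 2, 0, 3]:
    cache.read(block)
print(cache.hits, cache.misses)  # 2 4 -- repeat reads of block 0 hit the cache
```

The point of putting this at the vmkernel level is that the guest VMs and the storage array both see unchanged I/O; only the path in between gets faster.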
One odd question arose during our conversation with Satyam Vaghani, CTO and co-founder of PernixData. Justin Warren, another delegate, asked the seemingly simple question of whether you could use external flash as cache for a cluster (or clusters) using PernixData’s FVP. Satyam’s answer was a somewhat surprising “yes.”
I thought (once Justin mentioned it) that this was an obvious idea, albeit somewhat niche. Having worked to get scheduled downtime for a hundred servers on several occasions in the past year, I could imagine why I might not want to (or be able to) shut down 100 hypervisor blades to install flash into them. If I could put a pile of flash into one or more centrally accessible, high-speed, relatively low-latency (compared to spinning disk) hosts, or perhaps bring in something like Fusion-io’s Ion Accelerator platform, I could get the caching benefit without touching those blades at all.
I took a bit of ribbing from a couple of other delegates, who didn’t see any situation where this would be useful. You always have plenty of extra spare hypervisor capacity, and flash that can go into those servers, and time and human resources to handle the upgrades, right? If so, I mildly envy you.
So what’s this about Invicta?
Cisco’s UCS Invicta platform (the evolution of WHIPTAIL) is a flash block storage platform based on a Cisco UCS C240-M3S rackmount server with 24 consumer-grade MLC SSDs. Today its official placement is as a standalone device, managed by Cisco UCS Director, serving FC to UCS servers. The party line is that using it with any other platform or infrastructure is off-label.
I’ve watched a couple of presentations on the Invicta play. It hasn’t yet been clear how Cisco sees it competing against similar products in the market (i.e. Fusion-io Ion Accelerator). When I asked on a couple of occasions in public presentations, the comparison was reduced to Fusion-io ioScale/ioDrive PCIe cards, which is neither a fair nor an applicable comparison. You wouldn’t compare Coho Data arrays to single-SSD enclosures. So for a month or so I’ve been stuck with the logical progression:
- Flash is fast
- Buy UCS and Invicta
Last month, word came out that Cisco was selling Invicta arrays against Pure Storage and EMC XtremIO for heterogeneous environments, which also seems similar to the market for Ion Accelerator. Maybe I called it in the air. Who knows? The platform finally makes sense in the present, though.
Two great tastes that taste great together?
Wednesday afternoon I started putting the pieces together. Today you can serve up an Invicta appliance as block storage and probably (I haven’t validated this) access it from a host or hosts running PernixData’s FVP. You’re either dealing with FC or possibly iSCSI. It should serve as well as the competing flash appliances.
But when Cisco gets Invicta integrated into the UCS infrastructure, hopefully with native support for iSCSI and FCoE traffic, you’ll be talking about 10 gigabit connections within the Fabric Interconnect for cache access. You’ll be benefiting from the built-in redundancy, virtual interface mapping and pinning, and control from UCS Manager/UCS Central. You’re keeping your cache within a rack or pod. And if you need to expand the cache you won’t need to open up any of your servers or take them down. You’d be able to put another Invicta system in, map it in, and use it just as the first one is being used.
If you’re not in a Cisco UCS environment, it looks like you could still use Invicta arrays, or Fusion-io, or other pure flash players (even something like a whitebox or channel partner Nexenta array, at least for proof-of-concept).
So where do we go from here?
The pure UCS integration for Invicta is obviously on the long-term roadmap, and hopefully the business units involved see the benefits of true integration at the FI level and move that forward soon.
I’m hoping to get my hands on a trial of FVP, one way or another, and possibly build a small flash appliance in my lab as well as putting some SSDs in my C6100 hypervisor boxes.
It would be interesting to compare the benefits of internal vs. external flash integration with a conventional 10GbE (non-converged) network. This could provide some insight into a mid-market bolt-on solution, and give some further enlightenment on when and why you might take this option over internal flash. I know that I won’t be able to put a PCIe flash card into my C6100s unless I give up 10GbE (one PCIe slot per server, darn). Although with FVP’s newly-announced network compression, that might be viable.
What are your thoughts on external server-side cache? Do you think something like this would be useful in an environment you’ve worked with? Feel free to chime in on the comments section below.
This is a post related to Storage Field Day 5, the independent influencer event being held in Silicon Valley April 23-25, 2014. As a delegate to SFD5, I am chosen by the Tech Field Day community and my travel and expenses are covered by Gestalt IT. I am not required to write about any sponsoring vendor, nor is my content reviewed. No compensation has been or will be received for this or any other Tech Field Day post. I am a Cisco Champion, but all Cisco information in this post is public knowledge and was received in public channels.
At VMworld 2013 in San Francisco, there was a lot of buzz around Hyper-V, oddly enough. A few vendors mentioned multi-hypervisor heterogeneous cloud technologies in hushed tones, more than a few attendees bemoaned the very recent death of Microsoft TechNet Subscription offerings, and guess who showed up with a frozen custard truck?
Yep, Microsoft’s server team showed up, rented out and re-skinned a Frozen Kuhsterd food truck, and handed out free frozen custard for a chance to promote and discuss their own virtualization platform and new publicity initiative, branded Virtualization2.
The frozen custard was pretty tasty. Well worth the three-block walk from Moscone. It was a pretty effective way to get attention and mindshare as well; several people I spoke with were impressed with the marketing novelty and the reminder that VMware isn’t the only player in the game, even if one friend considered it an utter failure due to the insufficient description of frozen custard.
Almost two years ago when I did my Virtualization Field Day experience, the question I asked (and vendors were usually prepared to answer) was “when will you support Xen in addition to VMware?” This year, it’s more “when will you support Hyper-V?” So a lot of people are taking Microsoft seriously in the virtualization market these days.
Insert Foot, Pull Trigger
One nominal advantage Microsoft has had over VMware in the last few years is an affordable way for IT professionals to evaluate their offerings for more than two months at a time. But first, some history.
Once upon a time, VMware had a program called the VMTN (VMware Technology Network) Subscription. For about $300 a year, you got extended-use licenses for VMware’s products for non-production use. No 60-day time bomb, no six-reinstalls-a-year for the home lab, and you could focus on learning and maybe even mastering the technology.
At that point, Microsoft had the advantage in that their TechNet Subscription program gave you a similar option. For about $300/year you could get non-production licenses for most Microsoft products, including servers and virtualization. I’d wager that more than a few people found it easier to test and develop their skills in that environment, rather than in the “oops, it’s an odd month, better reinstall the lab from scratch” environment that VMware provided.
Well, as of today, September 1, the TechNet Subscription is no more. If you signed up or renewed by the end of August, you get one more year, and then your licenses are no longer valid. If you wanted some fresh lab license love today, you’re out of luck.
Technically, you can get an MSDN subscription for several thousand dollars and have the same level of access. The Operating Systems level is “only” $699 (want other servers? You’re looking at $1199 to $6119). Or if you qualify for the Microsoft Partner Program as an IT solutions provider, you can use the Action Pack Solution Provider to get access to whatever is current in the Microsoft portfolio for about $300/year. But the latter is tricky in that you need to be a solutions provider and jump through hoops, and the former is tricky because you might not have several thousand dollars to send to Redmond every year.
Help me, Obi-Wan vExpert, you’re my only hope
In 2011, Mike Laverick started a campaign to reinstate the VMTN subscription program. The thread on the VMware communities forum is occasionally active even two years later. But after two years of increasing community demand and non-existent corporate support, a light appeared at the end of the tunnel last week at VMworld in San Francisco.
As Chris Wahl reported, Raghu Raghuram, VMware Executive Vice President of Cloud Infrastructure and Management, said the chances of a subscription program returning are “very high.” Chris notes that there’s not much detail beyond this glimmer of hope, but it’s more hope than we’ve had for most of the last 6 years. For those of you who remember Doctor Who between 1989 and 2005, yeah, it’s like that.
Today, your choices for a sustainable lab environment include being chosen as a vExpert (or possibly a Microsoft MVP–not as familiar with that program’s somatic components) with the ensuing NFR/eval licenses; working for a company that can get you non-expiring licenses; unseemly licensing workaround methods we won’t go into; or simply not having a sustainable lab environment.
I added my voice to the VMTN campaign quite a while ago. When nothing came of that campaign, and I found myself more engaged in the community, I applied for (and was chosen for) vExpert status. So the lab fulcrum in my environment definitely tilts toward the folks in Palo Alto, not Redmond.
But I did mention to the nice young lady handing out tee shirts at the Microsoft Custard Truck that I’d be far more likely to develop my Hyper-V skills if something like the TechNet subscription came back. She noted this in her feedback notebook, so I feel I’ve done my part. And I did get a very comfy tee shirt from her.
When I got back to my hotel, I found that the XL shirt I’d asked for was actually an L. Had I not been eating lightly and walking way too much, it wouldn’t have come anywhere near fitting, and it probably won’t any more, now that I’m back to normal patterns. But maybe that size swap was an analogy for a bigger story.
One size doesn’t fit all.
If Microsoft and VMware can’t make something happen to help the new crop of IT professionals cut their teeth on those products, they’ll find the new technologists working with other products. KVM is picking up speed in the market, XenServer is moving faster toward the free market (and now offers a $199 annual license if you want those benefits beyond the free version), and people who aren’t already entrenched in the big two aren’t likely to want to rebuild their lab every two months.
And when you layer OpenStack or CloudStack (yeah, it’s still around) on top of the hypervisor, the hypervisor becomes a commodity. So the benefits of vCenter Server or the like become minimal to non-existent.
So where do we go from here?
Best case, VMware comes up with a subscription program, and Microsoft comes up with something as well. Then you can compare them on even footing and go with what works for you and your career.
Worst case, try to live with the vCenter and related products’ 60-day trial. If your company is a VMware (or Microsoft) virtualization customer, see if your sales team can help, or at least pass along the feedback that you want to be able to work in a lab setting and spend more time testing than reinstalling.
And along the way, check out the other virtualization players (and the alternatives to VMware and Microsoft management platforms… even Xtravirt’s vPi for Raspberry Pi). Wouldn’t hurt to get involved in the respective communities, follow some interesting folks on Twitter and Google+, and hope for the best.
Did you say something about Doctor Who up there?
Yeah, and I should share something else with you.
When I saw the mention of the custard truck, my first thought was honestly not frozen concoctions in general. Obviously, it was the first Matt Smith story on Doctor Who, “The Eleventh Hour,” wherein he tries to find some food to eat at Amy Pond’s home after regenerating. He ends up going with fish fingers (fish sticks) and custard (not the frozen kind).
So I made a comment on Twitter, not directed at anyone, saying “I’d have more respect for Microsoft’s Hyper-V Custard if fish fingers were offered on the side.”
And this really happened.
So even if they’re discouraging me and other technologists from effectively labbing their products, I have to give them credit for a sense of humor. Not usually what you expect to come out of Redmond, now is it?
- You’re All Nuts. Or I Am – Don Jones at Redmond Magazine
Mr Jones posted an article that really annoyed me until I read his well-reasoned response to the well-reasoned comments. Check out his interpretation of the TechNet subscription and brave the comments for some very sane discussions.
- Planes, trucks and frozen custard – Microsoft Server Team
- Get The “Scoop” On Hyper-V – Varun Chhabra, Sr PPM Server & Tools
A couple of pieces from the Microsoft team about their marketing activity. Fun read, and the source of the truck photos above.
- tardis.wikia.com definitions and a BBC video clip from YouTube, to help you understand the Twitter exchange.
In the month and a half since we met Infinio, I’ve been planning to write about their presentation and their product. It’s an interesting technology, and something I can see being useful in small and large environments, but I hadn’t gotten around to piling the thoughts into the blog.
I did find that I was bringing them up in conversation more often than I do most Tech Field Day presenters (with the possible exception of Xangati). Whether I was talking to the CEO of a network optimization startup here in Silly Valley, or a sales VP for a well-established storage virtualization player at the Nth Symposium, or a couple of others, I found myself saying the same things. “Have you heard of Infinio? They just made a splash onto the scene at Tech Field Day 9. You should check them out.”
What is an Infinio?
Infinio is a two year old, 30ish-person startup whose Accelerator product is designed to be an easy drop-in to your VMware environment. They’re focusing on making the product easy to try (including substantial engineering focus on the installation process), simple and affordable to buy, and visibly useful to your environment as soon as possible.
CEO Arun Agarwal talked up the focus on the installation process, but even more interesting was his focus on the trial and sales model. This seemed important at the time, but as time passed, I really appreciated the idea more.
Just this past week, I downloaded a “free” VM from a much larger company, only to be told in a pushy follow-up email that I need to provide a phone number and mailing address, get trial licenses, and talk to a sales guy on the phone to do anything with the “free” VM. It was annoying enough to get to this point, and I’m disinclined to actually buy and use that product.
I want a company to provide (1) enough information on their website for me to understand the product, (2) a hands-off model for acquiring and trying out the product (even if it’s at 2am on a Saturday because I can’t sleep and I’ve got a hundred servers sitting idle in a datacenter to play with), (3) smart and non-pushy people to help me with understanding, evaluating, and maybe buying the product if I do decide to move forward–when and if I need them, and not the other way around, and (4) a product that really solves the problem.
Infinio plans to provide all these things. You can download the trial without giving a lot of information (or any, as I recall), and you can buy your licenses with a credit card on the site. This would be a refreshing model, and I’m optimistic about their being able to do it.
So what are they doing?
I was wondering that too… and seeing the phrase “downloadable storage performance” a week or so before the visit, I was dubious.
The Infinio Accelerator is a virtual NAS (NFS v3) accelerator for VMware. It sits between the vmkernel interface and the storage server on each host, providing a shared, deduplicated caching layer to improve performance across your systems. It also works transparently to both storage and server, so you don’t change your storage settings or ACLs (great for those of us who have siloed storage, networking, and virtualization management teams, and all the efficiencies they provide).
And possibly most impressive of all, you don’t have to reboot anything to install or remove the product.
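The shared, deduplicated cache described above can be sketched in a few lines: identical blocks read by different VMs (think dozens of guests booted from the same template) are stored once, keyed by a hash of their content. This is a conceptual illustration only; Infinio hasn’t published its internals, and all names here are my own.

```python
# Conceptual sketch only -- NOT Infinio's actual implementation.
# A content-addressed cache: identical blocks read by different VMs
# are stored once, keyed by a hash of the block's content.
import hashlib

class DedupCache:
    def __init__(self):
        self.by_hash = {}   # content hash -> block data (stored once)
        self.index = {}     # (vm, block id) -> content hash

    def insert(self, vm, block_id, data):
        digest = hashlib.sha256(data).hexdigest()
        self.by_hash.setdefault(digest, data)   # dedupe identical content
        self.index[(vm, block_id)] = digest

    def read(self, vm, block_id):
        return self.by_hash.get(self.index.get((vm, block_id)))

cache = DedupCache()
# Two VMs cloned from the same template read an identical OS block:
cache.insert("vm1", 7, b"common-os-block")
cache.insert("vm2", 7, b"common-os-block")
print(len(cache.by_hash))  # 1 -- a single cached copy serves both VMs
```

The practical upshot is that the effective cache capacity grows with how similar your VMs are, which is exactly the case in template-heavy vSphere shops.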
The management console allows you to toggle acceleration on each datastore, and more importantly, monitor the performance and benefit you’re getting from the accelerator. They call out improvements in response time, request offload, and saved bandwidth to storage.
Let’s make this happen
Peter Smith demonstrated the Infinio Accelerator for us live, from downloading the installer from the Infinio home page (coming soon) to seeing it make a difference. The process, with questions and distractions included, came in around half an hour.
You download a ~28MB installer, and the installer will pull down about a CD’s worth of VM templates (the Accelerator and the management VM) while you go through the configuration process. (You can apparently work around this download if you need to for network/security reasons–this would be a good opportunity to enlist those smart and non-pushy people mentioned above.)
After the relatively brief installation (faster than checking for updates on a fresh Windows 7 installation, not including downloading and installing all 150 of them, mind you), Peter brought up a workload test with several parallel Linux kernel builds in 8 VMs, demonstrating a 4x speedup with the Accelerator in place even with the memory per VM halved to make room for the Accelerator.
An aside about making room for Infinio: The accelerator will eat 8GB of RAM, 2 vCPUs, and 15GB of local disk space on each hypervisor host you’re accelerating. It will also use 4GB RAM, 2 vCPUs, and 20GB of storage for the management VM, on one of your hosts. So if your virtualization lab is running on your 8GB laptop, you’re gonna have a bad time, but a quad-core lab system with 32GB of RAM should be practical for testing. A typical production hypervisor host (128GB or more) will probably not notice the loss.
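Those per-host figures add up quickly across a cluster, so here’s the back-of-the-envelope arithmetic, using only the numbers quoted above (an accelerator VM on every accelerated host, plus one management VM on a single host):

```python
# Total Infinio overhead for an N-host cluster, from the figures above:
# 8 GB RAM / 2 vCPUs / 15 GB disk per accelerated host, plus
# 4 GB RAM / 2 vCPUs / 20 GB disk for the single management VM.
def infinio_overhead(hosts):
    ram_gb = hosts * 8 + 4
    vcpus = hosts * 2 + 2
    disk_gb = hosts * 15 + 20
    return ram_gb, vcpus, disk_gb

print(infinio_overhead(4))  # a 4-host lab cluster: (36, 10, 80)
```

So a four-host lab cluster gives up 36GB of RAM total, which is why the single 32GB quad-core lab box is about the floor for meaningful testing.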
And a further aside about the demo system: as a big fan of Simon Gallagher’s vTARDIS concept of nesting hypervisors, I was pleased to see that the Mac Pro the Infinio folks rolled in for the demo was effectively a vTARDIS in itself. This is a pretty cool way to protect your live demo from the randomness of Internet and VPN connectivity, and from the very real risk that someone back at the home office will turn your lab into a demo for someone else, if your product lends itself to being demonstrated this way.
Some future-proofing considerations
The team at Infinio were very open to the suggestions that came up during the talk.
They have a “brag bar” that offers the chance to tweet your resource savings, but they understood why some companies might not want that option to be there. Some of us work (or have worked) in environments where releasing infrastructure and performance info without running the gauntlet of PR and legal teams could get us punished and/or fired.
They took suggestions of external integration and external access to the product’s information too, from being able to monitor and report on the Accelerator’s performance in another dashboard, to being able to work with the Accelerator from vCops. And they’re working on multi-hypervisor (read: Hyper-V) support and acceleration of block storage. Just takes enough beer and prioritization, we were told.
So where do we go from here?
Infinio is releasing a more public beta of the Accelerator at VMworld in San Francisco in just a couple of weeks. Stop by and see them if you’re at VMworld, or watch their website for more details about the easy-to-use trial. You can sign up to be notified about the beta release, or just watch for more details near the end of August.
The pricing will be per-socket, with 1 year of support included, and hopefully it will be practical for smaller environments as well as large ones. We will see pricing when the product goes to GA later this year.
I’m planning to get the beta to try out in my new lab environment, so stay tuned for news on that when it happens.
And if you’re one of the lucky ones to get a ticket for #CXIParty, you can thank the folks from Infinio there for sponsoring this event as well. And I may see you there.
Disclosure: Infinio were among the presenters/sponsors of Tech Field Day 9, to which I was a delegate in June 2013. While they and other sponsors provided for my travel and other expenses to attend TFD9, there was no assumption or requirement that I write about them, nor was any compensation offered or received in return for this or any other coverage of TFD9 sponsors/presenters.
Some other write-ups from TFD9 Delegates (if I missed yours, let me know and I’ll be happy to add it):
- Justin Warren, TFD9 Review: Infinio
- Alastair Cooke, Infinio, cache in your ESXi server for your NFS datastores
- Chris Wahl, Infinio Aims To Accelerate Your VMware NAS Workloads
Update: When we talked with Infinio in June, they planned to include 3 years of support with the initial purchase. They are now planning to include 1 year, with renewals beyond that being a separate item. This should make the initial purchase more economical, and make budgeting easier as well.