System Build Report: A Xeon-D “HTPC” for FreeNAS Corral or VMware vSphere

I’ve been planning to do some network testing and deploy some new storage for VMware vSphere in the home lab. My Synology NAS boxes (DS1513+ and DS1813+) have good performance but are limited to four 1GbE ports each, and my budget won’t allow a 10GbE-capable Synology this spring. [See below for a note on those $499 Synology systems on Amazon.]

The Endpoint Justifies The Veeam: New free “personal” backup product coming soon

Apologies to Sondheim and Lapine for the updated title on this article.

Veeam announced their “Endpoint Backup FREE” product in the wee hours of Wednesday morning, as about a thousand attendees of the first-ever VeeamON user conference were still recovering from the event party at LIGHT nightclub in Las Vegas. More on VeeamON in another post later… but let’s get back to the new product for now.

Nope, this isn’t a hangover. Veeam, a leader in virtual machine backup/recovery and disaster recovery technology, is stepping out of the virtual world to allow you to back up bare metal systems. From early comments, this has been a long-awaited feature.

Veeam Endpoint Backup FREE

Endpoint Backup FREE is a standalone software package targeted at IT professionals and technophiles for use on standalone systems with local or networked storage. It should fit into anyone’s budget, and with flash drives and external USB drives coming down in price, none of us should have an excuse not to back up our personal laptops and desktops anymore (I’m talking to me here).


Veeam offers an “Advanced Recovery Disk” that enables you to do a bare metal restore to a point in time. With some products you can restore from a backup image to a new disk or replacement computer, but you have to install and patch your OS from scratch first. Other products may limit you to local storage, or require driver alchemy, but with the Endpoint Backup recovery disk, you can boot from it (e.g. a USB flash drive or optical media) and restore your full system image from a network share on your LAN.

Hey, can I back up a million Windows Servers with this product?


No, you can’t, and you shouldn’t.

Veeam is using a specific term in the product name–endpoint–to distinguish this offering from a bare metal server backup product. While it runs on Windows Server 2008 and later (as well as Windows 7 and later on the desktop side), it is being developed as a client OS backup solution. It does not have any central control or client management functionality, as it is a standalone program. This model doesn’t really scale to a large number of systems.

However, if you’ve virtualized all but two or three servers in your environment, or if you run a small number of physical servers in a home lab, this can cover that gap without having to license an additional enterprise product for a small number of legacy servers. You can even use a Veeam infrastructure as your backup target, whether backing up Windows Server or the standard desktop offerings.

Also, at this time Veeam does not support mobile devices (iOS, Android, Windows Phone, Symbian, Tizen, etc.), so it is not a universal endpoint solution. You’ll want to either use your platform’s cloud option or something like Lookout or a carrier-specific app to back up your tablets and phones.

What are the downsides to this new product?

Well, the main thing for me personally is this (courtesy of Rick Vanover’s vBrownBag talk this morning):


It’s not available yet. Veeam employees are doing an alpha test now. A public beta is expected in November, with general availability (GA) expected in early 2015. For me, though, it’s not that bad, as it will take a couple more weeks before I have any free time, so for once I can probably wait patiently.

Another thing, which will probably affect a few of my readers:


That’s right, no Macs. At launch, and for the foreseeable future, Endpoint Backup FREE will only support Windows systems. Today there is no Linux or Mac OS X support. You can, of course, back up the Windows VM on your Mac with this product, but you’d have to use one of the server products to back up Linux. If enough customers ask for Mac OS X support, Veeam will likely consider it down the road.

And a third thing, which builds on the previous item:


For reasons that should be obvious, Veeam has chosen to support only current Windows OS revisions: Windows 7 and later on the desktop, and Windows Server 2008 and later on the server side.

XP is out of service, and Vista is, well, Vista. Windows Server 2003 goes out of service next year. So for most users this will not be a major hindrance, but if your home lab has a lot of old Windows OSes, the Endpoint Backup FREE product will probably not fit your needs. And you should use this as an excuse to start upgrading (as if you needed any more reasons).
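
If you have a mixed home lab and want a quick way to see which machines fall inside that support window, a simple version check is enough. Here’s a minimal Python sketch using the NT version floors implied above (Windows 7 is NT 6.1 and Windows Server 2008 is NT 6.0); it’s purely illustrative and not anything Veeam ships:

```python
import platform

# Support floor from the announcement: Windows 7 (NT 6.1) on the desktop side,
# Windows Server 2008 (NT 6.0) on the server side.
MIN_CLIENT = (6, 1)
MIN_SERVER = (6, 0)

def endpoint_backup_eligible(nt_version: str, is_server: bool) -> bool:
    """True if an NT version string like '6.1.7601' meets the support floor."""
    parts = nt_version.split(".")
    major = int(parts[0])
    minor = int(parts[1]) if len(parts) > 1 else 0
    floor = MIN_SERVER if is_server else MIN_CLIENT
    return (major, minor) >= floor

if __name__ == "__main__":
    # win32_ver() returns empty strings on non-Windows hosts.
    release, version, _csd, _ptype = platform.win32_ver()
    if version:
        verdict = "supported" if endpoint_backup_eligible(version, is_server=False) else "too old"
        print(f"Windows {release} (NT {version}): {verdict}")
    else:
        # Not running on Windows; do a quick self-test with representative versions.
        for label, ver, server in [("XP", "5.1", False), ("Vista", "6.0", False),
                                   ("7", "6.1", False), ("Server 2003", "5.2", True),
                                   ("Server 2008", "6.0", True)]:
            print(label, endpoint_backup_eligible(ver, server))
```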

So where do we go from here?

It’s going to be an interesting year ahead in the PC backup world. Veeam has a long history of free products, going back to their first product, FastSCP, in 2006. Many technologically savvy end users will probably try out the new offering and then be tempted to check out Veeam’s other products if they haven’t already.

I wouldn’t be surprised to see this functionality integrated and expanded into a paid/enterprise grade offering in Veeam’s future, incorporating feedback from the beta and first production release of Endpoint Backup FREE. There’s some logic in expanding from there to supporting bare metal servers in a scalable way as well. If Veeam follows this path, the other big backup players may end up with a bit of heartburn.

You can sign up for the beta at go.veeam.com/endpoint and get notified when it’s available for download.

Disclosure: Veeam provided me with a complimentary media pass to attend VeeamON 2014. No other consideration was offered, and there was no requirement or request that I write about anything at the event. As always, any coverage you read here at rsts11 is because I found it interesting on its merits.

Introducing (and Expanding) the Asigra Cloud Backup Connector Appliance (from the Asigra Partner Summit)

As some of you know, I’m starting a new job soon working with software vendors integrating their products around Cisco platforms. While it’s not my day job yet, I’ve been pondering some less explored options to look into when I do get settled in.

This week I’m at the Asigra Partner Summit in Toronto, with my blogger/technologist hats on. I was a bit surprised to run into a Cisco 2900 ISR (Integrated Services Router) with a UCS E-Series blade module in it, in the hands-on-labs area of the Summit. For me, at least, it’s the unicorn of Cisco UCS; I’ve seen an E-Series system twice now, and once was in the Cisco booth at Cisco Live this year.

What’s this Cisco UCS E-Series all about?

The Cisco UCS E-Series blade gives you a single Xeon E5 processor, three DIMM slots, one or two 2.5″ form factor drives, a PCIe slot, and the manageability of a standalone UCS server, without the infrastructure overhead that would be cost- and space-prohibitive for a single- or dual-node B-Series or C-Series deployment. It does not integrate with UCSM, although you can run multiple blades in an ISR. It’s an intriguing platform for remote office/branch office (ROBO) environments, with the capability to integrate your routing/switching/firewall/network services with your utility server needs, including backup and recovery.

But what’s it doing at the Asigra Partner Summit?

As it turns out, this “Asigra Cloud Backup Connector Appliance” deploys the Asigra Cloud Backup software with the ISR and E-Series platform. It makes sense, and while I wish I’d thought of it sooner, or they’d thought of it later, it is a pretty cool idea.

You can use the appliance as a standalone device, running Asigra DS-Client and DS-System software to collect and store your backups on internal storage. You can also use it as an aggregator or data collector running DS-Client, which would send the data to your DS-System server elsewhere (perhaps a standalone server on-site, or a datacenter or hosted vault).

The one catch is that you’re a bit limited on the internal storage. Cisco has certified 1TB SATA and 900GB 10K SAS drives for the E140DP blade, which means you’re capped at 2TB raw in the server. Asigra has incorporated deduplication in their backup software for over 20 years, so depending on your data you’ll probably see an effective capacity of 8-10TB (or more), but you may still hit some limits.
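
The arithmetic behind that estimate is simple enough to sketch. Here’s a minimal Python version where the reduction ratio is the knob you’d tune for your own data mix; the sample ratios below are just what’s implied by 2TB raw landing at 8-10TB effective, not an Asigra guarantee:

```python
def effective_capacity_tb(raw_tb: float, reduction_ratio: float,
                          usable_fraction: float = 1.0) -> float:
    """Effective backup capacity after deduplication/compression.

    raw_tb          -- raw disk in the appliance (2.0 for two 1TB SATA drives)
    reduction_ratio -- assumed data reduction ratio (entirely data-dependent)
    usable_fraction -- optional haircut for RAID/filesystem overhead
    """
    return raw_tb * usable_fraction * reduction_ratio

if __name__ == "__main__":
    raw = 2.0  # two 1TB SATA drives in the E140DP
    for ratio in (4, 5, 8):
        print(f"{ratio}:1 reduction -> ~{effective_capacity_tb(raw, ratio):.0f} TB effective")
```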

How do we get around this capacity limit?

If you want to use your Cloud Backup Connector Appliance as a standalone service, I see two possible paths, but each has its drawbacks.

First, since the drive bays are standard 2.5″ SATA form factor, you could install your own aftermarket 1.5TB or 2TB drives, doubling your capacity to 3-4TB raw. That means you’re managing your own disks, though, and it could complicate Cisco support (although if you’re tearing into the gear, you probably already know this and understand the risks).

Second, since you have a PCIe slot in the server, I could imagine either installing a PCIe flash card (such as the 3.2TB Fusion-io “Atomic” ioMemory SX300 card announced just last week) or a SATA/SAS storage controller connected to some sort of external array.

There are a few downsides to this second option. Cisco has not announced certification of anything but a quad-port Gigabit Ethernet or single-port 10-Gigabit Ethernet controller in the PCIe slot (so you’re blazing your own trail if you swap them out–they should work, but…). And if you put storage in that slot, you can no longer expand networking, and you’ll be limited to two internal (chassis) ports and two external (RJ45) ports for Gigabit Ethernet networking. Oh, and a third concern is that you lose the encapsulation factor, with your storage hanging off of the server rather than living inside it.

As I ponder the pitfalls to the PCIe expansion option, I find myself wishing for a dual-Ethernet / SAS card similar to what Sun used to sell for Ethernet and SCSI back in the day. I think HP had a single port combo as well. Alas, both of those are antiquated and are PCI-X instead of PCIe. You could use FCoE from the 10-Gigabit Ethernet card if you have that infrastructure in place, but that might be beyond branch office scale as well.

So what are you saying, Robert?

I may be overengineering this. I’ve done that before. Dual 10-Gigabit in my home lab, for example.

For a branch office with ~20 desktops carrying 500GB drives, a pile of mobile devices, and a server or two, with judicious backup policies you’re in good shape with the standard configuration. Remember, you’re deduplicating the OS and common files, compressing the backed-up data, and leaving the door open to expanding your Asigra deployment as your branch offices grow.
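
To put rough numbers on that, here’s a back-of-the-envelope sizing sketch in Python. The used-capacity and reduction figures are purely illustrative assumptions; your OS/common-file overlap and data mix will move them substantially:

```python
def branch_backup_footprint_tb(desktops: int, gb_per_desktop: float,
                               used_fraction: float, reduction_ratio: float) -> float:
    """Rough first-full-backup footprint for a branch office, in TB.

    desktops        -- number of client machines
    gb_per_desktop  -- drive size per machine (500 in the example above)
    used_fraction   -- how full those drives actually are (assumption)
    reduction_ratio -- combined dedupe + compression ratio (assumption)
    """
    raw_gb = desktops * gb_per_desktop * used_fraction
    return raw_gb / reduction_ratio / 1000.0

if __name__ == "__main__":
    # ~20 desktops with 500GB drives, half full, 5:1 reduction (all illustrative)
    print(f"~{branch_backup_footprint_tb(20, 500, 0.5, 5.0):.1f} TB on the appliance")
```

Even with generous assumptions, that first full backup fits comfortably inside the 2TB raw ceiling, which is why the standard configuration holds up for this profile.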

And if you choose to, you can run a hypervisor on your E-Series server, with Asigra DS-Client/DS-System VMs as well as your own servers, to the limits of the hardware (6-core CPU, 48GB RAM). The system can boot from SD card, leaving the internal disk entirely for functional storage and VM datastores.
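
If you do go the hypervisor route, it’s worth sanity-checking your VM layout against those hardware ceilings before committing. A trivial budgeting sketch; the VM names and sizes are hypothetical placeholders, not Asigra’s sizing guidance:

```python
# Hardware ceilings for the E-Series blade described above.
CPU_CORES = 6
RAM_GB = 48

# Hypothetical VM layout; substitute your own sizing. vCPUs are counted 1:1
# against physical cores here, which is conservative since hypervisors overcommit.
vms = {
    "asigra-ds-client": {"vcpus": 2, "ram_gb": 8},
    "asigra-ds-system": {"vcpus": 2, "ram_gb": 16},
    "branch-file-server": {"vcpus": 2, "ram_gb": 8},
}

used_vcpus = sum(vm["vcpus"] for vm in vms.values())
used_ram = sum(vm["ram_gb"] for vm in vms.values())

print(f"vCPUs: {used_vcpus}/{CPU_CORES}  RAM: {used_ram}/{RAM_GB} GB")
if used_vcpus > CPU_CORES or used_ram > RAM_GB:
    print("Over budget: trim the layout or keep some workloads physical.")
```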

Where do we go from here?

Even with the 2TB raw disk limitation (which will probably be addressed eventually by Cisco), you have a very functional and featureful option for small offices, remote offices, and even distributed campus backup and recovery aggregation.

You get all the benefits of Asigra’s software solution, including agentless backup of servers and desktops, mobile device support, dedupe and compression, FIPS 140-2 certified encryption at rest and in flight, and Asigra’s R2A (Recovery and Restore Assurance) for ongoing validation of your backed-up data.

And you get the benefits of Cisco’s ISR and E-Series platforms for your networking services and server implementation. You can purchase pre-installed systems through an Asigra Service Provider, or if you already own an ISR with an E-Series server, your Service Provider can install and license Asigra software on your existing gear.

Disclosure:

I am attending the Asigra Partner Summit at Asigra’s invitation, as an independent blogger, and the company has paid for my travel and lodging to attend. I have not received any compensation for participating, nor has Asigra requested or required any particular coverage or content. Anything posted on rsts11.com or in my Twitter feed reflects my own thoughts and my own motivation.

Also, while I am a Cisco UCS fanboy and soon to be a Cisco employee, any comments, observations, and opinions on UCS are my own, based on my personal experience as well as publicly available information from Cisco and other vendors. I do not speak for Cisco nor should any of my off-label ideas be taken to imply Cisco approval or even awareness of said musings.

These 3 hot new trends in storage will blow your mind! Okay, maybe not quite. (2/2)

I’ve attended a couple of Tech Field Day events, and watched/participated remotely (in both senses of the word) in a few more, and each event seems to embody themes and trends in the field covered. Storage Field Day 5 was no exception.

I found a few undercurrents in this event’s presentations, and three of them are worth calling out, both to thank those who are following them and to give the next generation of new product startups a hint to keep them in mind.

This post is the second of a series of two, for your manageable reading pleasure. The first post is here.

Be sure to check out the full event page, with links to presenters and videos of their presentations, at http://techfieldday.com/event/sfd5/

3. The Progressive Effect: Naming Names Is Great, Calling Names Not So Much

Back at the turn of the century, it was common for vendors to focus on their competition in an unhealthy way. As an example, Auspex (remember them?) told me that their competitor’s offering of Gigabit Ethernet was superfluous, and that the competitor would be out of business within months. I’ll go out on a limb and say this was a stupid thing to say to a company whose product was a wire-speed Gigabit Ethernet routing switch, and, well, you see how quickly NetApp went out of business, right?

At Storage Field Day 5, a couple of vendors presented competitive/comparative analysis of their market segment. This showed a strong awareness of the technology they were touting, understanding of what choices and tradeoffs have to be made, and why each vendor may have made the choices they did.

Beyond that, it can acknowledge the best use for each product, even if it’s the competition’s product. I’ll call this the Progressive Effect, after the insurance company that shows you competitors’ pricing even when it’s a better deal. If you think your product is perfect for every customer use case, you don’t know your product or the customer very well.

Once again, Diablo Technologies did a comparison specifically naming the obvious competitor (Fusion-io), and it was clearly a forward-looking comparison: you can order a hundred Fusion-io cards today and put them into current industry-standard servers, which won’t work with ULLtraDIMMs in most of the servers in your datacenter just yet. But these are products that are likely to be compared in the foreseeable future, so the context was useful, and use cases for both platforms were called out.

SolidFire’s CEO Dave Wright really rocked this topic, though, tearing apart (in more of an iFixit manner than an Auspex manner) three hyperconverged solutions, including his own, showing the details, the decisions, and where each one makes sense. I suspect most storage company CEOs wouldn’t get into that deep a dive on their own product, much less the competition, so it was an impressive experience worth checking out if you haven’t already.

There were some rumblings in the Twittersphere about how naming your competitor, rather than hiding them behind “Competitor A” or the like, amounts to spreading fear, uncertainty, and doubt (FUD). And while it is a conservative, and acceptable, option not to name a competitor if you have a lot of them–Veeam chose this path in their comparisons, for example–that doesn’t mean it’s automatically deceptive to give a fair and informed comparison within your competitive market.

If Dave Wright had gone in front of the delegates and told us how bad all the competitors were and why they couldn’t do anything right, we probably would’ve caught up on our email backlogs faster, or asked him to change horses even in mid-stream. If he had dodged or danced around questions about his own company’s platform, some (most?) of us would have been disappointed. Luckily, neither of those happened.

But as it stands, he dug into the tech in an even-handed way, definitely adding value to the presentation and giving some insights that not all of us would have had beforehand. In fact, more than one delegate felt that SolidFire’s comparison gave us the best available information on one particular competitor’s product in that space.

This is a post related to Storage Field Day 5, the independent influencer event being held in Silicon Valley April 23-25, 2014. As a delegate to SFD5, I was chosen by the Tech Field Day community, and my travel and expenses were covered by Gestalt IT. I am not required to write about any sponsoring vendor, nor is my content reviewed. No compensation has been or will be received for this or any other Tech Field Day post.

These 3 hot new trends in storage will blow your mind! Okay, maybe not quite. (1/2)

I’ve attended a couple of Tech Field Day events, and watched/participated remotely (in both senses of the word) in a few more, and each event seems to embody themes and trends in the field covered. Storage Field Day 5 was no exception.

I found a few undercurrents in this event’s presentations, and three of them are worth calling out, both to thank those who are following them and to give the next generation of new product startups a hint to keep them in mind.

This post is the first of a series of two, for your manageable reading pleasure. Part two is now available here.

Be sure to check out the full event page, with links to presenters and videos of their presentations, at http://techfieldday.com/event/sfd5/

1. Predictability and Sustainability Are The Right Metrics

There are three kinds of falsehoods in tech marketing: lies, damned lies, and benchmarks. Many (most?) vendors will pitch their best-case, perfect-environment, most advantageous results as a reason to choose them. But as with Teavana’s in-store tasting controversy, when you get the stuff home and try to reproduce the advertised effects, you end up with weak tea. My friend Howard Marks wrote about this recently in relation to VMware’s 2-million IOPS VSAN benchmark.

At SFD5, we had a couple of presenters stress not best-case, least-realistic results, but predictable and reproducible ones. Most applications aren’t going to benefit much from a high burst rate paired with tepid average performance, whether it’s on the server hardware, the storage back end, or the network. But consistent quality of service (QoS) and a reliable set of expectations that can be met (and maybe exceeded) will lead to satisfied customers and successful implementations.
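
One way to see why the sustained numbers matter more than the hero numbers is to compare the median and average of a latency sample against its high percentiles. Here’s a small Python sketch with synthetic data; both distributions are invented purely to illustrate the point and aren’t measurements of any product:

```python
import random
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    rank = int(round(pct / 100.0 * len(ordered)))
    return ordered[max(0, min(len(ordered) - 1, rank - 1))]

random.seed(42)

# "Bursty" device: great typical latency, ugly tail.
# "Steady" device: a bit slower, but consistent. Both are made up.
bursty = [random.gauss(0.3, 0.05) if random.random() < 0.95 else random.gauss(8.0, 2.0)
          for _ in range(10_000)]
steady = [random.gauss(0.6, 0.05) for _ in range(10_000)]

for name, lat in (("bursty", bursty), ("steady", steady)):
    print(f"{name}: median={statistics.median(lat):.2f}ms  "
          f"avg={statistics.mean(lat):.2f}ms  "
          f"p99={percentile(lat, 99):.2f}ms  p99.9={percentile(lat, 99.9):.2f}ms")
```

The bursty device wins the headline (median) number and loses badly at the 99th percentile, which is exactly the behavior a best-case benchmark hides.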

One example of this was Diablo Technologies, the folks behind Memory Channel Storage, implemented by SanDisk as ULLtraDIMM. In comparing the performance of the MCS flash implementation against a PCIe storage option (Fusion-io’s product, to be precise), they showed performance and I/O results across a range of measurements, and rather than pitching the best results, they touted the sustainable results that you’d expect to see regularly with the product.

SanDisk themselves referred to some configuration options under the hood, not generally available to end users, that trade some lifespan for higher daily duty cycles. Since these products are not yet mass market on the level of a consumer-grade 2.5″ SSD, it makes sense to make that a support/integration option rather than just having users open up a Magician-like product to tweak ULLtraDIMMs themselves.

Another example was SolidFire, who also advocated setting expectations around what is sustainable. They refer to “guaranteed performance,” which comes down to QoS and sane configuration; linear scalability as the cluster grows is part of that predictability story as well.

2. Your Three Control Channels Should Be Equivalent

There are generally three ways to control a product, whether it’s a software appliance, a hardware platform, or something more. You have a command-line interface (CLI); a graphical user interface (GUI) of some sort, often either a web front-end or an applet/installed application; and an API for automated access (XML, REST, SOAP, sendmail.cf).

I will assert that a good product will have all three of these: CLI, GUI, API. A truly mature product will have full feature equity between the three. Any operation you can execute against the product from one of them can be done with identical effectiveness from the other two.
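
The usual way to get there is to make the CLI, the GUI, and the API thin wrappers over a single shared core, so a feature that exists in one channel exists in the others by construction. Here’s a minimal Python sketch of that structure; the volume operations are invented for illustration and don’t reflect any particular vendor’s interface:

```python
import argparse
import json

# --- the one shared core: every channel calls these functions ---

_VOLUMES = {}  # toy in-memory state standing in for the real backend

def create_volume(name: str, size_gb: int) -> dict:
    """Create a volume; the same call backs the CLI, the GUI, and the API."""
    _VOLUMES[name] = {"name": name, "size_gb": size_gb}
    return _VOLUMES[name]

def list_volumes() -> list:
    return list(_VOLUMES.values())

# --- CLI channel: an argparse front-end over the core ---

def cli(argv=None):
    parser = argparse.ArgumentParser(prog="stor")
    sub = parser.add_subparsers(dest="cmd", required=True)
    create = sub.add_parser("create")
    create.add_argument("name")
    create.add_argument("size_gb", type=int)
    sub.add_parser("list")
    args = parser.parse_args(argv)
    if args.cmd == "create":
        print(json.dumps(create_volume(args.name, args.size_gb)))
    else:
        print(json.dumps(list_volumes()))

# --- API channel: a REST handler would be an equally thin wrapper ---

def api_create_volume(request_body: str) -> str:
    """What a POST /volumes handler would do: parse, call the core, serialize."""
    payload = json.loads(request_body)
    return json.dumps(create_volume(payload["name"], payload["size_gb"]))

if __name__ == "__main__":
    cli(["create", "vol01", "100"])
    print(api_create_volume('{"name": "vol02", "size_gb": 200}'))
    cli(["list"])
```

If a new operation only lands in the core, every channel picks it up; if it lands in only one wrapper, you’ve broken exactly the parity this section argues for.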

This seems to be a stronger trend than it was a couple of years ago. At my first Tech Field Day events, as I recall, there were still people who felt a CLI was an afterthought, and an API could be limited. When you’re trying to get your product out the door, before your competitor locks you out of the market, it could be defensible, much as putting off documentation until your product shipped was once defended.

But today, nobody should consider a product ready to ship until it has full management channel equality. And as I recall, most of the vendors we met with who have a manageable product (I’m giving SanDisk and Diablo Tech a pass on this one for obvious reasons) were closer to the “of course we have that” stance than the “why would we need that?” attitude that used to be de rigueur in the industry.

Once again, this is part one of two on trends observed at Storage Field Day 5. Part 2 is now available at this link.

This is a post related to Storage Field Day 5, the independent influencer event being held in Silicon Valley April 23-25, 2014. As a delegate to SFD5, I was chosen by the Tech Field Day community, and my travel and expenses were covered by Gestalt IT. I am not required to write about any sponsoring vendor, nor is my content reviewed. No compensation has been or will be received for this or any other Tech Field Day post.