Taking POHO to Interop 2014 – Three Roads To Take

I’m looking forward to returning to Interop Las Vegas in under two weeks. Where has the winter gone? I know, I’m in Northern California, I can’t complain much about the weather.

Visit the conference site for details, or follow this link for a free expo and keynote pass.

There are three aspects of Interop that I’m looking forward to.

First, I’m looking forward to meeting some Twitterverse friends, and maybe a Twitter-averse friend or two, as well as contacts I’ve made at my conferences last year. I will be dropping in on the Interop HQ and Social Media Command Center to see how the UBM team handles social media on-site. As my friends at @CiscoLive and VMworld know, I find the social media aspect of a conference to be as important as the formal content. Networking and getting advice and answers as you go makes the event more efficient and useful, and it’s always good to say hi to the folks who make it happen. I also hear there are collectible pins, and those of you who know where I work know we’re known for our pins, among other things.

Watch the hashtags #Interop and #CloudConnect and follow @interop for the latest news from the events.

Second, I’ll be trying to take a boot camp or two at the Cloud Connect Summit and come up to speed on some technologies that are newish to me. There’s an AWS Boot Camp presented by Bernard Golden (alas, it’s not hands-on, so I’m not sure I’d call it a boot camp), and an OpenStack Boot Camp that looks promising as well. These may end up just being focus opportunities, or I may change my plans, but they look interesting. And as a guy who’s mostly running bare metal big data on a daily basis, it’ll be good to get some exposure to the virtual side of things outside of VMware.

Third, while I’m attending with my press hat and not my mouse ears, I do work in a sizable technology environment, so I’ll be checking out some larger technology options that may not find their way into my lab but may find their way into my day job.

Highlights in the enterprise space for me (alphabetically): Arista Networks, Cisco, Juniper Networks.

Fourth, I’ll be joining the Tech Field Day Roundtables again this year. HP Networking will be presenting at this event, and they tie in with POHO below as well. Also presenting will be a company rather dear to my heart in a strange way, Avaya. At the turn of the century, I worked for the Ethernet Products Group (or whatever we were called that quarter) at Nortel Networks, and my team’s flagship product was the Nortel Passport 8600 routing switch. Imagine my surprise when I ran across a slightly different color of 8600 (with much newer line cards) at the Interop network last year, now known as the Avaya Ethernet Routing Switch 8600. A couple of my Rapid City/Bay Networks/Nortel Networks coworkers are still at Avaya, or were until fairly recently… so it’s sort of a family thing for me.

If you can’t make it to the roundtables, we usually live-stream the presentations, or have them posted afterward, at TechFieldDay.com. Check it out and track #RILV14 and #TechFieldDay on Twitter for the latest news.

And last, but not least… there’s POHO. The Psycho Overkill Home Office, a gateway to big business functionality on a small business budget, is a topic near and dear to my blog, my budget, and my two home labs. I will be stopping by to speak with several vendors at Interop whose products intersect with the burgeoning (and occasionally bludgeoning) home lab market and the smaller side of the SMB world (I’m taking to calling it the one-comma-budget side of SMB).

Some of the POHO highlights that I’m seeing so far (in alphabetic order) include Chenbro Micom, Cradlepoint, Linksys (now part of Belkin), Memphis Electronic (think 16GB SODIMMs), Monoprice, Opengear, Shuttle Computer Group, Synology, and Xi3.

There are a lot of other names on the exhibitor list that will appeal to almost anyone, and if you’re going to be there with an exhibitor you think would be of interest to my POHO audience, feel free to get in touch (I’m on the media list, or contact me through this blog).

And if you noticed that I went down five roads instead of three, give yourself a pat on the back. I should’ve seen that coming.

Upgrading my home VMware lab (part 1: Ivy Bridge) #rsts11

My most popular post on rsts11 has been my compact VMware server at home post. Thanks to Chris Wahl mentioning me on the VMware forums, and linking from his lab post, I see a dozen visits or more a day to that page.

Imitation is the sincerest form of laziness^wflattery

I have to admit that I’ve been a follower in my use of intriguing lab environments. I got the vTARDIS idea from Simon Gallagher, and built a version of it at work at my last job on a Dell Core 2 Quad workstation under my desk. Then I saw Kendrick Coleman tweet about this new SH67H3 from Shuttle that supported 32GB of non-registered RAM… bought one and put 32GB and an i7-2600S processor into it, as mentioned in the “server at home” post mentioned above.

As you may know, the i7-2600 series processors are now a generation behind. Sandy Bridge gave way to Ivy Bridge (the i7-3x00 processors), which are currently easy to find at retail. But… SH67H3 v1.0 motherboards don’t support Ivy Bridge. And that’s what was shipping when I bought mine in early 2012.

I found an unbelievable deal on a second SH67H3 open (missing) box at Fry’s in February 2013… let’s just say I spent more on a basic Pentium chip to test it with than I did on the chassis itself. But alas, the second one also had a v1.0 motherboard.

Let’s make the ivy (bridge) grow!

I found sources on the Internets that said a v2.0 board supporting Ivy Bridge was out. I further discovered that Shuttle would trade in your v1.0 board for a v2.0 board for $40. Instructions here at Cinlor Tech’s blog if you’re interested in doing this yourself. Note that you can request the case number through Shuttle’s web-email portal if you prefer this to calling. That’s what I did.

I shipped off my two boards in a medium Priority Mail box to Shuttle on the 26th. On the 29th I got confirmation of the return shipment. They should be at my door on April 2nd. I’ll be reinstalling them, and at some point upgrading both to i7-3770S processors.

Waitasec, 2600 “S”? 3770 “S”? What’s this all about, then?

Yes, that’s correct. I chose to go with a low power version of the i7-2600 processor a year and change ago. The i7-2600S has a lower base speed than the 2600 or 2600K (unlocked version), 2.8GHz vs 3.4GHz. All three support Turbo Boost to 3.8GHz, though. And the i7-2600S is 65W where the others are 95W.

(Here’s a comparison chart of the three i7-2600 and three i7-3770 processor options via Intel, if you’re curious.)

The other noteworthy differences are on the 2600K, which costs $20 more but does not support VT-d (directed I/O), vPro management features, or Trusted Execution. VT-d is the only feature of particular concern when you’re building a virtualization lab, though. (I’ll admit the VT-d was an accidental discovery; I chose the 2600S more for power savings than anything else.) If you’re building a desktop, the “K” model has HD3000 graphics vs HD2000 for the other two chips, by the way.
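Since power savings drove the choice, here’s a rough back-of-the-envelope sketch of what the 65W vs 95W TDP difference might mean on the power bill. The duty cycle and electricity rate below are assumptions of mine, not measurements:

```python
# Back-of-the-envelope power savings for the "S" parts.
# Assumptions (mine, not from any spec sheet): average draw at 40% of
# TDP, running 24x7, at a hypothetical $0.15/kWh electricity rate.

def annual_cost(tdp_watts, duty=0.40, rate_per_kwh=0.15, hours=24 * 365):
    """Rough yearly electricity cost for one CPU at the given TDP."""
    kwh = tdp_watts * duty * hours / 1000.0
    return kwh * rate_per_kwh

cost_s = annual_cost(65)    # i7-2600S / i7-3770S
cost_std = annual_cost(95)  # i7-2600 / i7-2600K

print(f"65W part: ${cost_s:.2f}/yr, 95W part: ${cost_std:.2f}/yr, "
      f"delta: ${cost_std - cost_s:.2f}/yr")
```

With these numbers the delta is only $15 or so a year per box, so for a lab that runs part-time the “S” premium is more about heat and noise in an apartment than about the bill.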

Now that I’m building a second box, I find that my usual local retail sources don’t have the i7-2600s in stock anymore. I could order one on eBay or maybe find it at Fry’s, but for about the same price I could get the Ivy Bridge version and be slightly future-proofed. Once again, the “S” is the way to go.

The 3770 series run at 3.1GHz (“S”), 3.4GHz (3770), and 3.5GHz (“K”) base speeds, all Turbo capable to 3.9GHz. The “S” processor is again 65W, vs 77W for the other two chips. They all have Intel’s HD4000 integrated graphics and the newer PCIe 3.0 support. They support 1600MHz RAM speeds, vs a 1333MHz top for the previous generation. The “K” processor lacks VT-d, vPro, and Trusted Execution, and carries a nearly $40 premium over the other two chips.

All six of these chips have VT-x including extended page tables (EPT/SLAT), Hyper-Threading, and Enhanced SpeedStep. And they’re all 4-core/8-thread processors supporting 32GB of RAM, which makes a great basis for a virtualization environment.
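If you want to double-check those flags on a box you already own before repurposing it for the lab, a quick sketch like this works. The sample flags string below is made up for illustration; on a real Linux machine you’d read the flags line from /proc/cpuinfo instead:

```python
# Quick sanity check for the virtualization features discussed above.
# On a real Linux box, read the flags line from /proc/cpuinfo; the
# sample string below is illustrative, not from any actual chip.

def virt_features(flags_line):
    """Return which virtualization-related CPU flags are present."""
    flags = set(flags_line.split())
    return {
        "VT-x": "vmx" in flags,       # hardware virtualization
        "EPT/SLAT": "ept" in flags,   # extended page tables
        "Hyper-Threading": "ht" in flags,
        "SpeedStep": "est" in flags,  # enhanced SpeedStep
    }

# e.g. flags_line from open("/proc/cpuinfo") on Linux
sample = "fpu vme de pse msr pae mce ht est vmx ept sse2"
print(virt_features(sample))
```

Note that a missing `vmx` can also mean VT-x is simply disabled in the BIOS, so check there before blaming the chip.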

So what’s next, Robert?

Well, with two matching machines, I’ll be basically starting from scratch. Time to upgrade the N40L Microserver NAS box to 16GB (Thanks Chris for finding this too!) and probably splitting off a distinct physical storage network for that purpose.

But now, thanks to Marco Broeken’s recent lab rebuild, I’ve been introduced to Intel’s Next Unit of Computing (NUC), so tune in soon for my experience with my first NUC system. Sneak peek of the ESXi splash screen and the actual unit here… stay tuned!

rsts11: Building my compact VMware server at home

About a year ago I bought a homebuilt Intel Core i7 (1st generation) desktop from a friend to run VMware ESXi on. He had gone to the trouble of assembling the system with a beautiful Gigabyte motherboard, and getting 4.1 to run on it, and I got a good deal on the system with 6GB of RAM and a 2TB hard drive.

I upgraded to 12GB, then to 24GB, but never put it into use.

Two months ago, I started it up and ran some computationally intensive software on it and discovered it was munching 320W. And it’s a mid-tower size case. Somewhat unwieldy for an apartment with a few other computers already running, and a significant other who doesn’t appreciate a living room that resembles a small colo.

It gets… smaller…

About that time, I think it was Kendrick Coleman who mentioned a new Shuttle barebones XPC system, the SH67H3, that in typical XPC form factor supported a second generation i7 processor and 32GB of RAM. Four slots of DDR3. Problem was threefold.

1) Shuttle on the VMware HCL? Unlikely.

1a) Onboard LAN and SATA controllers supported? Almost as unlikely.

2) 8GB DIMMs were expensive. And how could I in clear conscience run a system capable of 32GB with just 16GB of RAM?

3) Have you seen my holiday credit card bill?

So I was willing to risk 1a, live with 1 (as I’m not buying support or expecting it), and wait out 2 until memory prices came down.

Once 3 was resolved, I emptied my wallet into the cash register at Central Computers and bought the SH67H3 barebones XPC, and an i7-2600s (low power) processor. I had a pair of 2GB DDR3 DIMMs to use until I could upgrade, so I went about installing. I hung a SATA DVD drive off the system and installed ESXi 5 to the flash drive, and all went well.

Well, not quite.

Turned out one of the two DIMMs was bad, keeping the Shuttle from taking off, so to speak. Brief monitor sync and then it went out of sync, no beeps, no signs. I tried one DIMM, it worked; tried the other, it didn’t. Swapping the DIMM slots didn’t help. So I booted with one DIMM, 2GB, the minimum to run the ESXi installer.

No dice.

Turns out system reserved memory and/or shared video RAM managed to pull me under 2GB, and the installer quit on me.

So I realized I had six 4GB DIMMs in the old VMware box, and I pulled two to get the Shuttle system going. Bueno. Just short of 10GB and it installed pretty well. The Shuttle disk and network were supported under ESXi 5.0.0 without any additional effort.

It got… better…

By the time this happened, I found some 8GB DDR3 DIMMs on Amazon from Komputerbay. These were not on the Shuttle compatibility list, but they were less than half the price, so I took a calculated risk. I’ve bought memory from them before, for the last ESXi server I built (at my last job), so I was willing to try out a pair. The memory was $58/stick, and I paid $10 for expedited shipping (twice, as I bought two pairs separately just in case). They worked fine, survived memtest86+, and made me happy.

I added a 4GB Onyx flash drive from Maxell, a very low profile drive that hides on the back of the system, to install the hypervisor onto. (Picture shows it in an extension USB pod, to show how much it sticks out. It actually fits in a regular soda bottle cap.)

For disk storage, I put a four-drive SATA enclosure in the 5.25″ half-height bay, and occupied the two SATA3 and the two SATA2 ports on the motherboard. The first bay got a 50GB SATA2 SSD I had on hand, for the initial datastore, and the second has a 500GB 7200RPM SATA disk.

I’m almost embarrassed to admit that the first VM I built on this system was Windows 7 Professional, but it was. And it worked pretty well.

Then the little one spilled a handful of change behind an electric plug and blew up the circuit breaker, while I was away from home… so it’s been on hold for a little while.

What’s in the box?

I bought the following new:

  • Shuttle SH67H3 barebones ($240 at Amazon)
  • Intel Core i7-2600S processor, retail box ($300)
  • 4x Komputerbay 8GB DDR3 RAM ($53 per stick, $212 total)
  • Four-drive 2.5″ SATA cage ($71)
  • Intel PCIe x1 Gigabit Ethernet adapter ($40)

The following came from stock.

  • 4GB Maxell Onyx flash drive ($9)
  • 50GB OCZ Vertex 2 SSD ($126, much more when I bought it)
  • 500GB 7200RPM SATA drive ($120 today, much less when I bought it)

So to build the whole mess today, I’d pay about $1,118 plus tax and sometimes shipping.

What’s next, Rob?

Well, I’m going to be a bit limited by 4 2.5″ drive bays, although I will probably put some more drives in there. I have some 32GB SSDs that are gathering dust, and a couple of 500GB disks, so we’ll see how that goes. The Patriot Pyro SSDs are coming down in price (after rebate) at the local Fry’s store, so maybe I’ll make use of the SATA3 channels.

But for now, my next step is going to be a home NAS (that I threatened to do a while back) starting from an HP N40L Microserver. The Microserver, and its 8GB of DDR3 ECC RAM, came in last month. FreeNAS 8 is currently running on this system, with an internal USB flash drive, although I’m tempted by OpenFiler’s ability to serve as a Fibre Channel target.

I will probably put the 8GB of RAM back into the mid-tower VMware box and use it as a second node, put some multiport cards into both ESXi servers, and power up a Summit 400-48T switch for the backbone of my virtualization network. I’m still watching for absurdly affordable PCIe 10Gb Ethernet cards (since my Summit 400 has two 10GbE ports), but all I have for now is PCI-X, and only one of the three involved machines even has PCI.

I also now have a second location for lab equipment, as you may see in my write-up of the new store I’m starting. So the old desktop, and probably a Fibre Channel-enabled OpenFiler on a small SAN, will go over there. I can replicate across a 20ms latency link once, and have a pretty valid test environment for anything I’m likely to do.

Random thoughts

The onboard LAN (Realtek RTL8111E Gigabit Ethernet) and SATA on the SH67H3 were supported out of the box; no oem.tgz tweaking needed. I had an Apricorn PCIe SSD/SATA3 controller that I plugged in with the SSD, but it wasn’t recognized by ESXi, so I went forward with the drive bay options.

I haven’t tried the SATA RAID on this system. I wouldn’t expect it to be supported, and I’d be inclined to use FreeNAS or OpenFiler or NexentaStor Community Edition to handle resilience, rather than the onboard RAID. If I get a chance, I’ll configure a pair of disks under the onboard RAID just to see how it works, or if it works. But it’s not a long-term prospect for this project.

Other people doing similar stuff

My friend Chris Wahl just put together his home whitebox systems. He went a bit more serverward, and he’s going with a pre-constructed NAS from Synology (which was tempting for me).

Kendrick Coleman wrote about his “Green Machines” project for his lab, and has built out a bit more (and put a bit more detail into his shipping list).

Simon Gallagher of vinf.net fame is well known in Europe for his vTARDIS projects, virtualizing virtualization inside virtual machines. Or as Nyssa said in Castrovalva, it’s when procedures fold back on themselves. I was reading about this, and doing a little bit of it on a quad core desktop at my last job, so I think he gets credit for my thinking about this scale of virtualization in the first place.

Home NAS adventures, part 1

There’s a little Dell Optiplex SX280 next to my main desktop monitors, currently unplugged but with two external USB drives on it. It’s running Windows Home Server, but it’s usually off. I haven’t had the motivation to fix the issues with it, mostly from having a small primary drive and a shortage of table space. On top of that, we’re expecting that Microsoft will remove the Drive Extender feature from WHS in the new release, so I want something a bit more easily usable into the future.

So I’ve been pondering, for about six months, a home NAS to replace it, and maybe expand on it. So many options, from roll-your-own to hosting off my overpowered desktop to a purpose-built commercial appliance. I’ve promised a couple of BayLISA attendees such ponderings in blog form, so I’m finally getting around to it.

First requirement: Figure out the requirements

If all you need is removable external backups, a USB drive is probably good enough. If you want to back up your VMware servers and feed your TiVo or smart TV, you need more. If you need 100k IOPS and N+2 redundancy, you need a different article.

Also, think about your budget. Sure, those pictures of your grandparents getting creative with Bob Marley are irreplaceable, but for the sake of moderation, think about what you can afford for the first generation of your home NAS. You can re-do it later when a new round of technology comes out, so don’t think of it as a permanent thing.

A model railroad buff I took some clinics with in the 90s referred to “givens & druthers.” You had your givens (this much space for your layout, this much money, your eyes able to see this small of a scale) which were not necessarily a unilateral choice, and your druthers (model the entire Florida East Coast Railroad with enough yard space to hold your 500 handpainted cabooses), and you’d decide how they best fit together and compromise accordingly.

So for my givens, I’ve decided that about $500 (plus disks) is my budget. I need gigabit Ethernet connectivity, independence from any particular PC I currently use, Windows/Mac/Linux connectivity on some level (CIFS is okay for this project), and a 5TB usable minimum. I am not stuck on hot-swappable disks, so they are not a requirement. I have two new-in-box 2TB 5900rpm “green” disks, and an unused 3TB USB3 external drive, that I’m willing to donate to the project if warranted. And these green disks (with 5yr warranty) are well under $100 so I can add more.
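To sanity-check that 5TB usable minimum against disk counts, here’s my own rough math. It assumes single-parity RAID5-style redundancy and vendor terabytes, and ignores filesystem overhead, so real numbers will come in lower:

```python
# Rough usable-capacity math for the "5TB usable minimum" above.
# Assumptions (mine): single-parity RAID5-style layout, vendor
# terabytes (10^12 bytes), no filesystem overhead.

def usable_tb(disk_tb, count, parity_disks=1):
    """Usable capacity in TB after subtracting parity disks."""
    return disk_tb * (count - parity_disks)

# How many 2TB drives does it take to clear 5TB usable?
for n in range(3, 7):
    print(f"{n} x 2TB -> {usable_tb(2, n)}TB usable")
```

By this math, four 2TB drives (6TB usable) is the smallest single-parity layout that clears the 5TB bar, which conveniently matches the 4-bay appliances below.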

My druthers would be iTunes streaming, TiVo integration (although I have a Premiere XL and can expand it locally), bittorrent client, automatic backups (a feature I loved with the Windows Home Server), hot-swappable and auto-growable disks, and future expandability in number of disks, not just size of disks. I’d also like iSCSI and/or NFS for VMware backing stores.

Second step: Consider the top-level options

We have two options for this project, if we assume that hanging a pile of external disks off our PC is off the table.

* Build a PC with a lot of disks, to serve as a NAS appliance

* Buy a prebuilt custom NAS appliance

If you choose option 1, you get a lot more flexibility, lower up-front cost, and more expandability, but you have to consider the time factor, and software upgrades, and “support” if you’re so inclined.

If you choose option 2, you probably pay more up front, but you put less time into building/testing, you have someone to go to when something breaks or needs updating, and there are probably smoother interfaces to some of the features you want. Many prebuilt custom NAS appliances allow automatic hands-off disk expansion (other than plugging in the disks physically), so this is an appealing factor to some.

Prebuilt Custom NASes

We’ll start with this one, as it’s a bit more limited in scope.

When I started looking at a home NAS, the Infrant ReadyNAS NV was the top dog. This was probably 2007ish. I actually got a Thecus box for free on Craigslist, but I found the expansion to be painful (the system became unavailable for an entire day to upgrade from 4×160 to 4×320 hard drives) so I sold it and went back to USB drives for a while.

In the meantime, Synology and Drobo have both come out with a couple of generations of gear, Infrant was purchased by Netgear and has grown its product line as well, and there are a lot of smaller products like the BlackArmor line and some single-drive network-attached options from Seagate and WD.

Let’s look at current products that meet the requirements above.

Drobo has the Drobo FS, which offers five drive bays, single Gigabit Ethernet, Drobo PC Backup, Time Machine support for Mac, automatic rebuilding and capacity growing as new drives become available. List price is $699, but it can be had for a fair bit less from various authorized resellers, and rebates are often available as well. As of this writing, Amazon has it for about $560 and Drobo offers a $100 MIR bringing you under $500.

Synology has the DS411 product range, with the DS411 itself probably most closely matching the FS and the wish lists above. We have four drive bays, single Gigabit Ethernet with USB 2.0 and eSATA ports for expansion, a number of A/V options including iTunes, DLNA, and BitTorrent, and a range of access protocols including iSCSI and NFS. List price on the Synology America store is $440, and discounts are available.

Netgear has the ReadyNAS Ultra and Ultra Plus lines, with a more powerful processor in the Plus at a slight premium. The Ultra 4 (RNDU4000) is a suitable match, with four drive bays, dual Gigabit Ethernet, three USB ports for expansion and backup, three Memeo licenses for backup, iSCSI, DLNA/TiVo support, and a list price of $699. Discounts bring it to right around $500.

If you’re on a really tight budget, the ReadyNAS NV+ is still available. It’s a 4-drive unit with single Gigabit Ethernet and three USB ports, but it has a significant limitation that the others do not. As the older SPARC-based product line, the ReadyNAS NV+ is still getting firmware updates (as recently as last month), but drive support is limited to 2TB max per disk, and what I’ve read suggests this will probably not change. However, with a list price of $350 and much lower prices available at some retailers, it may be a good first step for some. If the limitation of 5.5TB or so is acceptable, you should consider this option.

But wait, there’s more…

In the next installment I’ll be looking at the options for a roll-your-own NAS, doing it myself with some parts around the house (to make sure I’m suggesting hardware and software that work together).

I’ll also be reporting back on the ReadyNAS NV+, which I learned this morning has third-party iSCSI support, and which I picked up this afternoon for $250 at Fry’s.

I do still expect to get a more modern device before Thanksgiving, but now I can ponder and save up a bit more, and maybe move up to a 6-8 drive device.

I welcome your thoughts on the above, suggestions regarding any of the products or anything I’ve forgotten (Iomega maybe?), etc. And feel free to use my links liberally to help fund my future home gear.