System Build Report: A Xeon-D “HTPC” for FreeNAS Corral or VMware vSphere

I’ve been planning to do some network testing and deploy some new storage for VMware vSphere in the home lab. My Synology NAS boxes (DS1513+ and DS1813+) have good performance but are limited to four 1GbE ports each, and my budget won’t allow a 10GbE-capable Synology this spring. [See below for a note on those $499 Synology systems on Amazon.]

I’ve looked at clouds from both sides now – are they just vapor?

For those of you not of a certain age… a bit of a soundtrack for this post.


I wrote last month about the “antsle” “personal cloud server,” and a few people on Minds had a brisk but respectful debate over whether it was cloud, and whether there was more to cloud than cloud storage (i.e. Dropbox, Box, Owncloud, OneDrive, Sugarsync, etc).

It got me to thinking about how I’d define “cloud” and why others feel differently. So here’s a bit of a soft-topic consideration for you along the way.

I was first exposed to the buzzword around 2009, when a major PC and IT gear reseller from the Midwest was trying to convince me on every call and email thread that I should buy The Cloud(tm). My rep never could tell me why, or what problem it would solve, a common shortcoming of quota-bound sales reps. I think the closest to a justification I ever got was “Just give it a try, you’ll be able to tell where you can use it.” And I didn’t.

As the current decade rolled along, anyone running the server side of a client/server model called themselves The Cloud(tm). And of course, Amazon Web Services and other players came along to give their own definitions and market shares to the matter.

Today, at its most extreme interpretation, anything not running in focus on your current personal device is probably considered “cloud” by someone. And to be fair to antsle, that’s where they fit, in a way.

First look: Checking out the “antsle” personal cloud server

Most of you know I don’t shy away from building (or refurbishing) my own computers. I used to draw the line at laptops, but in the last couple of years I’ve even rebuilt a few stripped-for-parts Dell and Toshiba laptops for the fun of it. Warped definition of “fun,” I’ll admit.

So when I saw a Facebook ad for a “cloud server” called “antsle,” I was curious but unconvinced.

The idea is that you’re buying a compact, fanless, silent microserver that, in addition to some fault-tolerant hardware (mirrored SSDs, ECC RAM), includes a proprietary user interface for managing and monitoring containers and virtual machines. You can cram up to 64GB of RAM in there, and while it only holds two internal drives, you can add more via USB 2.0 or USB 3.0, for up to 16TB of officially supported capacity. Not too bad, but I’ve been known to be cheap and/or resourceful, so I priced out a similar configuration assuming I’d build it myself.

Overkill in the rsts11 lab workshop – a homelab update for 2017

After being chosen as a VMware vExpert for 2017 this month, I was inspired to get working on refreshing my vSphere “homelab” environment despite a busy travel month in late February/early March. This won’t be a deep technical dive into lab building; rather, I just wanted to share some ideas and adventures from my lab gear accumulation over the past year.

As a disclosure, while I do work for Cisco, my vExpert status and homelab building are at most peripherally connected (the homelab at home connects to a Meraki switch whose license I get an employee discount on, for example). And even though some people are occasionally surprised that I use older, higher-end Dell or HP gear, it’s not a conflict of interest or an out-of-bounds effort. It’s just what I get a great deal on at local used hardware shops from time to time.

The legacy lab at Andromedary HQ

Also read: New Hardware thoughts for home labs (Winter 2013)

C6100

Stock Photo of a Dell C6100 chassis

During my last months at the Mickey Mouse Operation, I picked up a Dell C6100 chassis (dual twin-style, Xeon blade-ish servers) with two XS23-TY3 servers inside. I put a Brocade BR-1020 dual-port 10GbE CNA in each, and cabled them to a Cisco Small Business SG500XG-8F8T 10 Gigabit switch. A standalone VMware instance on my HP Microserver N40L hosted the vCenter instance and some local storage. For shared storage, the Synology DS1513+ served for about two years before being moved back to my home office for maintenance.

The Dell boxes have been up for almost three years–not bad considering they share a 750VA “office” UPS with the Microserver and the 10Gig Switch and usually a monitor and occasionally an air cleaner. The Microserver was misbehaving, stuck on boot for who knows how long, but with a power cycle it came back up.

I will be upgrading these boxes to vSphere 6.5.0 in the next month, and replacing the NAS for shared storage throughout the workshop network.

The 2017 Lab Gear Upgrades

For 2017, two new instances are being deployed; each will probably run nested ESXi or a purpose-built single-server instance (e.g. an upcoming big data sandbox project). The two hardware instances each have a fair number of DIMM slots and more than one socket, and the initial purchase for each came in under US$200 before upgrades/population.

You may not be able to find these exact boxes on demand, but there are usually similar-scale machines available at Weird Stuff in Sunnyvale for well under $500. Mind you, maxing them out will require very skilled hunting or at least a four-figure budget.


CPU/RAM cage in the HP Z800

First, the home box is an HP Z800 workstation. It started as a single-processor E5530 system with 6GB RAM; I’ve upgraded it to dual E5645 processors (6-core 2.4GHz with 12MB SmartCache) and 192GB of DDR3 ECC registered RAM, replaced the 750GB spinning disk with a 500GB SSD, and added two 4TB SAS drives as secondary storage. I’ve also put in an Intel X520 single-port 10GbE card to connect to an SFP+ port on the Meraki MS42P switch at home, and there are two Gigabit Ethernet ports on the board.


CPU/RAM cage in Intel R2208 chassis

And second, the new shop box is an Intel R2208LT2 server system. This is a 2RU four-socket E5-4600 v1/v2 server with 48 DIMM slots supporting up to 1.5TB of RAM, eight 2.5″ hot-swap drive bays, and dual 10GbE on board in the form of an X540 10GBase-T dual-port controller. I bought the box with no CPUs or RAM, and have installed four E5-4640 (v1) processors and 32GB of RAM so far. There’s more to come, since 1GB/core seems a bit spartan for this kind of server.
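
For the curious, the "spartan" comment is just arithmetic: the E5-4640 (v1) is an 8-core part, so four sockets give 32 physical cores against 32GB of RAM. Here's a throwaway sketch of the math (the 8-cores-per-socket figure is from Intel's published specs; the 32GB DIMM size in the maxed-out case is my assumption):

```python
# Back-of-the-envelope RAM-per-core math for the R2208LT2 build.
# Assumption: the E5-4640 (v1) has 8 physical cores per socket.

def ram_per_core_gb(sockets: int, cores_per_socket: int, ram_gb: int) -> float:
    """Installed RAM (GB) divided by total physical cores."""
    return ram_gb / (sockets * cores_per_socket)

# As built today: four E5-4640s and 32GB of RAM
print(ram_per_core_gb(4, 8, 32))    # 1.0 GB/core -- spartan indeed

# Maxed out: 48 DIMM slots at 32GB each = 1.5TB (1536GB)
print(ram_per_core_gb(4, 8, 1536))  # 48.0 GB/core
```

Even a modest pile of 8GB DIMMs would move that first number into more comfortable territory for nested ESXi work.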

There’s a dual 10GbE SFP+ I/O module on its way, and this board can take two such modules (or dual 10GBase-T or quad Gigabit Ethernet or single/dual Infiniband FDR interfaces).

The Z800 is an impressively quiet system–the fans on my Dell XPS 15 laptops run louder than the Z800’s under modest use. But by comparison, the Intel R2208LT2 sounds like a Sun Enterprise 450 server when it starts up… 11 high-speed fans warming up for POST can be pretty noisy.

So where do we go from here?

Travel and speaking engagements are starting to pick up a bit, but I’ve been putting in some weekend time between trips to get things going. Deploying vSphere 6.x on the legacy lab as well as the new machines, and setting up the SAN and DR/BC gear, will be spring priorities. We’ll probably also be getting rid of some of the older gear (upgrading the standalone vCenter box from the N40L to the N54L, for example, or perhaps moving it to one of the older NUCs to save space and power).

I also have some more tiny form factor machines to plug in and write up–my theory is that there should be no reason you can’t carry a vSphere system anywhere you go, with a budget not too far above a regular-processor-endowed laptop. And if you have the time and energy, you can do a monster system for less than a high-end ultrabook.

 

Disclosure: Links to non-current products are eBay Partner Network links; links to current products are Amazon affiliate links. In either case, if you purchase through links on this post, we may receive a small commission to pour back into the lab.

 

What planet are we on? (The Third) — the RSTS11 Interop preview

Greetings from Fabulous Las Vegas, Nevada. For the third year, with apologies to Men Without Hats, I’m back in the Mandalay Bay Convention Center for Interop. 

This week, I’m actually a man without hats as well. My Big Data Safari hat is in my home office, and my virtual Cisco ears are back at home as well, next to the VPN router that was powered down before I headed for the airport. (Alas, after moving from Disney to Cisco, I lost the theme park discounts and the epic mascot reference.)

What are you up to at Interop this year, Robert?

So why am I back at Interop, when a dozen conference calls a day could have been in the cards for me this week? 

My readers, my fans, and my groupie all know that I’ve been a fan of the Psycho Overkill Home Office (POHO) for quite a while, going back to when I had a 19-server, 5-architecture environment with a 3-vendor network in my spare bedroom. Today it’s about 12 servers, all x64 (Shuttle, Intel, Cisco, Supermicro, Dell, and maybe another secret brand or two), and technically a 5-vendor network, but the idea is similar enough.

And having built a couple of startups up from the under-the-desk model to a scalable, sustainable production-grade infrastructure, the overkill in my home office and labs has led to efficient and effective environments in my workplaces. 

This week I’m taking a break from my usual big data evangelism and the identity aspects of working for a huge multinational juggernaut. It’s a bit of a relief, to be honest; earlier this month I attended my first event in 10 months as a non-booth-babe, and now I’m getting to focus on my more traditional interests. 

What’s on the agenda this week?

I’m looking forward to return visits to the folks at SanDisk, Opengear, and Cradlepoint. Cradlepoint was the first interview I did two years ago at Interop 2013, and I’ve been a customer on my own dime for many years; Opengear was a presenter at Tech Field Day Extra at Cisco Live 2013; and I last talked with SanDisk at Storage Field Day 5 about a year ago, as well as having been a Fusion-io customer at a previous job.

I have a couple of other meeting requests out, so we may hear from a couple of other POHO/SOHO/ROBO/lab staples, and I’ll at least be dropping by their booths in the Interop Expo to see what’s new. 

While I’m only recording this week for notetaking convenience, I am starting to ponder what to do about the podcast I’ve been thinking about for a couple of years. So maybe I can pull in some interesting people from time to time… last night’s conversation over Burger Bar shakes with Chris Wahl and Howard Marks probably would have been fodder for several podcasts alone (and I don’t think any of us even had any alcohol!).

And seeing as a number of my friends are presenting this year, including Chris and Howard, I’ll be trying to make my way to their sessions (although there’s a LOT of overlap, and triple-booking isn’t uncommon… there’s a lot more than the Expo floor to experience at Interop, as always).

So where do we go from here?

If you’re at Interop, who are you looking forward to seeing/hearing/heckling/buying drinks for? (And if you’d like to meet up, catch me on Twitter at @gallifreyan.) If not, check out the exhibitor list at interop.com/lasvegas and let me know who you are curious about on that list.