System Build Report: A Xeon-D “HTPC” for FreeNAS Corral or VMware vSphere

I’ve been planning to do some network testing and deploy some new storage for VMware vSphere in the home lab. My Synology NAS boxes (DS1513+ and DS1813+) have good performance but are limited to four 1GbE ports each, and my budget won’t allow a 10GbE-capable Synology this spring. [See below for a note on those $499 Synology systems on Amazon.]

I’ve looked at clouds from both sides now – are they just vapor?

For those of you not of a certain age… a bit of a soundtrack for this post.


I wrote last month about the “antsle” “personal cloud server,” and a few people on Minds had a brisk but respectful debate over whether it was cloud, and whether there was more to cloud than cloud storage (e.g. Dropbox, Box, ownCloud, OneDrive, SugarSync, etc.).

It got me to thinking about how I’d define “cloud” and why others feel differently. So here’s a bit of a soft-topic consideration for you along the way.

I was first exposed to the buzzword around 2009, when a major PC and IT gear reseller from the Midwest was trying to convince me on every call and email thread that I should buy The Cloud(tm). My rep never could tell me why, or what problem it would solve, a common shortcoming of quota-bound sales reps. I think the closest to a justification I ever got was “Just give it a try, you’ll be able to tell where you can use it.” And I didn’t.

As the current decade rolled along, anyone running the server side of a client/server model called themselves The Cloud(tm). And of course, Amazon Web Services and other players came along to stake out their own definitions of, and market shares in, the matter.

Today, in its most extreme interpretation, anything not running in focus on your current personal device is probably considered “cloud” by someone. And to be fair to antsle, that’s where they fit, in a way.

First look: Checking out the “antsle” personal cloud server

Most of you know I don’t shy away from building (or refurbishing) my own computers. I used to draw the line at laptops, but in the last couple of years I’ve even rebuilt a few stripped-for-parts Dell and Toshiba laptops for the fun of it. Warped definition of “fun,” I’ll admit.

So when I saw a Facebook ad for a “cloud server” called “antsle,” I was curious but unconvinced. The pitch was something like this:

The idea is that you’re buying a compact, fanless, silent microserver that, in addition to some fault-tolerant hardware (mirrored SSDs, ECC RAM), includes a proprietary user interface for managing and monitoring containers and virtual machines. You can cram up to 64GB of RAM in there, and while it only holds two internal drives, you can add more via USB 2.0 or USB 3.0, for up to 16TB of officially supported capacity. Not too bad, but I’ve been known to be cheap and/or resourceful, so I priced out a similar configuration assuming I’d build it myself.

Overkill in the rsts11 lab workshop – a homelab update for 2017

After being chosen as a VMware vExpert for 2017 this month, I was inspired to get working on refreshing my vSphere “homelab” environment despite a busy travel month in late February/early March. This won’t be a deep technical dive into lab building; rather, I just wanted to share some ideas and adventures from my lab gear accumulation over the past year.

As a disclosure, while I do work for Cisco, my vExpert status and homelab building are at most peripherally connected (the homelab at home connects to a Meraki switch whose license I get an employee discount on, for example). And even though I’m occasionally surprised when I use older higher-end Dell or HP gear, it’s not a conflict of interest or an out-of-bounds effort. It’s just what I get a great deal on at local used-hardware shops from time to time.

The legacy lab at Andromedary HQ

Also read: New Hardware thoughts for home labs (Winter 2013)

[Stock photo of a Dell C6100 chassis]

During my last months at the Mickey Mouse Operation, I picked up a Dell C6100 chassis (dual twin-style Xeon blade-ish servers) with two XS23-TY3 nodes inside. I put a Brocade BR-1020 dual-port 10GbE CNA in each, and cabled them to a Cisco Small Business SG500XG-8F8T 10 Gigabit switch. A standalone ESXi instance on my HP MicroServer N40L hosted the vCenter server and some local storage. For shared storage, the Synology DS1513+ served for about two years before being moved back to my home office for maintenance.

The Dell boxes have been up for almost three years – not bad considering they share a 750VA “office” UPS with the MicroServer, the 10Gig switch, usually a monitor, and occasionally an air cleaner. The MicroServer had been misbehaving, stuck on boot for who knows how long, but with a power cycle it came back up.

I will be upgrading these boxes to vSphere 6.5.0 in the next month, and replacing the NAS for shared storage throughout the workshop network.
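
Before and after an upgrade like that, it’s handy to confirm what each host is actually running. Below is a minimal sketch using pyVmomi (the Python SDK for the vSphere API) that asks vCenter for every host’s ESXi version. The vCenter hostname and credentials are placeholders for my lab, and skipping certificate validation is a lab-only shortcut for the appliance’s self-signed cert.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Lab-only shortcut: the vCenter appliance ships with a self-signed
    # certificate, so skip validation here. Don't do this in production.
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE

    # Placeholder hostname and credentials -- substitute your own.
    si = SmartConnect(host="vcenter.lab.example",
                      user="administrator@vsphere.local",
                      pwd="********", sslContext=context)
    try:
        content = si.RetrieveContent()
        # Walk the whole inventory for HostSystem objects.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            # fullName reads something like "VMware ESXi 6.5.0 build-4564106"
            print(host.name, "->", host.config.product.fullName)
    finally:
        Disconnect(si)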

The 2017 Lab Gear Upgrades

For 2017, two new instances are being deployed; they will probably run nested ESXi or a purpose-built single-server instance (e.g. an upcoming big data sandbox project). The two hardware instances each have a fair number of DIMM slots and more than one socket, and the initial purchase for each came in under US$200 before upgrades/population.
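
If the nested ESXi route wins out, the one setting that can’t be skipped is exposing hardware-assisted virtualization to the guest VM; without it, the nested ESXi guest can’t power on 64-bit VMs of its own. Here’s a sketch of flipping that flag with pyVmomi, assuming a service-instance connection like the si in the earlier snippet; the VM name is hypothetical.

    from pyVmomi import vim

    def enable_nested_hv(si, vm_name):
        """Expose hardware-assisted virtualization to a VM so it can run ESXi.

        Equivalent to ticking "Expose hardware assisted virtualization to
        the guest OS" in the vSphere client. The VM must be powered off.
        """
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        # Find the VM by name (raises StopIteration if it doesn't exist).
        vm = next(v for v in view.view if v.name == vm_name)
        spec = vim.vm.ConfigSpec(nestedHVEnabled=True)
        return vm.ReconfigVM_Task(spec=spec)

    # Hypothetical nested host name:
    # enable_nested_hv(si, "esxi-nested-01")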

You may not be able to find these exact boxes on demand, but there are usually similar-scale machines available at Weird Stuff in Sunnyvale for well under $500. Mind you, maxing them out will require very skilled hunting or at least a four-figure budget.

[CPU/RAM cage in the HP Z800]

First, the home box is an HP Z800 workstation. It started life as a single-processor E5530 workstation with 6GB of RAM; I’ve upgraded it to dual E5645 processors (6-core 2.4GHz with 12MB SmartCache) and 192GB of DDR3 ECC registered RAM, replaced the 750GB spinning disk with a 500GB SSD, and added two 4TB SAS drives as secondary storage. I’ve put in an Intel X520 single-port 10GbE card to connect to an SFP+ port on the Meraki MS42P switch at home, and there are two Gigabit Ethernet ports on the board.

[CPU/RAM cage in the Intel R2208 chassis]

And second, the new shop box is an Intel R2208LT2 server system. This is a 2RU four-socket E5-4600 v1/v2 server with 48 DIMM slots supporting up to 1.5TB of RAM, eight 2.5″ hot-swap drive bays, and dual 10GbE on board in the form of an X540 10GBase-T dual-port controller. I bought the box with no CPUs or RAM, and have installed four E5-4640 (v1) processors and 32GB of RAM so far. There’s more to come: the four 8-core E5-4640s give 32 cores, so 32GB works out to just 1GB per core, which seems a bit Spartan for this kind of server.

There’s a dual 10GbE SFP+ I/O module on its way, and this board can take two such modules (or dual 10GBase-T, quad Gigabit Ethernet, or single/dual InfiniBand FDR interfaces).

The Z800 is an impressively quiet system – the fans on my Dell XPS 15 laptops run louder than the Z800 does under modest use. By comparison, the Intel R2208LT2 sounds like a Sun Enterprise 450 server when it starts up… 11 high-speed fans spinning up for POST can be pretty noisy.

So where do we go from here?

Travel and speaking engagements are starting to pick up a bit, but I’ve been putting some weekend time in between trips to get things going. Deploying vSphere 6.x on the legacy lab as well as the new machines, and setting up the SAN and DR/BC gear, will be spring priorities, and we’ll probably be getting rid of some of the older gear (upgrading the standalone vCenter box from N40L to N54L for example, or perhaps moving it to one of the older NUCs to save space and power).

I also have some more tiny form factor machines to plug in and write up – my theory is that there should be no reason you can’t carry a vSphere system anywhere you go, on a budget not too far above that of a laptop with a regular processor. And if you have the time and energy, you can build a monster system for less than a high-end ultrabook.

 

Disclosure: Links to non-current products are eBay Partner Network links; links to current products are Amazon affiliate links. In either case, if you purchase through links on this post, we may receive a small commission to pour back into the lab.

 

Internet on the Road, part 2 – how to optimize your travel connectivity

rsts11 note: This is the second of a two-part series featuring mobile internet routers. The first part is posted over on rsts11travel.com, as it covers a bit milder technology. The second part appears on #rsts11 since it’s a bit more POHO than random travel, and will be cross-promoted on the travel side.

When you travel, you probably have a number of devices that demand connectivity.

Many venues limit the number of devices you can connect, and maybe you don’t want your devices out on the open network. Additionally, you may want to use streaming devices or shared storage in your room, and that may not work with typical public network setups. Last time we looked at some battery-powered routers with charging functions and other network features.

Today on rsts11 we’ll look at some choices for sharing a wired connection as well as a cellular modem. We’ll briefly revisit the HooToo and RAVPower routers from part 1, and then dive into Meraki, Peplink, and Cradlepoint devices for the higher-power user.