Enterprise-Class networking on the cheap for your home lab

This entry is part of my POHO (Psycho Overkill Home Office) series. 

I have a habit of overdoing my networking. My home core switch is a Juniper EX-series (courtesy of a Bay Area Juniper User Group meeting raffle), and for a while I had a 10-Gigabit Ethernet (10GbE) Extreme Networks switch (that cost less than a good laptop) ready to go in. Do I really need it? Probably not. I sold it last year but am now thinking about 10GbE again.

I’m here today to share some of my tips for finding affordable enterprise-class networking for your personal, home, lab, or photo shoot purposes. I also welcome your thoughts and experiences in the comments below. Let me know if you’ve found other ways to improve your dollars-to-metal KPIs at home or in a lab.

Caveat: I do not advocate these methods for anything your company may depend on, or anything production grade in general. However, if you’re closer to hobby network than to Fortune 500 core network, this may help you build beyond your budget.

Foreword: About 10 Gigabit Transceiver Formats

There are three common formats, or “sockets,” for 10GbE transceivers. Each of them can present short-range (SR), long-range (LR), or extended-range (ER/XR/ZR) fiber interfaces, or “captive” cables in the form of CX4, twinax, or RJ45-type copper.

[Image: module formats courtesy of @networkhardware, http://www.networkhardware.com/cisco-optics-cheat-sheet]

XENPAK is the oldest, largest, and (generally) least expensive of the form factors, and can be found integrated into older host interface cards as well as switches like the Extreme Summit 400. XFP is almost as old, smaller and lighter, and probably a bit pricier than XENPAK. The current generation is SFP+, which is the same size as the gigabit SFP (almost the same size as an RJ45 connector) but supports both 1 and 10 gigabit. (The second transceiver pictured above is X2, which I haven’t run into yet.)

Depending on distance between ports and the host adapter you choose, you may find one of these more desirable than the others. Fiber may be more flexible in terms of length and availability, and you can go from SC to LC if your transceivers don’t match, but CX4 or twinax will be more resilient to tension.

Be warned that some networking vendors break from the standards and check for their own brand of transceivers, refusing generic or other-brand devices even when they are physically identical right down to the manufacturer. Vendor forums or a quick Google search will help you track down these issues and plan for, or work around, them.
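As one well-known example of working around that lock-in, some Cisco IOS switches have a hidden, unsupported knob that lets third-party optics link up. Treat this as a sketch to verify for your own hardware: the commands are unofficial, vary by platform and software release, and may affect your support status.

```shell
! Cisco IOS example (hidden/unsupported commands; verify for your platform).
configure terminal
 service unsupported-transceiver
 no errdisable detect cause gbic-invalid
end
write memory
```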

1. Uplink ports are usually just ports. Stacking ports, not so much.

The cheapest and (maybe) easiest way to get 10GbE ports is to buy a 1GbE switch that has a few 10GbE uplink ports. My Extreme was this kind of solution–48 ports of 10/100/1000Base-T[X] and a two-port module on the back for 10GbE via XENPAK modules. There are a lot of switches out there on the used market that offer 2-4 “uplink” ports that can be used to connect a host.

Be warned that in many cases, the uplink modules will cost more than the base switch if they’re not already included. I’ve been watching for the 10GbE uplink modules for a couple of my cheap 48-port switches, and they’re running 4-5x what I paid for the switches themselves. Dig into the datasheets, search for the module part numbers, and see what they’re going to cost you before investing in a base switch. (And see #2 below on this topic too.)

And don’t forget about optics. You can mix SC and LC endpoints with fairly affordable fiber cables, but if your uplink modules don’t have fixed connectors, you will need to get some sort of transceivers (optical, CX4, captive cables, TX) that can connect with your host adapters.

You might be able to use stacking ports as regular network ports, but that will warrant more research (and maybe finding a friend at the network vendor). I wouldn’t count on this option unless you already know otherwise.

2. Look for rebranded (or debranded) gear.

I got my first pair of 10GbE host adapters for about $23 each shipped from an eBay seller. Fully functional, PCI-X (backwards compatible with PCI if you don’t need full 10 gigabit speeds), with optical XENPAK modules built in.

[Image: Xframe II, courtesy of the hp.com information library, http://on.rsts11.com/1buRCPG]

Why so cheap? The seller had posted them with an HP part number, which 99% of the time turned up compatibility only with HP-UX. It turns out they are very compatible with Linux, and while they don’t appear to be supported under VMware, you could put something like that in a homebrew SAN and run it into the core network at 10GbE.

Not everything is available rebranded, but you’ll find some network companies selling their products under IBM or Sun or Dell labels or the like. Switches, firewalls, and expansion modules have been known to show up under multiple vendors’ part numbers, and sometimes one vendor’s part can be half the price of the original manufacturer’s part, for identical metal and silicon.

This goes back quite a ways, at least to when Dell resold the Netgear FS switch line, Cisco resold QNAP storage arrays and HP Proliant servers, and IBM resold Brocade Silkworm SAN switches. Just like searching for typos, searching for alternate part numbers may help you get a good deal.

3. If you have a bit of budget, ask around about new and refurbished gear.

The big PC makers have outlet and financial services stores on their websites that sometimes have off-lease models for sale. There may also be first-market options if you have a bit more of a budget than I do.

[Image: ICX6430 via brocade.com, http://on.rsts11.com/1buS1l3]

Recently on Twitter, Gabe Chapman and I were discussing low-density 10GbE options; Garrett from Brocade chimed in to point us toward the Brocade ICX6450-24 switch as an option. A quick web search showed that for about $3000 street price, you can get a brand new switch with 4 licensed 10GbE ports and a warranty. You could get it for about $2300 with only 2 10GbE ports enabled, and then buy the extra licenses later (or run those ports at gigabit speeds).

If you’re stocking a little closer to the revenue end of the network, or if you can’t quickly replace a failed model, you might be better off choosing new/retail over a $400 switch as-is on eBay with the same port count.

Disclaimer: I have no connection with Brocade; this is just an example from a Twitter conversation that included a helpful Brocade guy and a quick Google Shopping search. Although I’d be willing to review an ICX6450-24 if the option arose. 🙂 

4. Watch out for port/feature licensing!

Some vendors, especially those in the Fibre Channel world, offer port licensing as a way to reduce the initial outlay for a switch.  A lot of smaller FC switches worked this way, and when I had to sneak a SAN into the budget at a company where the finance folks insisted all servers cost $1000 because that’s what a PC costs at Fry’s, it saved my storage plan from extinction.

In the case of the ICX6450-24 above, the base switch has four SFP+ ports, two of which are enabled for 10GbE out of the box, and two of which are limited to regular gigabit speed. To get the other two ports up to 10GbE, you buy a license kit for about $800 (street) and enable the ports. That’s not too bad for a brand new enterprise switch, but if you buy a ten year old switch that has unlicensed ports, you may have trouble getting the license codes (even for a price).

You’ll probably want to talk to someone knowledgeable about the platform you’re considering, to evaluate the risk. Some vendors tie features to a serial number (Juniper for example), so as long as your device is licensed and the serial number is intact, you may be able to reinstall the licenses automatically. Others require a key code, so unless you can get into the switch and retrieve that, a factory reset could wipe out some of your ports. And in either case, if the equipment you’re buying doesn’t have the features you want, it may be expensive or impossible to obtain them.
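As a sketch of what that research can look like: on a Junos device such as my Juniper EX, you can inspect the installed licenses and confirm the chassis serial number they're keyed to before buying used gear. These are example commands; output and availability vary by platform and release.

```shell
# Junos CLI, operational mode (example commands; verify on your release).
show system license          # lists installed licenses and the features they enable
show system license usage    # shows which licensed features are actually in use
show chassis hardware | match Chassis   # the serial number licenses are keyed to
```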

5. Consider port-channel or other aggregation methods.

I’ll admit this is sort of a cop-out, but 8 ports of Gigabit Ethernet will likely be cheaper than a single port of 10GbE. You can get 4-port PCI-E 1GbE cards for $75 or so (as low as $25 if you don’t need ESXi 5/5.5 support), and a 48-port GigE switch that supports LACP or the like for under $200. That works out to under $150 per 8Gbit aggregate link, including cables. Check your OS or virtualization platform HCL to make sure the cheaper cards are compatible, of course, but it’s worth checking out this option if it fits your needs.
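On the host side, a minimal Linux sketch of an 802.3ad (LACP) bond looks something like the following. The interface names and address are assumptions for illustration; the switch ports on the other end need a matching LACP port-channel configured.

```shell
# Build an LACP bond from four gigabit NICs (run as root; iproute2 syntax).
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast
for nic in eth1 eth2 eth3 eth4; do
    ip link set "$nic" down            # slaves must be down before enslaving
    ip link set "$nic" master bond0
done
ip link set bond0 up
ip addr add 192.168.10.5/24 dev bond0  # example address
cat /proc/net/bonding/bond0            # verify LACP negotiated with the switch
```

Note that a single TCP stream will still top out at 1Gbit; aggregation helps when you have many flows (or many VMs) sharing the link.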

So where do we go from here?

Those are my tips so far… I’d welcome your comments below on how (or if) they’ve worked for you, or if you have any tips from your own experience to share with other readers. Maybe you’ve found a vendor whose 10GbE switches are more affordable for the home lab, or just had a good experience with a home-lab-friendly reseller? Please chime in.

New hardware thoughts for home labs (Winter 2013)

It’s been almost two years since I wrote my first home lab post, on the occasion of rolling a Shuttle SH67H3 VMware server. Since then, I’ve rambled on Twitter about a lot of other options, and figured I would bring some of them to your more-easily-searched-for attention.

I will update this post in the near future – most recent update 2013-12-12 – so you can look (probably at the bottom) for new details and references.

Disclosure: I’m not paid or coerced to promote the items in this post. Anything I own below was bought with my own money. Most of it probably will not blend. Any references to vendors or manufacturers are based on my experience and not any consideration from the company.

Many of the links are to Amazon.com, and if you buy through them, I get a small commission credit to spend on more coffee gear or some of the same things. I appreciate your support and suggestions.

My lab cluster today

I recently bought two batches of rackmount servers at absurd prices. We’re talking less-than-the-memory-was-worth prices. For now, I have an NEC Express5800/120Rh-1 (dual E5405/16GB) and an HP DL365 G1 (dual-core Opteron 2214 HE/16GB) running ESXi 5.5. vCenter Server is running on my NUC i3 box out of convenience. When I get some more PC2-5300F RAM, I’ll swap out the Opteron box for another Xeon server to get a bit more consistency.

The downside to this environment is that it’s noisy and a bit power-hungry. At rest, the two servers use about 400W. So until I upgrade the UPS, I’m a bit stuck on that level of server.

But the upside is that the two servers as configured cost less than I spend on coffee in a month at home. And my lab is in a location that isn’t as sensitive to power load or noise as my home office might be.

Using a Dell PowerEdge C6100 for dense rackmount computing

There are a lot of 1U and 2U rackmount servers out there on Craigslist, eBay, Weird Stuff, and similar venues. I’ve picked up various HP boxes for chump change and scrounged for memory, so it is an option. You can probably get dual-socket 8-core servers (DL160, DL360, DL365, DL380, DL385) with some memory and drive trays for under $100 apiece, until you run out of power outlets. If your tolerance for power draw and noise allows, that’s definitely a cost-effective way to go.

[Image: Dell PowerEdge C6100]

There are also a lot of Dell C6100 “blade” servers (pictured above) out there as well. These are 2U enclosures with up to four two-socket nodes. Each node can take 12 DIMMs (up to 192GB), two quad- or hex-core Xeon processors, and either three 3.5″ LFF drives or six 2.5″ SFF drives (SATA, SAS, or SSD). And from what I’ve read, you can run four dual-L5420 nodes at about 300W.

I’m seeing these priced at around $750 for a two-L5520-node config, or a four-L5420-node config, with minimal RAM. You can find a four-L5520-node config for around $1k, or you can add extra nodes later. ServeTheHome has a thread on community update findings, including fan improvements and internal USB.

I don’t know what the noise level is out of the box, but hopefully one of my readers can chime in. Or I may pick one up next month and come back with an update.

Ye Olde HP Proliant Microserver… And Ye Newe Microserver

I have a ProLiant Microserver N40L in my environment. It, and its siblings the N36L and N54L, are classic home lab servers, with secret BIOS tweaks, undocumented memory upgrades, and a $200-300 price tag. They fill much the same niche as the NUC, but with a bit less processing power and a lot more expandability.

[Image: HP Microserver Gen 8]

Well, HP released their Microserver Gen 8 this summer, with two dual-core Pentium processor options: one has a G1610T 2.3GHz processor, and the other a G2020T 2.5GHz processor; there’s even a stackable 8-port switch to match. You still get four non-hot-plug SATA bays; the new model adds a glitzier front door and a laptop-size optical drive bay. You also get dual gigabit Ethernet and a dedicated iLO port.

The price has gone up with the specs; you’re looking at $450-500 for the base 2GB/250GB system, plus your upgrades, so probably $700 with 16GB of RAM.

Be sure not to purchase the Windows Server bundles (unless you’re into that sort of thing). The Microserver Gen 8 shows up in bundles from $700 to $1200 with various Windows licenses included, and if you’re putting your own OS on it afterward, there’s no reason to shell out the extra money.

NUC NUC… not again…

Intel has added new Next Unit of Computing (NUC) models to their line, with 4th generation i3/i5 processors. There’s an i3-4010U model and an i5-4250U model available. Perhaps obviously, they’re no longer fanless or silent, but probably quieter than the options above.

[Image: Wilson Canyon NUC with USB]

You still need to add your own power cable, some laptop memory (8GB or 16GB, depending), an mSATA module if you want internal storage, and a flash drive to boot from. So you’re probably looking at about $600 for a complete system, give or take. But if space is of the essence, and your workloads can live on a dual-core box with 16GB of RAM, this is a great option.

As an aside, Intel has 4th generation NUCs with support for an internal 2.5″ drive. These don’t seem to be as commonly available, but it’s something to watch for if you need more internal storage.

A surprising contender – Dell’s Inspiron 660 desktop

I was having an exchange on Twitter with someone looking for options with Gen 3 PCI Express for virtualization lab use. He ended up getting an Inspiron 660 desktop, which has more convenient expansion options than pretty much everything above.

The i5-3340 model with 8GB of RAM comes in under $600 on Amazon (you can buy it directly from Dell but might get quicker delivery from Amazon). You should be able to load it up with 16GB of RAM, and you can get 4x and 6x SATA (and 4x SAS) drive bay inserts to get dense 2.5″ drive deployments. Probably won’t need that DVD burner on a hypervisor platform, will you?

What else can I read about home lab options?

I’m glad you asked. One thing that pushed me to write this post was Chris Wahl’s update on his home lab. He’s moving to Haswell, and building out a well-optimized lab. He’s an avid advocate of remote management, so definitely take a look at his board selection if you need remote control of your server.

Simon Seagrave at TechHead has a lengthy write-up on the Microserver Gen 8 that’s worth a look if you’re leaning that way.

2013-12-12: Erik Bussink has built a compact lab with the Shuttle XH61V that finds a happy medium between my Shuttle and NUC builds.

2013-12-12: A friend on Facebook reported in with Benjamin Bryan’s blog about installing a Xeon E3 in the HP Microserver Gen 8. This may be the best reason to go with the low-end G1610T model.

2014-01-14: Greg Schulz (@storageio on Twitter) has a new post today on some of his recent discoveries and acquisitions. Check out Dell Inspiron 660 i660, Virtual Server Diamond in the rough? for a surprising choice for virtualization.

If you’ve written a blog post about sub-$1k home lab servers, feel free to let me know and I’ll try to get you added to this list. I’m happy to exchange links and spread the joy of home lab adventures.

Pitfalls of an Adventurous Laptop Purchase

[Image: Pitfall! cover art]

I’ve gone through a lot of laptops in the last 15 years. A LOT. Today I have about 20 usable ones and a few for parts.

There’s a definite benefit to going with one of the big names. One is consistency of chargers. With Dell, for example, you can use the same charger on everything from the newest E6530 all the way back to a D400. ThinkPads tend to have long-term backward compatibility as well.

And I really like removable batteries. The C6xx and C8xx series from Dell supported two batteries each, one in a dedicated battery slot and one in a modular battery/drive bay, and you could swap one out without shutting down. On the larger C8xx series you could even have a fixed-bay optical drive and two batteries. The D830 has one standard battery, but like the C8xx you can pull the modular bay drive and put a battery in its place. I have three of those batteries, and even the bulk battery charger for them.

I’m also a big fan of high-resolution displays, from the 1600×1200 days (C840/I8200 in 2001-2002) to 1920×1200 (D830 in 2008ish?) to today’s 1920×1080. I’ve been willing to take a heavier laptop to get that screen real estate. Four ssh terminals at a time plus a browser or two and two IM clients is not uncommon for me.

But for my last laptop purchase, I decided to risk losing the consistent power supplies, the expandable power, and maybe even some of the display resolution… and consider an Ultrabook. Sure, I miss the quad-core, 8-thread, 16GB system, but I found I wasn’t doing much virtualization on the laptop. And I was starting to get sore shoulders from the laptop bag. So I scaled back.

Introducing the ASUS Zenbook UX32VD

[Image: ASUS Zenbook UX32VD]

I ended up with the ASUS Zenbook UX32VD, which gave me a dual-core i7 Ivy Bridge processor, a 13.3″ 1920×1080 IPS display (which took some getting used to) with discrete NVIDIA graphics alongside the onboard Intel graphics, USB 3.0 and HDMI built in, and pretty good battery life, estimated at 4-5 hours. I went with Windows 7, since there’s no touchscreen on this model and I didn’t want to bother with the “upgrade” just yet.

Most people who see it mistake it for a MacBook Air, until they see the color (more of a champagne than the plain brushed aluminum Apple is fond of) and, of course, the ASUS name on the lid. And if they’d seen the price tag, they’d know it wasn’t an Apple device.

It has two additional features that are much harder to find on an Ultrabook: easy RAM and disk upgrades. The factory configuration was 4GB of RAM (2GB onboard, 2GB SODIMM) and a 500GB 5400rpm hard drive (with a 24GB iSSD as cache), which is probably more than most casual users need… but I want to run more than the bare minimum. In about 15 minutes, I upgraded to 10GB of RAM (via a Corsair Vengeance 1600MHz SODIMM) and a 500GB Samsung 840 SSD.

In terms of performance, I have no complaints. We’re talking instant-on sleep mode that goes into sleep in about 7 seconds, comes back in 2 seconds, and a full reboot for Windows 7 in under 10 seconds most of the time. Apps run fast, USB3 peripherals are snappy too, and I don’t dread Windows Update reboots anymore.

But the downside to this laptop was power expansion. There was no easy story for third party AC adapters or for external batteries. And as I’ve learned with my conference travel this year, you can’t count on easy access to outlets to plug in, or enough time between sessions to charge up.

First, the AC adapters

The stock ASUS AC adapter is pretty cool, a Macbook-adapter-sized wall brick (but in black, and with the Windows COA on the original one), and the tip that goes into the laptop snaps into place quite firmly and has an amber/green charge indicator light on it. But I had trouble finding a third party charger, and the ASUS one was selling for $60+ at the time.

I’ve been a fan of iGo’s universal adapters since the Juice, and have 3-4 of their Green line now (mostly because I keep misplacing the bits). There was no listed iGo tip for the Zenbook–the third machine I’ve found that iGo doesn’t support.

I tried an alternative model of ASUS charger for $15 off eBay, and it worked only as long as I held the connector in place. It has a USB port for phone charging, like the iGo, so it would’ve been great if it had worked. But it didn’t have that snap, or even the right fit.

AC Adapter: Problem Solved

It turns out that iGo’s 712 bit (which works with their green line of universal adapters) works pretty well with the UX32VD. It doesn’t snap in and doesn’t directly indicate charge (although the two higher-level iGo Green chargers will shut off automatically when there’s no more power draw, which is close enough). It does stay in under normal use, so it’s my usual travel option now. (Thanks to “mykie” on Notebook Review for confirming the tip almost a year ago… iGo was kind enough to not even respond to my inquiry about a tip for the laptop).

The stock adapter is now only $40 on Amazon, which is a nice price. I bought a spare to use at my desk at work.

Second, the batteries

I’ve had a Stiger external battery for a while (bought at Central Computers, who no longer carry or support them). This 7Ah battery has interchangeable tips for everything from Dell’s old three-prong connectors (Latitude C-series and the like) to fairly modern Dell, HP, and IBM/Lenovo machines. But there’s no tip for my new ASUS.

And while Stiger seems impossible to find on the Internet, Central has dodged or ignored inquiries for a while now about this device (even though I bought the laptop and two of the batteries from them in the first place).

Battery Problem: Status TBD

[Image: HyperJuice 60Wh battery]

I *think* I’ve found a solution, although it’s going to cost me at least $200 to try it out.

ASUS does have a car/airplane adapter for this laptop, and a company called Hyper has a line of Mac-centric batteries that include a 12V cigarette-lighter adapter. In theory, these should go together, but in practice, the smallest HyperJuice is $170 for 60Wh (16Ah, I believe). So I’m looking at another $60 for the car adapter, $170 for the HyperJuice, and several crossed appendages hoping they work together.

So where do we go from here?

Well, since I started writing this entry, I found that there’s an air/auto input cable for the iGo Green line, which should be under $20. I think this will be a marginally saner way to get DC input for my existing power adapters. I also found a $40 “Laptop Travel Charger” which includes the DC cable and is in stock at the Fry’s three blocks from work. Remember what I said about lost tips?

And since Amazon carries the HyperJuice batteries, I may try one out once I get the DC cable for my power adapter. If it doesn’t work, it goes back. If it does work, we’ll have to see.

Since Hyper is local here in the Bay Area, I’m reaching out to them to see if they can part with an eval battery, or let me stop by their site with my adapter and laptop to try it out in person. If it is a viable option, I expect a few other PC users would consider the extra investment for killer battery life, even without the Apple logo on the back of their screen.

I’ll keep you all posted. Let me know if you have any ideas or success stories in the comments below.

Rough cut: HP Moonshot and CEO Meg Whitman at Nth Symposium 2013

I gotta say the withdrawal symptoms from daily Disneyland visits are getting milder, now that I’m home from a week in Anaheim for HP Storage Tech Day and Nth Generation’s 13th Symposium. If you didn’t see it, my preview was posted last month here on rsts11.

I’ll have some more detailed thoughts, including at least one topic that I hadn’t really expected to provoke so much thought, in the next few days. But I wanted to touch on two of the highlights from the Symposium while they’re fresh in my mind.

Disclaimer: Travel to HP Storage Tech Day/Nth Generation Symposium was paid for by HP; however, no monetary compensation is expected nor received for the content that is written in this blog.

Quick Overview of Nth Symposium

Nth Symposium is an annual partner and customer summit held by Nth Generation, the leading HP channel partner in southern California. They’ve done this thirteen times now, bringing customer technologists and executives together with HP and partner representatives for a very productive event. It’s free to qualified IT professionals, so I’d suggest checking it out next year if you are in the area.

Two of the three Nth Symposium keynotes were by execs I’ve worked for before. I was farther down the org chart from (now HP CEO) Meg Whitman when I was at the shopping.com division of eBay in 2006, but she gave the executive welcome at my new hire orientation. I reported to a VP at 3PAR who reported directly to (now HP Storage VP/GM) David Scott back in 2001. I knew both would be very impressive speakers for a keynote.

HP CEO Meg Whitman

HP CEO Meg Whitman (not channeling Clint Eastwood, don't worry)

In a definite score for Nth Generation, they convinced Meg Whitman, president and CEO of HP, to give the headline keynote at this year’s symposium.

Whitman’s ability to understand in detail, communicate about, and see the path forward for a hugely disparate business (one that probably seems like it’s going that-a-way at full speed in every direction) is impressive.

The high level overview of the company’s direction, and the “New Style of IT,”  was to be expected, but her willingness and ability to field unstaged questions from the audience and respond to them in an honest and aware way was what really impressed me.

“Don’t be shy, remember, I ran for public office.”
–Meg Whitman

The three questions I remember involved cross-border ordering and SKU simplification (so that you can easily order the same model for delivery to multiple countries), support cohesiveness and contactability (and responsibility), and the morass that is hp.com.

Fellow blogger John Obeto was set up for a question when Meg called out Nigeria as one of the countries that would not see SKU simplification this year. But she acknowledged that the complexity was counterproductive, and that the company is already working to solve the problems for multinational customers.

Another attendee mentioned the challenges of finding the right contact for support, especially (as I recall) when multiple product lines are involved, or when your contacts at HP leave the company. Having had my HP account manager leave after my first order a couple of jobs ago, and having had her replacements actively and effectively lose my followup business in the months that followed, I know what a pain this can be.

Meg acknowledged the problem as a significant one, suggested using a partner or VAR as an aggregator for contacts within HP (since VARs would have more access to experts and resources within the HP organization), and concluded by offering her personal email address and committing to help until other paths are finalized.

But back to John, who came up to the microphone to decry the exclusion of Nigeria from the 2013 SKU project, and to mention something that probably everyone who has tried to use the HP site for anything but B2C e-commerce already knows: that hp.com is pretty difficult to navigate. Meg once again acknowledged the problem (see a pattern here?) and said that they are working on the business-to-business (B2B) and business-to-consumer (B2C) sites separately. One is already under a substantial reorganization, and the other will follow as soon as practical.

In general, I got the sense of Meg Whitman as a CEO being not entirely unlike the (parody) President Jimmy Carter’s fireside chat from Saturday Night Live in the late 70s. I wouldn’t ask her about acid experiences, but it seemed if you asked her about something even several levels down in the chain that was affecting customers, she’d know what was going on and be able to respond to it (or be willing to take the question on and find an answer).

The Moonshot heard round the world

[Image: Moonshot on stage at Nth Symposium]

Staying on the topic of hp.com: Paul Santeler went deeper into Moonshot in the talk that followed Meg’s keynote, but as I recall, Meg herself made the point that hp.com now runs on Moonshot rather than a huge farm of servers.

To be specific, they’re using about 720 watts of power to run the whole site. Think about that… as she suggested, you probably use more power on lighting in your home than they do to run a large enterprise web site with support, e-commerce, marketing, and all sorts of other content. (Unless you’ve gone green; I think I’m at a bit under 700W across all the lighting in my home thanks to CFL bulbs, but the steady-state rating for the Moonshot power supplies is 653W, so they win.)

Moonshot is a sub-5U chassis that holds up to 45 server “cartridges,” each running an Intel Atom S1260 at 2GHz. A cartridge is a bit larger than a Kindle Fire and sports an 8GB ECC DIMM, dual gigabit Ethernet (through a central switching module pair), and a single 2.5″ laptop-style drive, which can be 500GB or 1TB of 7200rpm spinning disk or a 200GB MLC SSD.

The 45G switching modules live in the center of the chassis, and the two 6SFP uplink modules each provide six 1GbE/10GbE uplinks via SFP+ connectors. The standard configuration gives you one switch module and one uplink module; the redundancy option is a custom configuration. A 40GbE module is coming soon. The systems are managed via iLO Chassis Management, and multiple chassis can be daisy-chained.

If you’d seen the SeaMicro systems circa 2009-2010, Moonshot will seem like at least an evolutionary development of that concept. The first times I spoke with SeaMicro about their 10U chassis, I asked about a smaller system, around 4U, with fewer than the stock 64 systems. Moonshot gives nearly the capacity of that 10U system, with 40% more system RAM, dedicated per-system storage, a third the footprint, and a lower power draw.

There are other cartridges coming, including an 8-core 32GB cartridge (good for thin virtualization) and a DSP-targeted cartridge (for voice processing and the like, running on ARM), so it shouldn’t be a one-trick-pony platform. It won’t replace all rackmount and conventional blade servers, but hyperscale is likely to fill a few niches and simplify management and scalability.

So where do we go from here?

I’ve been a fan of 3PAR’s “Utility Storage” platform since I joined the company in 2001. (They’re now buzzwording around Polymorphic Storage which is also cool.)

One thing I asked about often during my time on Technology Drive in 2001-2002 was a smaller starting point for the InServ platform. With the E and F series, they made some steps in that direction, and I bought an E200 for high performance storage at Trulia a few years ago. But with their new 7200 model, they go even farther into the realm of possibility with a starting list price around $25k.

I’ll be bringing you some details on their platform and enhancements in the next week. I’ll also be looking at the comparison between utility computing platforms from HP and Cisco, a topic that was featured in one of the second tier keynotes.

Stay tuned, and wish me luck on the recovery from convention plague if you don’t mind.

A quick word on VAAI and FreeNAS/TrueNAS

[I have a lot of stuff in my head to tell you all about, but I also have a thousand square feet of inventory and storage to move an average of two miles this weekend… so keep an eye out for more lengthy posts coming later in the month. ]

I’m helping a friend’s startup get some infrastructure built, and one of the things I’m looking at is shoring up their VMware environment. They’re not ready for any of the common sub-six-letter names that usually come up for a vSphere storage platform yet… even a Celerra is overkill for five developers, I’d have to say.

So I was looking into VAAI support on the TrueNAS appliances from iXsystems (and, of course, FreeNAS itself). The first few search results I found were actually this blog and some cached Twitter comments where I said I didn’t know whether TrueNAS/FreeNAS supported VAAI.

[Image: search results for TrueNAS VAAI support]

Well, I got it on good authority this afternoon that VAAI support is in the plans for FreeNAS over at iXsystems. There’s no current date for when it will be released, but they’ve jumped through a number of flaming hoops already to get ready, and will be keeping me (and you all by extension) up to date on progress.

For those of you using FreeNAS in your home lab, this probably won’t stop you from using it as shared storage for your VMware lab environment, or anything else for that matter. But if you’re considering TrueNAS for VMware storage, or need the full-on VAAI feature set, this will make things smoother in the foreseeable future.
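When VAAI support does arrive, verifying that ESXi is actually using hardware acceleration against a given device is quick from the ESXi shell. A sketch for ESXi 5.x (the naa identifier below is a placeholder, not a real device):

```shell
# ESXi shell: show VAAI (hardware acceleration) status per storage device.
esxcli storage core device vaai status get
# Or narrow it to one device (placeholder ID; substitute your own):
esxcli storage core device vaai status get -d naa.6001405xxxxxxxxxxxxxxxx
```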

And an unsolicited and uncompensated plug here (although if they want help testing a FreeNAS Mini Plus, they know where to find me)…

iXsystems is a hardware vendor and a good friend to open source. They’re probably best known for their support of FreeBSD and FreeNAS (Jordan Hubbard is joining them as their new CTO this month), but they also sponsor Slackware, and they make some cool storage appliances as well as a line of servers backed by open-software history and support. They’ve long been a friend of BayLISA, the Silicon Valley sysadmin group I’m involved with, as well as the Bay Area FreeBSD User Group and other organizations. Check them out if you’re looking for servers, workstations, or software.

Now back to moving… why did I need a Centillion 100 again? Anybody?

[PS: Welcome to all of you who followed iXsystems over here to my blog. For full disclosure, I am currently president of BayLISA, the Silicon Valley system administration user group, but this stuff is mostly written as Robert Novak, blogger, rather than Robert Novak, BayLISA cheerleader-in-chief.]