NUC NUC (Who’s there?) VMware lab…


VMware lab who? VMware lab in pocket-size format!

So in our last installment, I found out that I could upgrade my Shuttle SH67H ESXi servers to support Ivy Bridge processors. If you want to read more about that, feel free to visit my Compact VMware Server At Home post from Winter 2012, and my Upgrading my home VMware lab with Ivy Bridge post from Spring 2013.

The replacement boards came in from Shuttle, and they’ll be going back into the chassis. But as you may have seen at the end of the last post, I discovered the Intel Next Unit of Computing server line. The NUC line currently includes three models.

  • DC3217IYE – i3-3217U processor (1.8 GHz dual core with 3MB cache), dual HDMI, Gigabit Ethernet, at $293 (pictured)
  • DC3217BY – i3-3217U processor, single HDMI, single Thunderbolt – no native Ethernet – at $323
  • DCCP847DYE – Celeron 847 (1.1 GHz dual core with 2MB L3 cache), dual HDMI, Gigabit Ethernet, at $172
    (Prices are estimated list from Intel’s site–probably cheaper by a buck or ten at Amazon, Fry’s, Central Computers, or your favorite retailer. Feel free to use my links and help me buy the next one. 🙂 )


All three have three USB 2.0 ports outside (one front, two rear), as well as two USB headers inside, conceivably useful for a USB flash drive or flash reader. They also have an mSATA-capable mini-PCIe slot, plus a short mini-PCIe slot suitable for a WiFi/Bluetooth card. And there are two DDR3 SODIMM slots, supporting a reported 16GB of RAM (the processor supports 32GB, but the board and kit specs don’t mention this). They all include VT-x with EPT.

I don’t see the Celeron being that useful for virtualization labs, but these are rather multifunctional for a little 4″ square computer. Imagine putting a broadband modem (3G/4G/WiMAX) inside for a reasonably potent portable kiosk (VESA mount kit included). Add a card reader and a DVD burner for saving and sharing (and even editing) photos. Intel’s WiDi wireless display technology is supported as well, if you have a suitable receiver. Or use it with a portable projector for presentations on the go (no more fiddling with display adapters at your meetings!).

But we’re talking about a VMware lab here.

And let me get this out of the way… this was one of the coolest features of the NUC.


That’s correct, the box has its own sound effects.

Let’s get this party started…

Those of you of my era and inclinations may remember when KITT’s brain was removed and placed in a non-vehicle form factor on the original Knight Rider TV series. When I got ready to RMA my Shuttle motherboards, I was imagining that sort of effect for a couple of VMs on the in-service ESXi server whose board was about to be sent to Southern California. And that’s essentially what I had to do. I couldn’t quite miniaturize the server Orac-style, but that thought had crossed my mind as well.

So I picked up the DC3217IYE unit at Fry’s, got an mSATA 64GB card (Crucial m4 64GB, CT064M4SSD3) and a spare low-profile USB flash drive (Patriot Autobahn 8GB, PSF8GLSABUSB) at Central Computers, and took a Corsair 16GB DDR3 kit (CMSO16GX3M2A1333C9) from my stock. Assembling it took only a few minutes and a jeweler’s screwdriver, and then I was ready to install ESXi.

I backed up the VMs from the original system using vSphere Client, so that I could re-deploy them later to the NUC. Someday I’ll get Veeam or something better going to actively back up and replicate my VMs, but for the limited persistent use of my cluster (Cacti and MediaWiki VMs) this was sufficient.
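If you’d rather script those exports than click through vSphere Client, VMware’s ovftool can pull a VM straight off a host. Here’s a minimal sketch, assuming ovftool is installed and on your PATH; the host address and backup path are placeholders, not my actual setup:

```python
# Minimal sketch: export VMs to OVF with VMware's ovftool.
# The host address and VM names are placeholders; ovftool prompts
# for the root password on each connection.
import subprocess

ESXI_HOST = "192.168.1.50"      # hypothetical address of the outgoing server
VMS = ["cacti", "mediawiki"]    # the persistent VMs mentioned above

for vm in VMS:
    source = f"vi://root@{ESXI_HOST}/{vm}"   # VM inventory path on the host
    target = f"./backups/{vm}.ovf"
    subprocess.run(["ovftool", source, target], check=True)
```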

One gotcha: Fixing the NUC network…

I originally tried reusing the 4GB USB drive my existing server was booting from, but ESXi didn’t recognize the NUC’s Ethernet interface. I installed a fresh ESXi 5.0u2 on a new flash drive, and still no luck. I found a post at tekhead.org that detailed slipstreaming the new network driver into ESXi’s install ISO. I did so, installed again, and was up and running.
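For the curious, the general recipe is the one VMware’s Image Builder automates: clone the stock 5.0u2 image profile, add the NIC driver package to the clone, and export the result as a bootable install ISO. The tekhead.org post walks through the exact steps.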

I did have to create a new datastore on the mSATA card — my original server had used a small Vertex 2 SSD from OCZ, which obviously wouldn’t work here. But I was able to upload my backed-up OVF files and bring up the VMs very quickly.
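The redeployment can be scripted the same way. ovftool’s --datastore flag targets a named datastore, and --powerOn boots the VM once it lands; again, the host and datastore names below are placeholders:

```python
# Minimal sketch: redeploy the exported OVFs onto the NUC's new mSATA datastore.
# Host, datastore, and VM names are placeholders.
import subprocess

NUC_HOST = "192.168.1.51"
DATASTORE = "msata-ssd"          # the datastore created on the Crucial m4

for vm in ["cacti", "mediawiki"]:
    subprocess.run([
        "ovftool",
        f"--datastore={DATASTORE}",
        "--powerOn",             # boot each VM as soon as it's deployed
        f"./backups/{vm}.ovf",
        f"vi://root@{NUC_HOST}/",
    ], check=True)
```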

One warning I’ll bring up: the unit does get a bit warm, and if you use a metal USB flash drive, it will get hot to the touch. My original ESXi lab box used a plastic-shelled USB drive, and I’m inclined to go back to that.

What’s next, Robert?

My next step is going to be bringing central storage back. There is a new HP MicroServer N54L on the market, but my N40L should be sufficient for now–put the 16GB upgrade in and load it up with drives. As those of you who saw my lab post last year know, it was running FreeNAS 8, but I’m thinking about cutting over to NexentaStor Community Edition.

I’ve taken the original Shuttle box and replaced a mid-tower PC with it for my primary desktop. I will probably set the other one up with a Linux of some sort.

And in a week or so I’ll grab a second NUC and build it out as a second cluster machine for the ESXi lab. All five of them are slated to go into my new EXPEDIT shelving thingie in the home office, and I’ll bring you the latest on these adventures as soon as they happen.

Upgrading my home VMware lab (part 1: Ivy Bridge) #rsts11

My most popular post on rsts11 has been my compact VMware server at home post. Thanks to Chris Wahl mentioning me on the VMware forums, and linking from his lab post, I see a dozen visits or more a day to that page.

Imitation is the sincerest form of laziness^wflattery

I have to admit that I’ve been a follower in my use of intriguing lab environments. I got the vTARDIS idea from Simon Gallagher, and built a version of it at my last job on a Dell Core 2 Quad workstation under my desk. Then I saw Kendrick Coleman tweet about this new SH67H3 from Shuttle that supported 32GB of non-registered RAM… so I bought one and put 32GB and an i7-2600S processor into it, as mentioned in the “server at home” post linked above.

As you may know, the i7-2600 series processors are now a generation behind. Sandy Bridge gave way to Ivy Bridge (the i7-3x00 processors), which is currently easy to find at retail. But… SH67H3 v1.0 motherboards don’t support Ivy Bridge. And that’s what was shipping when I bought mine in early 2012.

I found an unbelievable deal on a second SH67H3 open (missing) box at Fry’s in February 2013… let’s just say I spent more on a basic Pentium chip to test it with than I did on the chassis itself. But alas, the second one also had a v1.0 motherboard.

Let’s make the ivy (bridge) grow!

I found sources on the Internets that said a v2.0 board supporting Ivy Bridge was out. I further discovered that Shuttle would trade in your v1.0 board for a v2.0 board for $40. Instructions here at Cinlor Tech’s blog if you’re interested in doing this yourself. Note that you can request the case number through Shuttle’s web-email portal if you prefer this to calling. That’s what I did.

I shipped off my two boards in a medium Priority Mail box to Shuttle on the 26th. On the 29th I got confirmation of the return shipment. They should be at my door on April 2nd. I’ll be reinstalling them, and at some point upgrading both to i7-3770S processors.

Waitasec, 2600 “S”? 3770 “S”? What’s this all about, then?

Yes, that’s correct. I chose to go with a low-power version of the i7-2600 processor a year and change ago. The i7-2600S has a lower base speed than the 2600 or 2600K (the unlocked version): 2.8GHz vs 3.4GHz. All three support Turbo Boost to 3.8GHz, though. And the i7-2600S is 65W where the others are 95W.

(Here’s a comparison chart of the three i7-2600 and three i7-3770 processor options via Intel, if you’re curious.)

Other noteworthy differences are on the 2600K, which costs $20 more, but does not support VT-d (directed I/O), vPro management features, or Trusted Execution. VT-d is the only feature of particular concern when you’re building your virtualization lab, though. (I’ll admit the VT-d was an accidental discovery–I chose the 2600S more for power savings than anything else.) If you’re building a desktop, the “K” model has HD3000 graphics vs HD2000 for the other two chips, by the way.

Now that I’m building a second box, I find that my usual local retail sources don’t have the i7-2600S in stock anymore. I could order one on eBay or maybe find it at Fry’s, but for about the same price I could get the Ivy Bridge version and be slightly future-proofed. Once again, the “S” is the way to go.

The 3770 series runs at 3.1GHz (“S”), 3.4GHz (3770), and 3.5GHz (“K”) base speeds, all Turbo-capable to 3.9GHz. The “S” processor is 65W again, vs only 77W for the other two chips. They all have Intel’s HD4000 integrated graphics and the newer PCIe 3.0 support. They support 1600MHz RAM speeds, vs a 1333MHz ceiling for the previous generation. The “K” processor lacks VT-d, vPro, and Trusted Execution, but carries a nearly $40 premium over the other two chips.

All six of these chips have VT-x including extended page tables (EPT/SLAT), hyperthreading, and enhanced SpeedStep. And they’re all 4-core/8-thread/32GB-RAM-capable processors that make a great basis for a virtualization environment.
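As an aside, if you want to sanity-check a chip before building a lab on it, booting a Linux live USB and reading the kernel’s CPU flags will confirm most of this. A minimal sketch — Linux reports VT-x as vmx and EPT as ept:

```python
# Quick sanity check from a Linux live environment: look for VT-x ("vmx")
# and EPT ("ept") in the kernel's CPU flag list.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break

for feature, name in [("vmx", "VT-x"), ("ept", "EPT/SLAT"), ("ht", "Hyper-Threading")]:
    print(f"{name}: {'yes' if feature in flags else 'no'}")
```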

So what’s next, Robert?

Well, with two matching machines, I’ll be basically starting from scratch. Time to upgrade the N40L MicroServer NAS box to 16GB (thanks, Chris, for finding this too!) and probably split off a distinct physical storage network for that purpose.

But now, thanks to Marco Broeken’s recent lab rebuild, I’ve been introduced to Intel’s Next Unit of Computing (NUC), so tune in soon for my experience with my first NUC system. Sneak peek of the ESXi splash screen and the actual unit here… stay tuned!

rsts11: Xangati’s latest VI/VDI dashboard, now featuring capacity planning!

Xangati has announced a new release of their performance management platform for VMware’s Virtual Infrastructure (VI) and Virtual Desktop Infrastructure (VDI), as well as Citrix’s VDI. They’ve improved their dashboards, added more performance monitoring around bursts and storms, and added capacity planning features to help VI/VDI admins expand their virtualization environments and keep some of their hair in the process.

Earlier this month, I sat down over a Webex with David Messina [1], Xangati’s VP of Product Management, to look at the new release of Xangati‘s VDI Dashboard in advance of today’s announcement. There’s some cool stuff coming out today, and even more coming in the second half of the year.

I’m going to give a quick overview, and then focus here on one really cool thing that jumped out at me from the presentation and demo. Some of my fellow Tech Field Day alums will most likely be covering other details, and I’m especially looking forward to Chris Wahl’s review, as I know he’s been using Xangati Management Dashboard (XMD) in his lab for a while now.

While you’re reading this, go on over to www.xangati.com in another tab and download your own eval (or free single-server edition) and check it out in your own environment. You can also see their press release and a fun blog post that came out this morning. And as a special bonus, keep reading for a chance to meet Xangati and see their latest product live and in person later this week (if you’re in the San Francisco Bay Area).

Why do I need XMD? I have vCenter!

If you have a single vSphere server with local disk, and everyone uses VNC to get into their VMs, then the answer is “you might not need it.” But if you have external dependencies like networks, shared storage, variable load patterns, multiple VI admins creating and provisioning and resizing VMs without your blessing, or (gasp) VDI, you owe it to yourself to give XMD a look. You may make up the costs of deployment during your trial.

One thing vCenter doesn’t necessarily show you, for example, is bursts or storms of resource demand. Anyone who’s ever set up Cacti or MRTG or the like for metrics has found that bursty traffic definitely shows up in the user experience, but most practical metrics tools even out the peaks, and you find yourself telling your end user “we’re only seeing 50 IOPS” when that twenty seconds of 500 IOPS may have been causing a serious impact. Or worse, you find yourself passing that trouble ticket off to the storage guys, who see the same thing and tell you and your user two days later. Meanwhile, the problem still happens, and you’ve got a grumpier user.
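To put numbers on that: assume MRTG’s default five-minute polling window and the figures above, and the burst all but vanishes in the average:

```python
# A 20-second burst of 500 IOPS inside a 5-minute polling window,
# with a 50 IOPS baseline the rest of the time. (Illustrative numbers
# taken from the scenario above; 5 minutes is MRTG's default interval.)
burst_s, burst_iops = 20, 500
base_iops, window_s = 50, 300

avg = (burst_s * burst_iops + (window_s - burst_s) * base_iops) / window_s
print(avg)  # 80.0 -- the 500 IOPS spike barely registers in the average
```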

Xangati focuses on fine-grained and broad measurements across your virtualization platform. And even better, they can tie together those bursts to help you find what’s cause and effect, and what’s just the result of troublesome trends in your environment. You can start from the Dashboard and find anything that their software has detected as out of the ordinary, and see the general pulse of your monitored environment as well.

You’ll also be able to dig into linked issues, linked metrics, and look at the measurements immediately surrounding an alert or issue. Maybe the first thing you saw wasn’t the cause, but just a slow decaying effect. XMD will help you track that down even if the issue (or the machine) is gone. Sure, it doesn’t have the 90s retro feeling of just having zoom in/zoom out on a graph, but you’ll get over the loss of greybeard cred when your NOC doesn’t call you as often.

Do These Pants Make My VM Look Too Small?

If you’re the only VI admin in your environment, and you provision all the VMs and storage, you’re probably tired by now. But you probably have a firm grasp on what your environment looks like and where it’s growing and going.

More likely though, you have a few people adding, reconfiguring, and removing VMs, storage, and maybe even network links or cluster nodes. And it’s not inconceivable that you have a spreadsheet somewhere tracking what’s where. Hopefully you’re keeping enough detail to know how fast your datastores are growing, how much of your network links are utilized, and how far in advance you need to boost your environment to avoid affecting users and products. No? Didn’t think so.

The thing that really jumped out at me from the latest Xangati demo was the new capacity planning feature. It’s already very useful for a first iteration, and you don’t pay anything extra for it once you’ve got XMD in place and licensed. You set your thresholds of concern (maybe based on hardware acquisition turnaround time, comfort level, or how fast you expect DoubleSpace or Stacker to kick in on your storage server), and XMD watches capacity, utilization, and basic trending for those resources.

For now, as you see in the screen shot above, the focus is on objects. You can see a particular resource and metric, and see the status of that pairing on a per-day basis. This won’t save you from the 2:30pm “let’s fill up all the VM disks” party that your favorite developer with root access decides to throw, but as code creep, log bloat, and memory leaks work their way into your nightmares, XMD will warn you and give you some time to address the issue. They’re already planning to do capacity monitoring on a cluster or resource pool basis, as I recall, so it will become an even stronger tool in your arsenal in the future.

One item I discussed with Xangati as future improvements on this feature is what I’ll call trend trending. Look at line 15 in the chart above. Now back at me. Now back at line 15. I’m not on a horse. But line 15, a virtual desktop, sees increasing CPU usage and XMD tells us we have 15 days at current trend before we cross our 80% threshold (set in the dialog box at left). At the edge of the screen we see that we’ll probably reach 100% capacity on June 7th, about a week later.

Let’s pretend that’s storage utilization instead, just for the sake of argument. What happens if something changes drastically, say, Thursday night? Instead of that gradual growth of your log files and core dumps aiming for 15 days from now, the developers add a new feature that dumps core every 15 minutes, and we find ourselves looking at the threshold in 3 days, or worse, 100% capacity in 7 days. I’d like to see some advanced (and probably optional/granular) trend monitoring so that I’d get a special notification saying “not only are you going down, son, you’re going down faster than you were yesterday.”
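The math behind that kind of alert doesn’t have to be fancy. Here’s an illustrative sketch of what I mean by trend trending: fit a linear trend to recent daily samples, project days-to-threshold, and raise a separate alarm when the recent slope outruns the longer-term one. The sample data and the 1.5x acceleration factor are invented for illustration:

```python
# Illustrative sketch of "trend trending": fit a simple linear trend to
# utilization samples, estimate days until a threshold, and flag acceleration.
# Sample data and thresholds below are made up for illustration.

def slope(samples):
    """Least-squares slope (units per day) over equally spaced daily samples."""
    n = len(samples)
    mean_x, mean_y = (n - 1) / 2, sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def days_to(threshold, samples):
    """Days until the fitted trend crosses the threshold (inf if not rising)."""
    s = slope(samples)
    return float("inf") if s <= 0 else (threshold - samples[-1]) / s

usage = [52, 53, 55, 56, 58, 61, 67, 74]   # % utilization, one sample per day
print(f"~{days_to(80, usage):.1f} days to 80%, ~{days_to(100, usage):.1f} to 100%")

# "Trend trending": is the trend itself getting worse than it was yesterday?
if slope(usage[-4:]) > 1.5 * slope(usage[:-1]):
    print("Warning: you're going down faster than you were yesterday.")
```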

I’d be more interested in storage/memory/network utilization on guests, and CPU/memory on hosts. If you’re overprovisioning your VDI resources, you may want more frequent info, since one user discovering BitTorrent or Bitcoin can cause some pain (if you don’t have them blocked).

So where do we go from here?

I’ve only really touched on one feature of XMD, but it’s the one that means the most to me at the moment. If you’re in a VDI environment, you owe it to yourself to talk to Xangati or just get a demo set up with them. I’m not in a VDI environment, so I can’t speak to it very well, but from what I’ve seen you get at least as much benefit as you do in the VI environment, with more potential for direct real-time impact on your users.

As I get my instance of the VI Dashboard going, I’ll probably revisit the capacity planning as well as the other features, and I may get around to some of my other notes as well before the next release. But what’s coming next from Xangati, and what am I hoping for?

First of all, they’re already working on doubling their scaling to support more vCenters. More frequent trend updates for the capacity management piece, live analysis connecting to capacity management, and customizable dashboards across the product are all expected in upcoming releases, as well as more integration and interaction at the hardware level.

And, as much as I’d been hoping for XenServer integration, I am happy to see that Xangati is branching out to a second hypervisor, although it is in the form of Microsoft’s Hyper-V. You’ll see some special value from XMD in the Hyper-V world, especially around storage visibility, and those of you with TechNet or MAPS access will be able to test this out under your existing license without the wonderful 60-day lab reinstall that VMware blesses us with.

Beyond the above, I’m hoping to see improvements in alert sensitivity and trend tracking/alerting on the capacity piece as well, although they’re not firmly carved into the roadmap yet. And I’d really love to see XenServer integration–Xangati has a good relationship with Citrix on the VDI side, so hopefully this will lead to hypervisor synergy as well.

Bonus: Come meet Xangati and Tegile and Hotlink (Oh My!)

By the way, if you’re in the San Francisco Bay Area, you can come meet Xangati (and Tegile and Hotlink) at this week’s BayLISA meeting in Mountain View. It’s free to attend (although we do ask you to RSVP so we can plan pizza and seating) and you’ll get to see a live demo of the freshest XMD around. See details at the previous link, and come join us Thursday night. As another disclaimer, I’m somewhat in charge of BayLISA these days, so it makes me feel good to see a full room, but I don’t get anything tangible if more people show up–more likely I get *less* pizza–but it’s worth it.

Credits and Disclaimer

I would like to thank Xangati for providing the screen shots in this entry… my lab isn’t quite up to providing useful trends yet, although it will be soon.

They have also provided me with a NFR/lab license for XMD which I appreciate, and am looking forward to warming up soon. However, you can test out everything I’ll be working on with the free trial (for vCenter) and the free single-server vSphere edition.

My thoughts in this piece (and in general) are not based on a free license, they’re based on what I find interesting and useful. If I weren’t excited about this technology, I would have left it to others to shout about.

[1] Most likely unrelated to the artist behind the True Blood comics and other IDW classics. And he probably gets enough Poco and “Your Mama Don’t Dance” jokes already, so I’ll spare him those for now.


Let me know what you think in the comments below!