A (Dell) Precision replacement for our Intel NUC desktop

I didn’t really expect to be writing another build report so soon for my primary desktop. But in October of this year, it seemed to be time for another round of hardware upgrades.

About five months ago, I built a 7th generation Core i7 Intel NUC with Optane technology to replace an older 3rd generation desktop. That system ran a dual-core, quad-thread i7 processor, 32GB of DDR4 laptop memory, a 32GB Optane drive, and a 2TB solid state hybrid drive (SSHD).

Well, after three months I still felt the pain of a dual-core system more than I’d expected. And in the meantime, my brother sent me a barebones Dell Precision Tower 7910 as an early birthday present. I was a bit hesitant about it at first, since it uses Xeon v4 processors and DDR4 ECC registered memory, neither of which is inexpensive. The 1300 watt power supply gave me pause as well.

I decided it would be worth rebuilding the system anyway: I could easily sell it if I chose not to use it, and it’d be fun to run a more modern workstation for a while even if I did end up selling it. Spoiler: I am not planning to sell, but I’ll share the build report here so you can consider the options in case this meets your needs.

Dell Precision 7910, pictured beneath the Intel NUC desktop we built out this summer. 1U power distribution unit and 1U security appliance below for scale. Sorry, no banana.

Curious Caveat

I had written most of this post, but when I went to confirm pricing, I realized that I’m running non-registered, non-ECC RAM in this system. Despite the documentation saying UDIMMs are not supported, and Crucial’s compatibility list showing only ECC Registered RAM, the parts I’m using are unbuffered, non-ECC DDR4 UDIMMs.

This may not be an optimal configuration, but if the cost and availability work better for you, it may be worth a try. Note that you will almost certainly be unable to mix registered and unbuffered DIMMs, and you won’t be able to mix LRDIMMs and regular RDIMMs.

Update [2019-07-09]: I’ve noticed that the system has been sluggish, acting like it was swapping out to disk even though I was only using about half of the total 32GB of RAM. Flashbacks to bad Solaris configurations in 2004. I replaced the unsupported memory described above with a set of four PC4-17000R Registered ECC DDR4 RDIMMs, and so far, with a little bit of use, it’s back to what I’d expect from a 28-core system with plenty of memory. It’s still using nearly half the RAM, with Chrome alone taking 6GB, but it’s snappy and not painful to use.

That being said, let’s look at the bill of materials, and then the build decision process itself.

Sample Build Bill-Of-Materials

Links and prices are representative of eBay offers (unless otherwise noted) as of December 4, 2018 (updated from the original October 16, 2018 draft), but may not still be valid by the time you read this post. Prices changed by a few percent on many of the items between October and December 2018. As usual, purchasing through our links may result in us getting a small commission at no additional cost to you.

The total price today for the system build would be just under $1800. You could shave a bit off the cost by eliminating the NVMe storage adapter and drives; the system has four 2.5″/3.5″ drive bays that will take any SATA or SAS drives you may want to use.

Building the Precision 7910 Tower system

The easy way to get this system is to find one on eBay that’s preconfigured. The 7910 comes in the tower style (roughly 4U but designed with workstation/deskside system trim) as well as a 2U rackmount server model.

Two things to note about the system you acquire:

Service tag: Dell’s systems have a service tag (and corresponding express service code) which will let you access the original shipping configuration, ship date, and warranty (if any) remaining on the system. This will give you an idea of what’s “missing” as well as what is supported on the system in question.

In my situation, the system originally had dual E5-2697v4 processors, 8x 8GB DDR4 2400MHz DIMMs, and a pair of 256GB Samsung PM871 2.5″ SSDs.
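
If you’d rather not hunt for the sticker, you can also pull the service tag from the BIOS and paste it into Dell’s support site. Here’s a minimal Python sketch, assuming Windows and the older wmic utility (deprecated, but still shipping with Windows 10):

```python
# Read the BIOS serial number, which on Dell systems is the service tag.
# Sketch only: assumes Windows with the legacy wmic tool available on the PATH.
import subprocess

def get_service_tag() -> str:
    out = subprocess.run(
        ["wmic", "bios", "get", "serialnumber"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Output looks like "SerialNumber\nABC1234\n"; keep the last non-empty line.
    lines = [line.strip() for line in out.splitlines() if line.strip()]
    return lines[-1] if len(lines) > 1 else ""

if __name__ == "__main__":
    print("Service tag:", get_service_tag())
```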

Operating system: If your chassis shipped with a Windows 10 license, you can download the OS media from Dell or Microsoft directly and install without entering a license key. My system shipped with Windows 10 Pro 64-bit, so I used install media for that OS and simply activated from within the OS once it was installed and booted.

Core system

As I write this post, I see a rackmount model listed for $500 or best offer with free shipping. There’s a tower model for $600 or best offer, also with free shipping. These both include power supplies, heatsink/fan assemblies, and the barebones system itself.

Configured models with one processor and a bootable amount of RAM start at $750. If you want to get the system up and running now and upgrade it later, this is a reasonable way to do it.

Processor module options

The system supports Xeon E5-2600 v4 series processors, up to at least the 145 watt E5-2697v4 18-core processor. That processor will set you back $579 for an engineering sample from overseas, or $2000 or more for a retail box CPU from the US. For an economy/POHO build, that was a bit high in power draw and dollar draw, so I looked at the range of v4 processors in Intel’s ARK database.

The bare minimum I was willing to consider was the E5-2620v4, an 8-core 2.1-3.0GHz processor. It’s an 85W part, so I’d be budgeting 170W (or about 4x the total power draw of the NUC system) to run a pair of them. I also looked at the E5-2650v4 (12 cores, 2.2-2.9GHz, 105W) and the E5-2650Lv4 (14 cores, 1.7-2.5GHz, 65W). Since core count was more important to me than base frequency, and the price for engineering sample parts was more reasonable (under $200 shipped for the 2650L, $200-250 for the 2650, as engineering samples from China), I found the E5-2650Lv4 to be a good match. (See the ARK compare page for these four processors here.)
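
For what it’s worth, here’s the back-of-the-envelope comparison that pushed me toward the 2650L, as a small Python sketch. The core counts and TDPs are the figures quoted above; the 2.3GHz base clock for the E5-2697v4 comes from ARK.

```python
# Dual-socket core count, TDP, and cores-per-watt for the candidates discussed above.
candidates = {
    # name: (cores, base_ghz, tdp_watts) per CPU
    "E5-2620 v4":  (8,  2.1,  85),
    "E5-2650 v4":  (12, 2.2, 105),
    "E5-2650L v4": (14, 1.7,  65),
    "E5-2697 v4":  (18, 2.3, 145),
}

print(f"{'CPU':<12} {'2P cores':>8} {'2P TDP':>7} {'cores/W':>8}")
for name, (cores, base_ghz, tdp) in candidates.items():
    total_cores, total_tdp = cores * 2, tdp * 2
    print(f"{name:<12} {total_cores:>8} {total_tdp:>6}W {total_cores / total_tdp:>8.2f}")
```

The 2650Lv4 pair comes out ahead on cores per watt, at 28 cores in a 130W envelope.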

One interesting note with regard to engineering sample CPUs is that they may result in certain Dell software not recognizing the system as a Dell computer. The “Dell Precision Optimizer” fails to install, for example, because the configuration isn’t recognized. Some benchmark software (more on that later) may also get confused.

Memory module options

The 7910 has 8-12 DIMM slots per processor (8 for the tower model, 12 for the rackmount), and you need at least one DIMM per CPU to get going. If you want to use all of the PCIe slots, you’ll need a second processor as well. Populating more DIMMs across more banks will give you better memory performance, but going with 4GB DIMMs (which are about $40 each at this time) will limit a dual-CPU tower to 64GB.
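
To make that ceiling concrete, here’s the quick math for the tower with both sockets populated. The larger DIMM sizes are shown only for illustration; check Dell’s memory rules for what’s actually supported before buying.

```python
# Capacity ceiling by DIMM size for a dual-CPU T7910 tower (8 slots per CPU = 16 slots).
SLOTS = 8 * 2  # tower model, two processors installed
for dimm_gb in (4, 8, 16, 32):
    print(f"{dimm_gb:>2}GB DIMMs x {SLOTS} slots = {dimm_gb * SLOTS}GB max")
```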

I accidentally ordered a pair of 4GB Samsung M378A5143EB1-CPB “desktop” DDR4 DIMMs for $80 shipped, and then another pair for about $75 shipped. The DIMMs worked fine, and I ordered another four DIMMs to get up to 32GB, which I’m running with today.

I tried a pair of HP-labeled Micron registered DIMMs but the system refused to boot with them installed (yellow power LED blinking with the memory failure pattern). The seller handled the return and I spent the money on more of the originally-purchased part.

As noted above, this system specifies ECC Registered DDR4 RAM (or optionally Load-Reduced memory, labeled LRDIMM). You may be able to use unbuffered memory, as I did, but it will likely not be officially supported, and even then it may depend on the board revision or BIOS version.

Storage options

The system I got has four 3.5″ hot-pluggable drive bays on the onboard SAS controller (LSI/Avago 3008), along with a SATA DVD drive and a laptop-size drive bay that connect to onboard SATA ports. At the moment I have a pair of 3TB scratch drives in the bays for storing my Dropbox and other cloud backup folders.

I had originally installed Windows 10 onto a midgrade Lite-On 512GB SSD in one of those bays, but I found a better option that let me take advantage of some spare NVMe drives I had around the house.

Dell offered an Ultra Speed Storage Adapter Card (model 80G5N) that includes four NVMe slots and a fan with shroud for cooling. It sits in an x16 PCIe slot and takes advantage of PCIe bifurcation, which means the system board can route lanes separately to different devices on a single connector. In this case, the x16 slot feeds four PCIe lanes to each of the four NVMe M.2 2280 (or smaller) drives on the card, and they are bootable from the BIOS, so you can run without any hard drives in the drive bays at all.

The Ultra Speed Storage Adapter can be had for under $200 on eBay, and requires no configuration beyond installing one or more NVMe drives. I installed an inexpensive ADATA 1TB NVMe drive as my boot device, and put some smaller drives in the remaining slots for future use.

Remember that NVMe drives like these generally don’t have a hardware RAID option, since they live on the PCIe bus rather than behind a RAID controller, so if you want to aggregate multiple drives, you’ll need to use your operating system’s functionality to do so.
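
If you do want to pool the extra drives under Windows 10, one option is a Storage Spaces “simple” (striped, non-redundant) space. Here’s a rough sketch, driven from Python for consistency with the other snippets; the pool and volume names are invented, it needs an elevated prompt, and I haven’t benchmarked it against the single-boot-drive layout I actually run. Run the Get-PhysicalDisk line by itself first so you can see exactly which disks would be swept into the pool.

```python
# Sketch: build a striped Storage Spaces volume from all poolable disks.
# Assumptions: Windows 10, elevated PowerShell available, and you really do want
# every disk that reports CanPool=$true in the pool. Names are placeholders.
import subprocess

POOL_SCRIPT = r"""
Get-PhysicalDisk -CanPool $true | Format-Table FriendlyName, Size, MediaType
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'NvmeScratch' -StorageSubSystemFriendlyName 'Windows Storage*' -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName 'NvmeScratch' -FriendlyName 'Scratch' -ResiliencySettingName Simple -UseMaximumSize
Get-VirtualDisk -FriendlyName 'Scratch' | Get-Disk | Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS
"""

subprocess.run(["powershell", "-NoProfile", "-Command", POOL_SCRIPT], check=True)
```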

Network Options

The 7910 tower system includes two Intel Gigabit Ethernet ports with RJ45 connectors, ready to connect to most home and office networks. However, since my home network features a Meraki MS42P switch with four 10 Gigabit Ethernet ports, I decided to take advantage of that capability and install an Intel X520-DA1 network card I already had on hand.

It is connected to the switch with a 1 meter 10Gtek SFP+ passive copper cable, meaning I don’t have to worry about Intel’s compatibility checks on optical modules. If you want to run fiber, you can get the approved Intel optical modules, or look to a third party like 10Gtek or StarTech, who make Intel-coded compatible SFP+ modules. I tried using a random SFP module, and the Windows driver failed to start with it connected.
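
A quick way to confirm the card actually negotiated 10 Gigabit after a cable or module swap is to check the reported link speed. A small Python wrapper (Windows assumed; adapter names will vary on your system):

```python
# List adapters with their status and negotiated link speed; the X520 should show "10 Gbps".
import subprocess

subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, Status, LinkSpeed"],
    check=True,
)
```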

Other details

I had an NVIDIA GTX 1050 Ti video card handy. Not the greatest cryptocurrency mining card, but good enough for a Dell U3011 2560×1600 desktop display. (Note to self: As long as you install the NVIDIA drivers, that is… although even at 1920×1200 it looked acceptable.)

There is an internal USB port for those of you considering a hypervisor or appliance build. You could also use this port for a rescue boot drive, as long as your boot order is set correctly. I use the free Macrium Reflect backup and recovery software, and might put its WinPE-based recovery environment on an internal USB drive and back up to the SATA/SAS drives automatically.

So where do we go from here?

I wrote the first draft of this post in October and set it aside. Since then the T7910 has been my daily driver and has worked quite well. I can’t say I’ve really taxed this configuration, although some video work in Adobe Premiere Rush CC did put a fair number of the CPU cores to work (something I had in mind when building the system). I’m tempted to expand the memory a bit more, but other than Google Chrome eating RAM, I haven’t run short so far.

What would you do with a 28-core Xeon v4 desktop system? Any configuration changes you’d make? Share in the comments, or join the conversation on Twitter and Facebook.

Note: The photo on this post is the T7910, with the NUC sitting on top of it for scale.
