Building the Intel NUC Chia Plotter

In an earlier post, I shared a design for a Chia plotter/farmer based around the Intel NUC NUC10i7FNH tiny computer.

Today I built that machine, and it’s running its first plot as I type this.

If you landed here, you might also be interested in my other recent Chia posts.

Pricing disclaimer: All prices, availability info, and links are accurate as of the writing of this article (May 3, 2021) unless otherwise noted. Prices and availability vary from day to day, so take this info with a grain of salt, especially if you are reading this in 2022 or beyond.

Here’s a quick rundown of what was involved in the process.

Shopping List

Feel free to shop in your preferred venues online or locally, or if you already have components, use them. These links are Amazon affiliate links, and if you use them, I get a few bucks to go toward my next hardware adventure. (I bought my NUC and RAM from Central Computer, a local computer store in Silicon Valley, and the NVMe drive came from Amazon.)

Base computer – NUC10i7FNH1 currently $570 at Amazon. You want the i7, and you want the FNH which is the “high” case that holds a 2.5″ drive as well as the m.2.

RAM – 32GB (2x16GB) DDR4 2666 or better SODIMM. Crucial 16GBx2 kit around $182 at Amazon. You can install 64GB, but you probably don’t need it with this processor.

Boot drive – I used a Samsung PM851 that’s not available on Amazon at the moment. Any 2.5″ SATA drive will do, even an HDD. Amazon has the WD Blue SSD at 250GB for $45 or 500GB for $60. If you have something else on hand that’s at least 120GB, go ahead and use it, or if you want some internal plot storage, get something bigger.

Plotting drive – 2TB Inland Premium NVMe is popular with its 3200TBW rating, about $240 on Amazon but out of stock for the next week. If you watch your drive life, you can use cheaper NVMe or even SATA m.2 storage. But check the TBW (Total Bytes Written, or Terabytes Written) and warranty for your drive and take that into account. 
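As a back-of-the-envelope check on drive life, you can divide the rated TBW by the data written per plot. Note that the ~1.8 TB of temporary writes per k32 plot used below is my assumption; actual write volume varies with plotter settings, so check your own numbers.

```python
# Rough NVMe lifespan estimate for plotting.
# WRITES_PER_PLOT_TB is an assumption (~1.8 TB of temp data per k32 plot);
# verify against your plotter's actual write behavior.
TBW_RATING = 3200         # drive's rated terabytes written (the 2TB Inland Premium)
WRITES_PER_PLOT_TB = 1.8  # approximate TB written per k32 plot (assumed)

plots_before_wearout = int(TBW_RATING / WRITES_PER_PLOT_TB)
print(f"~{plots_before_wearout} plots before reaching the TBW rating")
```

By that math, a 3200TBW drive is good for well over a thousand plots, while a cheaper 600TBW drive would hit its rating somewhere around 333 plots — which is why the TBW and warranty check matters.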

OS install drive – Get a USB 3.0 drive with 16GB or more of space, and use Balena Etcher or Rufus to burn Ubuntu 20.04 LTS to it. I like the SanDisk Ultra 32GB for its price point and quality, about $10 at Amazon.

External long-term plot/farm storage – I’ll be using an 8TB external drive in the near term, but you can use whatever you have, even NAS storage. 

Bonus: Staging disk. A user on r/chia suggested using a staging drive to copy your final plot file to, so that your plot process ends faster than if it has to be copied to slow disk. You can then automate moving the plot files to your external HDD at your leisure, and get back to plotting again up to an hour faster. For this, you can use an external USB 3.0 or better SSD like the WD My Passport SSD ($150 for 1TB), Crucial X8 ($148 for 1TB), or pretty much any SSD that will hold a batch of your plots (1TB will hold 9 plot files). You can also use a directory on your NVMe drive for this, but make sure you don’t let it fill up.
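Automating that staging-to-HDD move can be as simple as a small script on a cron schedule. Here’s a minimal sketch; the mount points and the `sweep_plots` helper are placeholders of my own, so adjust the paths to wherever your staging SSD and farm drive actually live.

```python
# Sweep finished plots from the fast staging drive to the slow farm drive.
# The /mnt/staging and /mnt/farm paths are placeholders -- edit for your mounts.
import shutil
from pathlib import Path

def sweep_plots(staging: Path, farm: Path) -> int:
    """Move every finished .plot file from staging to the farm drive."""
    if not staging.is_dir():
        return 0  # staging mount not present; nothing to do
    moved = 0
    for plot in sorted(staging.glob("*.plot")):
        # shutil.move copies then deletes when crossing filesystems
        shutil.move(str(plot), str(farm / plot.name))
        moved += 1
    return moved

if __name__ == "__main__":
    count = sweep_plots(Path("/mnt/staging"), Path("/mnt/farm"))
    print(f"moved {count} plot(s)")
```

Dropping that into a cron entry every few minutes keeps the staging SSD from filling up while the plotter gets back to work immediately.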


Rabbit Reorganization: Building low power clusters from a rabb.it door

As you saw in my 3D Printing series, after years of pondering a 3D printer, I was finally inspired to buy one when a pile of clusters came up on eBay from the defunct rabb.it video streaming service.

In this series, I’ll take you through turning a rabbit door into some useful computing resources. You can do something similar even after the clusters are sold out; a lot of people have probably bought the clusters and ended up not using them, or you can adjust the plans here to other models.

The first thing I will put out there is that these are not latest-and-greatest state-of-the-art computers. If you’re looking for a production environment or DDR4 high density memory, keep looking. But if you want an inexpensive modular cluster that’s only about 5 years out of date, there’s hope for you in here.

The original cluster

eBay seller “tryc2” has sold several hundred of these “door clusters” from rabb.it, a now-defunct video streaming service that closed up shop in mid-2019. As of this writing, they still have a couple dozen available. I call it a “door cluster” because the 42-inch by 17-inch metal plate resembles a door, which should give you an idea of how easy (or not) it is to maneuver and fit into your home or homelab as delivered.

The cluster bundle will set you back US$300, plus tax where applicable. While they’re available, you can get one at this link and I’ll get a couple of dollars in commission toward my next purchase.

The cluster includes 10 Intel NUC quad-core boards (mine were NUC5PPYB quad-core Pentium; my friend Stephen Foskett got some that were newer NUC6CAYB Celeron boards which took more RAM). These boards feature one DDR3L SODIMM slot (max of 8GB), one SATA port with a non-standard power connector (more on this later), Gigabit Ethernet, HDMI out with a headless adapter (to fool the computer into activating the GPU despite no monitor being connected), four USB ports, and a tiny m.2 slot originally intended for wireless adapters.

In the center of the “door” are five NVIDIA Jetson TK1 boards. These were NVIDIA’s first low-end foray into GPU development, sold to let individuals try out machine learning and GPU computing. There are much newer units, including the Jetson Nano (whose 2GB version is coming this month), if you really want modern AI and GPU testing gear, but these are reasonably capable machines that will run Ubuntu 14 or 16 quite readily. You get 2GB of RAM and a 32GB onboard eMMC module, plus a SATA port and an SD slot as well as gigabit Ethernet.

The infrastructure for each cluster includes a quality Meanwell power supply, a distribution board assembly I haven’t unpacked yet, two automotive-style fuse blocks with power cords going to the 15 computers, and a 16 port Netgear unmanaged Gigabit Ethernet switch. With some modifications, you can run this entire cluster off one power cord and one network cord.

What’s missing?

So there is a catch to a $300 15-node cluster. The Jetson nodes are component-complete, meaning they come with RAM and storage. The NUCs, however, are barebones, and you’ll need to add some form of storage and some RAM.

For the Jetson nodes, you’ll need an older Ubuntu machine and the NVIDIA Jetpack software loader. For the installation host, Ubuntu 14.04 is supported, 16.x should work, and later versions are at your own risk. You’ll also need an Ethernet connection to a network shared with your Ubuntu machine, as well as a MicroUSB connection between your Ubuntu host and the Jetson, to load the official software bundle.

For the NUCs, you’re looking at needing to add a SODIMM and some form of storage to each. I bought a bunch of 8GB SODIMMs on eBay ($28.50 each) to max out the boards. For storage, I tried USB flash drives and 16GB SD cards and had OS issues with both, so I bought the MicroSATACables NUC internal harness for each board, along with Toshiba Q Pro 128GB SATA III SSDs (these are sold out, but there’s a Samsung SM841N currently available in bulk for the same price, about $20 each).

If you do get a cluster bundle with the two-memory-slot NUC boards, you have two options beyond the above. The easy and documented option is to look for 4GB SODIMMs instead of 8GB; you may save a buck or two, or if you’re like me, you may have a box of 4GB SODIMMs from various upgrades and not have to buy anything. The other option is to update the BIOS on the NUC and try 2x 8GB. For some uses (vSphere or other virtualization clusters), 16GB will be worth the cost. I’d suggest booting with a known-good memory configuration to update the BIOS to the latest version, and then trying 2x 8GB.

One other thing you may need is a pack of spare fuses. I know they do their job, as I blew a few of them while plugging and unplugging the boards, so you may wish to have a few extras around. They’re the standard 3-amp “mini blade” fuse found at auto parts stores (although my local shops tended to have one card of them, if that, on the shelf). You can also buy a 10-pack for $6.25 (Bussmann brand), a 25-pack for about the same price (Baomain brand), or a 100-pack (Kodobo brand) for about $9.

Choose your own adventure

There are two paths to take once you have your gear collected and connected.

  1. How do we lay out the gear?
  2. What do we do with it?

I’ll look at my journey on both paths in upcoming episodes of this series. Spoiler: I’ve 3D-printed stacking plates for both the NUCs and the Jetsons, and I’m still working out how to mount the remaining pieces so I can e-waste the door itself. And as I write part 1, I still haven’t figured out what to do with the clusters.

Where do we go from here?

If you’ve bought one of these clusters (or more than one), feel free to chime in on the comment section and let me know what you’ve done with it. And stay tuned to this post (or @rsts11 on Twitter or Facebook) for updates on the next installments.

Money Pit: 3D Printing Part 1 – The Back Story, The Rationale, and The Assembly

This is one topic in a series of what I’m calling “money pit” projects. To be fair, it’ll be money and time pit topics, and nothing that you’d really have to get a second mortgage on your house to do… but things always get a bit out of hand.

This one is the 3D printing project. The second part is available at First Round of Enhancements, and part 3 should be out within a week.

The Back Story

It all goes back to five or so years ago, when I bought a couple of Banana Pro single-board computers from LeMaker in France. The Banana Pro was a Raspberry Pi-inspired board, but with gigabit Ethernet and external SATA on board. Great idea, but it didn’t sell as well as the RPi, so the accessory market was a lot lighter. I think I found only four cases in the past five years, many of which were not readily available in the US.

I did order a few cases from China that had a section for the SATA drive, and stocked up on cables for the SATA drives. But I wasn’t too happy with what was out there. I found some of the 3D printer sites where people had built cases, and thought “someday I’ll get a printer and make some cases.” I said that about every year for four years.

Then earlier this year, some more usable cluster kits came onto the used market from the now-defunct rabb.it startup. By “some,” I mean about a thousand of them. (Click on the photo below if you want to buy one of the kits yourself. It is an eBay Partner Network link, but I have no association with the seller other than as a buyer of one cluster kit so far.)

Single board computer array with Intel NUC5PPYH and NVIDIA Jetson TK1

Each kit contains ten NUC5PPYB quad-core Pentium NUC machines and five NVIDIA Jetson TK1 dev boards. I pondered it for several months (not as long as the printer), finally bought one, and it showed up a week later. (I’ll write more about that project separately, and you can read my friend Stephen Foskett’s Pack Rat series about the rabb.it clusters here.)

About the same time, I broke down and bought a Creality Ender 3 Pro printer from my local geek shop, Central Computers. Central also stocks the Creality-branded filament for $20 per 1kg roll, and they’re about four miles from home. You can also buy directly from Creality, or choose some sellers on Amazon like Sainsmart.

Experimenting with Intel Optane at home with the Intel NUC 7th Generation PC

Welcome back to rsts11 for the summer. We’ve got a lot to cover in the next few weeks.

I haven’t really done a build report in a while, so when I realized I was getting double-dinged for high power usage, I started looking around for ways to save power. One culprit was my desktop PC, which, while very nice (with 8 DIMM slots and lots of features I don’t use), draws around 250-300W with a 3rd-gen Core i7 processor.

I decided, based on availability and curiosity, to build out a 7th gen Intel NUC (Next Unit of Computing) PC, which conveniently supports Intel Optane memory. You can read a lot about the Optane technology, but in this application it’s a turbo-charged cache for internal storage. The newer NUCs support it in place of a more conventional m.2/NVMe SSD (used alongside a 2.5″ SSD or HDD), and of course you can use it as an overpriced SSD if you don’t want to use the Optane software.

See my earlier post about an Intel NUC for use with VMware. That NUC is currently running Ubuntu and Splunk for training in the home lab.

I’ll take you through the build manifest and process, and then we’ll look at benchmarks for five configuration permutations.

Build manifest and current prices (July 6, 2018)

  • Intel NUC (NUC7i7BNH) tall mini PC, $450 at Amazon
  • (Optional: NUC kit with preinstalled 16GB Optane module, $489 at Amazon)
  • Intel Optane Memory flash module (16GB $34 – $39 at Amazon, 32GB $58 for Prime members or $72 otherwise at Amazon)
  • Crucial CT2K16G4SFD824A 32GB DDR4 memory kit is currently $310 (it was $172 when I bought it a year and a half ago, ouch).
  • HGST Travelstar 7K1000 1TB 7200rpm SATA drive is $57.
  • Seagate FireCuda 2TB SSHD is $92, with the 1TB version available for $60.
  • Keyboard, mouse, USB flash drive for Windows install, and living room television with HDMI were already in house, but if you’ve read this far, you probably have them and/or know how to choose them. After installation you can use a Logitech Unifying device or a Bluetooth device, but for installation I’d suggest a USB cabled device.
  • Windows 10 Professional can be had for $150 give or take. The actual software can be downloaded from Microsoft but you will need a license key if building a new system without entitlement.

You’re looking at about $1,000 for the full system at today’s prices. If you don’t need 32GB of RAM, stepping down to 16GB should save you at least $100.
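As a sanity check on that $1,000 figure, here’s the arithmetic for the configuration I built (base NUC, 16GB Optane module, 32GB RAM kit, 1TB Travelstar, and Windows 10 Pro), using the July 2018 prices listed above:

```python
# Quick tally of the build at the prices listed in the manifest above.
parts = {
    "NUC7i7BNH kit": 450,
    "Optane 16GB module": 34,
    "Crucial 32GB DDR4 kit": 310,
    "Travelstar 1TB HDD": 57,
    "Windows 10 Pro": 150,
}
total = sum(parts.values())
print(f"${total}")  # lands right at the ~$1,000 mark
```

Swapping the RAM kit for a 16GB one, or the HDD for the FireCuda SSHD, shifts the total accordingly.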

NUC NUC (Who’s there?) VMware lab…

nuc-outside

VMware lab who? VMware lab in pocket-size format!

So in our last installment, I found out that I can upgrade my Shuttle SH67H ESXi servers to support Ivy Bridge processors. If you want to read more about that, feel free to visit my Compact VMware Server At Home post from Winter 2012, and my Upgrading my home VMware lab with Ivy Bridge post from Spring 2013.

The replacement boards came in from Shuttle, and they’ll be going back into the chassis. But as you may have seen at the end of the last post, I discovered the Intel Next Unit of Computing server line. The NUC line currently includes three models.

  • DC3217IYE – i3-3217U processor (1.8 GHz dual-core with 3MB cache), dual HDMI, Gigabit Ethernet, at $293 (pictured)
  • DC3217BY – i3-3217U processor, single HDMI, single Thunderbolt, no native Ethernet, at $323
  • DCCP847DYE – Celeron 847 (1.1 GHz dual-core with 2MB L3 cache), dual HDMI, Gigabit Ethernet, at $172
    (Prices are estimated list from Intel’s site – probably cheaper by a buck or ten at Amazon, Fry’s, Central Computer, or your favorite retailer. Feel free to use my links and help me buy the next one. 🙂 )

nuc-inside

All three have three USB 2.0 ports outside (one front, two rear), as well as two USB headers inside, conceivably useful for a USB flash drive or flash reader. They also have an mSATA-capable Mini-PCIe slot as well as a short mini-PCIe slot suitable for a WiFi/Bluetooth card. And there are two DDR3 SODIMM slots, supporting a reported 16GB of RAM (the processor supports 32GB, but the board/kit do not mention this). They all include VT-x with EPT.

I don’t see the Celeron being that useful for virtualization labs, but these are rather multifunctional for a little 4″ square computer. Imagine putting a broadband modem (3G/4G/Wimax) inside for reasonably potent portable kiosk purposes (VESA mount kit included). A card reader and a DVD burner for saving and sharing (and even editing) photos. Intel’s WiDi wireless display technology is supported as well, if you have a suitable receiver. Or use it with a portable projector for presentations on the go (no more fiddling with display adapters for presentations at your meetings!).

But we’re talking about a VMware lab here.

And let me get this out of the way… this was one of the coolest features of the NUC.


That’s correct, the box has its own sound effects.

Let’s get this party started…

Those of you of my era and inclinations may remember when KITT’s brain was removed and placed in a non-vehicle form factor on the original Knight Rider TV series. When I got ready to RMA my Shuttle motherboards, I was thinking about this sort of effect for a couple of VMs on the in-service ESXi server that was about to have its board sent to southern California. And that’s essentially what I had to do. I couldn’t quite miniaturize the server Orac-style, but that thought had crossed my mind as well.

So I picked up the DC3217IYE unit at Fry’s, got a 64GB mSATA card (Crucial m4 CT064M4SSD3) and a spare low-profile USB flash drive (Patriot Autobahn 8GB, PSF8GLSABUSB) at Central Computers, and took a Corsair 16GB DDR3 kit (CMSO16GX3M2A1333C9) from my stock. Assembling it took only a few minutes and a jeweler’s screwdriver, and then I was ready to install ESXi.

I backed up the VMs from the original system using vSphere Client, so that I could re-deploy them later to the NUC. Someday I’ll get Veeam or something better going to actively back up and replicate my VMs, but for the limited persistent use of my cluster (cacti and mediawiki VMs) this was sufficient.

One gotcha: Fixing the NUC network…

I originally tried reusing the 4GB USB drive my existing server was booting from, but ESXi didn’t recognize the NUC’s Ethernet interface. I installed a fresh 5.0u2 on a new flash drive, and still no luck. I found a post at tekhead.org that detailed slipstreaming the new driver into ESXi’s install ISO. I did so, installed again, and was up and running.

I did have to create a new datastore on the mSATA card — my original server had used a small Vertex 2 SSD from OCZ, which obviously wouldn’t work here. But I was able to upload my backed up OVF files and bring up the VMs very quickly.

And one warning I’ll bring up is that the unit does get a bit warm, and if you use a metal USB flash drive, it will get hot to the touch. My original ESXi lab box used a plastic-shelled USB drive, and I’m inclined to go back to that.

What’s next, Robert?

My next step is going to be bringing central storage back. There is a new HP MicroServer N54L on the market, but my N40L should be sufficient for now–put the 16GB upgrade in and load it up with drives. As those of you who saw my lab post last year know, it was running FreeNAS 8, but I’m thinking about cutting over to NexentaStor Community Edition.

I’ve taken the original Shuttle box and replaced a mid-tower PC with it for my primary desktop. I will probably set the other one up with a Linux of some sort.

And in a week or so I’ll grab a second NUC and build it out as a second cluster machine for the ESXi lab. All five of them are slated to go into my new EXPEDIT shelving thingie in the home office, and I’ll bring you the latest on these adventures as soon as they happen.