I got a Synology DS1821+ array about two years ago, planning to finally cascade my other Synology units and let one or two go. So far, that has not happened, but I’m getting closer.
Synology DS1821+, photo courtesy of Synology. Mine looks like this but with a bit more dust.
The back story of my DS1821+
This is the 8-bay model with a Ryzen V1500B 4-core 2.2GHz processor, support for virtual machines and containers, and a PCIe slot, which I filled with a dual-port Mellanox ConnectX-3 (CX312A) 10 Gigabit SFP+ card that came in under $100 on eBay. The expansion options include two eSATA ports (usually used for the overly expensive DX expansion units) and four USB 3 ports (one of which now has a 5-bay Terramaster JBOD attached).
Today I could get a 40 Gigabit card for that price. In fact, I did for another project just last month, for about $90 plus tax with two Mellanox cables, but I’m not sure it would work in the Synology. It’s not too hard to find one of these 10 Gigabit cards for under $50 shipped in the US. Be sure to get the low-profile version (or one that ships with both brackets) for this Synology array.
I ran it for a while with 64GB RAM (not “supported,” but it works), and then swapped that out to upgrade my XPS 15 9570 laptop, putting that machine’s 32GB back into the Synology. I had a couple of 16TB MDD white label hard drives and a 256GB LiteON SSD as a cache. I know, I know, there are NVMe cache slots in the bottom, and you can even use them as a filesystem volume now.
Here’s where something went wrong
Sometime in the past couple of updates, the SSD cache started warning that it was missing but still accessible. I ignored it, since this system doesn’t see a lot of use and I don’t really care about the cache.
Volume expansion attempt, which failed. SSD cache warning showing here as well.
Earlier this month, I got a couple more of the MDD white label drives (actually ST16000NM001G-2KK103, according to Synology Storage Manager). I was able to expand the storage pool, but not the volume.
Successful storage pool expansion.
The volume expansion error. No filesystem errors were discovered.
“The system could not expand Volume 1. This may be caused by file system errors. To rescue your data, please sign in to your Synology Account to submit a technical support ticket.”
Well, as I went to the Synology website to enter a ticket, I remembered the SSD issue and wondered if that caused the problem with growing the volume.
Easier to fix than I had feared
Sure enough, removing the cache made the volume expand normally, bringing me from 93% used to 45% used. Not too bad.
Where do we go from here?
At some point in the next month or two, I plan to get three more of these 16TB drives, pull the unused 8TB drive and the unused 256GB SSD, and get the system maxed out.
I’m a bit torn between using this array to replace my Chia farms, at least for a while, or merge my other two substantial Synology arrays onto it and then use one of them (maybe the DS1515+) as the Chia farm with the DX513 and an assortment of external USB drives. Flexfarmer on Docker makes it pretty easy to run Chia farming on a Synology with minimal maintenance.
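For reference, running Flexfarmer that way can be as simple as one container pointed at a config file. A minimal sketch, assuming Flexpool’s flexpool/flexfarmer image on Docker Hub and a config.yml generated per their docs (the image tag, flag syntax, and config path here are my assumptions; check Flexpool’s current documentation):

# Config path on the Synology is hypothetical; adjust to your shared folder.
docker run -d --name flexfarmer --restart unless-stopped \
  -v /volume1/docker/flexfarmer/config.yml:/config.yml \
  flexpool/flexfarmer:latest -config /config.yml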
2022-10-07: Updated with dual Xeon v4 bladebit results.
2023-02-11: Updated with a madmax CUDA plotter “invalid ndevices” error.
2023-02-20: Updated with madmax CUDA plotting with 128GB RAM.
The backlog is getting more interesting, but in an attempt to compare a Xeon Silver processor to one or two E5-2620v4 processors for some future Chia plotting, I’ve arrived at some benchmarks and a bladebit caveat for the new diskplotter.
The idea is to replace my OG plots with NFT-style plots, while still self-pooling them. At some point I will probably expand my storage again as well.
Links are to original manufacturer specifications. If you find this document useful, feel free to send me a coffee. It might help with the memory upgrades on one or both machines too.
System one:
Ubuntu 22.04.1 LTS with current updates as of October 1, 2022
Quick observation: On my Monoprice Stitch power meter, this system goes from about 60W at idle to 160W while plotting with Madmax or Bladebit. Not surprising, but noisy and blowy.
System two:
Ubuntu 22.04.1 LTS with current updates as of October 1, 2022
Quick observation: This storage is very suboptimal for plotting, but it’s what came with the systems. I will dig into whether I have a larger, faster SSD. Unfortunately, this system only has USB 2.0 externally and one low-profile PCIe slot, so I’m a bit limited. I might put a 1TB NVMe drive in the PCIe slot, though, and see how that goes.
System three (I’ve written about this one before):
Dell Precision Workstation T7910
Dual Xeon E5-2650Lv4 (each 14c28t)
128GB RAM
4x 1TB Samsung NVMe drives on the Ultra Quad (PCIe 3.0 x4 per drive) in software RAID-0 (see the mdadm sketch below)
Ubuntu 22.04.1 LTS with current updates as of February 2023
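Since the NVMe scratch array does the heavy lifting in the results below, here’s roughly how such a stripe gets built with mdadm. A minimal sketch, assuming the four Ultra Quad drives enumerate as /dev/nvme1n1 through /dev/nvme4n1 (device names and mount point are placeholders; adjust to your system):

# RAID-0: no redundancy, which is fine for plot scratch space.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
  /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
sudo mkfs.ext4 -m 0 /dev/md0   # no reserved blocks needed on scratch space
sudo mkdir -p /mnt/plot-tmp
sudo mount /dev/md0 /mnt/plot-tmp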
All plotters left at default settings unless otherwise noted.
Metrics so far:
System one, Chiapos with 12200MB memory assigned
Time for phase 1 = 10876.922 seconds. CPU (147.640%) Sun Oct 2 19:31:42 2022
Time for phase 2 = 4247.395 seconds. CPU (97.160%) Sun Oct 2 20:42:29 2022
Time for phase 3 = 9153.365 seconds. CPU (95.640%) Sun Oct 2 23:15:03 2022
Time for phase 4 = 635.266 seconds. CPU (97.980%) Sun Oct 2 23:25:38 2022
Total time = 24912.949 seconds. CPU (118.660%) Sun Oct 2 23:25:38 2022
System one, Madmax with -r 10
Phase 1 took 1461.93 sec
Phase 2 took 773.745 sec
Phase 3 took 1241.66 sec, wrote 21866600944 entries to final plot
Phase 4 took 61.6523 sec, final plot size is 108771592628 bytes
Total plot creation time was 3539.07 sec (58.9845 min)
System one, Bladebit with 16GB cache configured
Bladebit plot with 16G cache:
Finished Phase 1 in 1744.37 seconds ( 29.1 minutes ).
Finished Phase 2 in 174.39 seconds ( 2.9 minutes ).
Finished Phase 3 in 1501.98 seconds ( 25.0 minutes ).
Finished plotting in 3420.74 seconds ( 57.0 minutes ).
System two with an SN750 NVMe drive (500GB), Bladebit with 24G cache
Finished Phase 1 in 1376.37 seconds ( 22.9 minutes ).
Finished Phase 2 in 148.09 seconds ( 2.5 minutes ).
Finished Phase 3 in 970.59 seconds ( 16.2 minutes ).
Finished plotting in 2495.06 seconds ( 41.6 minutes ).
Total plot creation time was 380.192 sec (6.33654 min)
Total plot creation time was 336.725 sec (5.61209 min)
Total plot creation time was 355.188 sec (5.9198 min)
Total plot creation time was 374.554 sec (6.24257 min)
Total plot creation time was 388.424 sec (6.47374 min)
The bladebit diskplot quirk:
If you get this error, there’s a good chance you didn’t specify the destination for the plot.
Allocating memory
terminate called after throwing an instance of 'std::logic_error'
  what():  basic_string::_M_construct null not valid
Aborted (core dumped)
Running diskplot with only a temp path and no destination would give this error. Unlike the other plotters, it does *not* assume that your temp path is your output path if you only specify the temp path. So you’d use something like this (keys and paths are placeholders):
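./bladebit -f <farmer_public_key> -c <pool_contract_address> \
  diskplot --cache 24G -t1 /mnt/plot-tmp /mnt/plots
# The destination directory is a required positional argument at the end;
# check bladebit's own help output, since flags may differ by version.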
With GPU-enhanced plotting now available in released (binary-only) code from Madmax, I decided to throw a modern GPU into my T7910, repair the post-22.04-upgrade mount failures, and give it a try.
As a reminder from previous posts, this is a dual E5-2650L v4 system with 128GB RAM and 4x 1TB NVMe on the Ultra Quad card. It boots from a 256GB NVMe drive on a PCIe card, and has 4x 8TB SAS drives that don’t seem to be recognized after a few months off. Probably a SATA controller or cable issue, but life goes on.
So I put one of my RTX 3060 LHR cards in, fixed up the NVMe stuff a bit, and went to run cuda_plotter_k32. It should do the partial memory plot, but alas, I got an error:
Invalid -r | --ndevices, not enough devices: 0
The card showed up in lspci, but then I realized it needed NVIDIA drivers. So I installed the 530 server bundle and tools, and then the plotter worked.
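For anyone retracing that, the sequence looked roughly like this. The exact package name for the “530 server bundle” is my reconstruction, and ubuntu-drivers will show what’s actually available on your system:

lspci | grep -i nvidia        # confirm the card shows up on the bus
sudo ubuntu-drivers devices   # list suggested driver packages
sudo apt install nvidia-driver-530-server   # name approximated; pick what's offered
sudo reboot
nvidia-smi                    # after reboot, the GPU should enumerate here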
Alas, the first GPU-enhanced plot seems to have wedged the machine against interactive use. That looks like a memory issue I’ll have to work out, probably by adding memory.
I will update this with further stats, and maybe make a comparison chart, as testing progresses. I’m also giving serious thought to upgrading the SSD in the dual-E5 machine.
Obligatory disclosure:
While I work for Supermicro at the time of this writing, the servers and all other elements of my home labs and workspaces are my own and have no association with my employer. This post is my own, and my employer probably doesn’t even remember I have a blog, much less approve of it.
This is another piece on a part of the Chia and cryptocurrency landscapes. See previous posts at https://rsts11.com/crypto
Need to set up a lightweight VPN to get into your low profile node remotely? Check out Stephen Foskett’s writeup on Zerotier. I’m using it on my Pi nodes to reduce NAT layers.
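Getting a node onto a ZeroTier network is only a few commands once you’ve created a network at my.zerotier.com (the network ID below is a placeholder, and you’ll still need to authorize the new member in ZeroTier Central):

curl -s https://install.zerotier.com | sudo bash   # official install script
sudo zerotier-cli join <your-16-digit-network-id>
sudo zerotier-cli status                           # should show ONLINE once the service is up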
Many if not most Chia farmers run a full node on their farming / plotting machine. Some larger farms will use the remote harvester model, with a single full node and several machines farming plots on local storage.
If you’re using Flexfarmer from Flexpool, or just want a supplemental node (maybe to speed up your own resyncing, or to supplement decentralization on the Chia network), you might want a dedicated node that doesn’t farm or plot. And for that use case, you don’t really need dual EPYC or AMD Threadripper machines.
In fact, a well-planned Raspberry Pi 4B 4GB or 8GB system, with an external USB drive, will do quite well for this use. If you want to do a few forks as well, or another blockchain full node, a moderately-recent Intel NUC would do quite well for not much more.
So here we’ll look at three builds to get you going. Note that any of these can run a full node plus Flexfarmer if you want, or just a full node.
If you don’t already have Chia software and a full node installed, go ahead and install and sync the node on a full-scale PC first. It may save you five days of waiting. My original build for this use case was to test the blockchain syncing time from scratch.
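The time savings come from copying the already-synced database to the Pi instead of re-syncing on-device. A minimal sketch, assuming default paths, matching Chia versions on both ends, and a hypothetical Pi hostname (the blockchain_v2 filename applies to Chia 1.3 and later):

chia stop all                  # stop the node on both machines first
rsync -avP ~/.chia/mainnet/db/blockchain_v2_mainnet.sqlite \
  pi@chianode.local:~/.chia/mainnet/db/
# then on the Pi: chia start node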
Syncing from a semi-optimal Pi 4B from scratch took about 8 days, for what it’s worth. One member of the Chia public Keybase forum reported about 28 hours to sync on an Intel Core i5-12600K.
Caveat: Raspberry Pi boards are a bit challenging to find these days, and even harder to find anywhere near the frequently touted $35 price point, or even under $150. And for Chia nodes, you want a minimum of the 4GB Pi 4B (8GB wouldn’t hurt). So while it’s possible to run on older hardware, it’s not recommended.
You might also be able to run on a Pi 400 (the Raspberry Pi 4B in a keyboard case, which is much easier to find for $100 or so, complete). I plan to test this soon.
Raspberry Pi with external USB SSD.
This was my initial build, and today it’s running at the Andromedary Instinct providing an accessible full node for about 10-15 watts maximum.
The Evergreen site and product line have evolved since this post was made in late 2021. I’m planning to update the coverage soon, but don’t be surprised if product names and prices have changed since then.
If you’ve bought your Evergreen Miner, you may have questions answered at my unofficial FAQ.
In the mean time, I have (as of January 2023) joined the Evergreen Systems Co. affiliate program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to the partner site. If you’d like to buy some of their gear, use the link https://evergreenminer.com/?ref=g2vkXM2BkDi2m, or use the referral code RSTS11 for a $10 discount, and I may receive a commission.
A few years ago, a turnkey desktop container/VM platform from Antsle came along, and I thought “this is cool, but I bet I could make one myself.” You can read about that here on rsts11.
Earlier this month I saw a low-power, Pi-based project for Chia farming come out, similar to the Antsle Nano (which I did build on my own). The project, Evergreen Miner (evergreenminer.com), is the brainchild of a young geek named Dylan Rose, who has worked with Amazon and other companies and has begun an interesting, forward-looking effort to really bring Chia farming to the masses.
I’ve written about building your own Chia system, and lots of people (tens of thousands at least) have done so. But some people aren’t up for the space, expense, time, tuning, software building, and so forth to make a node and farm.
However, a lot of people could benefit from the technology and platform now, and even more so in the future as the ecosystem matures. So the idea of a turnkey platform that’s relatively easy to build, maintain, and expand, even without plotting on your own, sounds pretty good.
Think all of the functionality and potential of Chia, with the ease of setup and management of a typical mobile app, and of course the power draw of an LED light bulb or two. No hardware or Linux or filesystem or SAS knowledge required.
As you saw in my 3D Printing series, after years of pondering a 3D printer, I was finally inspired to buy one when a pile of clusters came up on eBay from the defunct rabb.it video streaming service. In this series, I’ll take you through turning a rabbit door into some useful computing resources.
You can do something similar even after the clusters are sold out; a lot of people have probably bought the clusters and ended up not using them, so you’ll see boards on eBay or local marketplaces… or you can adjust the plans here to other models.
Update December 2021: This post languished in the drafts folder for about a year. I’ve updated links, and I’ll be reporting on some changes since the October 2020 launch of this cluster soon.
Let’s NUC this cluster out
Install memory, SATA cable, and SSD
Upgrade BIOS and set some annoying settings
Install your operating system
Set up central control
Install memory, SATA cable, and SSD
This is the least interesting part of the process, but you’ll need to do it before you can install an OS.
Start by loading the SODIMM of your choice onto the board. If you’re using an SSD like I am, you’ll connect the SATA cable to the black SATA connector next to the front USB stack, and the power cable to the beige connector perpendicular to the SATA connector. If you’re using a standard SD card, plug it into the SD slot to the right (as shown with the MAC address label). If you’re going with netboot (local storage? where we’re going we don’t need local storage!), just connect your network cable.
Upgrade BIOS and set some annoying settings
I created a bootable FreeDOS USB drive with Rufus, a common free software product used to create bootable USB media from ISOs (think Linux, Windows, etc). From there, we get the latest BIOS from Intel’s Download Center and place the file on the bootable drive. (Further BIOS instructions available on Intel Support.)
Update: There is a September 2022 BIOS update (0081) I just discovered in December 2022. I had originally installed BIOS update 0079 released April 20, 2020. You’ll need to search for NUC5PPYH even though the board’s model is PPYB.
Connect a monitor and keyboard, plug in the bootable drive, and apply power (or just reset the board). Use the F7 key to go into onboard flash update and load the BIOS file from the flash drive, or choose to boot from the flash drive and use the DOS-style flasher from there.
When you’ve done the upgrade and the system has rebooted, go into the BIOS with the F2 key and choose BIOS default values. Then go into the menus to enable all the USB ports (for some reason the default is to enable ports 1-3, leaving physical port 4 and header ports 5-6 disabled) as well as the SATA port if you are using that for storage. I’d also check the boot order (move net boot down in preference or disable outright if you don’t plan to use it). You can choose other settings as desired, and then press F10 to save and reboot.
Install your operating system
The easiest way to roll out the NUC side of the door would be to netboot an installation infrastructure like Cobbler. One of the first things I did when I went to work for the Mouse 10 years ago was setting up Cobbler for a deployment of RHEL 5.5.
Sure enough, Cobbler is still a thing, with very recent updates. I was able to get partway there this time and then, after several dozen runs to the garage and back to power cycle nodes, I gave up and installed from local media.
For CentOS 8, I did a manual install booting from a Rufus-created USB drive, with the SSD installed. I configured my storage and network options by hand, as well as user and root credentials. This left an “anaconda-ks.cfg” kickstart file on the installed system, which I copied to a second flash drive.
For the additional systems, I plugged both the CentOS 8 installer and the drive with the kickstart file into the NUCs and booted from USB. I ran into some strange storage issues with the drive not being blank, despite having chosen the kickstart option. Ideally, you would boot from the USB installer, it would find your kickstart config, and just roll out the software without intervention from there.
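For reference, the intended hands-off path is pointing the installer at the kickstart file on the second drive. A sketch, assuming that drive shows up as sdb1; press Tab (or e) at the installer’s boot menu and append this to the kernel line:

inst.ks=hd:sdb1:/anaconda-ks.cfg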
After that, if your DHCP server doesn’t assign hostnames you like, you can go in and set hostnames with hostnamectl or the like.
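That part is a one-liner per node; the naming scheme here is just an example:

sudo hostnamectl set-hostname nuc01.garage.lan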
Set up central control
If you use a configuration management platform like Ansible, Puppet, Chef, cfengine, or the like, you’ll want to set those up at this point.
I’ve gone with the lightweight method so far, with shared SSH keys from a management host (an Intel NUC with CentOS on it, originally intended to be the Cobbler server).
Use ssh-keygen to create your key files, and then ssh-copy-id can be used to push out the keys to your hosts. Then look into a more manageable option.
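A sketch of that, with a hypothetical admin user and nodes named nuc01 through nuc07 (adjust names and count to your cluster):

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519   # accept defaults or set a passphrase
for h in nuc0{1..7}; do
  ssh-copy-id -i ~/.ssh/id_ed25519.pub admin@$h
done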
Where do we go from here?
As I finish this post in December 2021, a year after the original build, I’m looking at going back and making a few changes to the cluster to bring it up as a Kubernetes platform.
With the demise of CentOS as many of us know it, I’m planning to replace the installed OSes with Ubuntu LTS. I’m planning to test out some CPU-based cryptocurrency mining, run Kubernetes platform(s) on it as well, and bring my second door up to speed (the RAM has been sitting in a box in the living room for a year now).
There’s a chance I’ll even do some lightweight Chia farming, using either bus-powered USB hard drives or some of the extra power connectors from the fused expanders for standard Seagate externals.
For those of you who have bought and built up these doors, what did you do with them? Feel free to share details and blog post links in the comments. I’ll put interesting ones into the body of this post as I see them.
Just one more thing
One much later update: if you’re looking for one of the posts that inspired this 3D-printed build, it’s “What in the NUC have I done?” from Reddit a few years back. The 20mm/25mm spacers are the key detail, and I’ve found it hard to find that post again searching for mixes of m2.5 20mm 25mm 30mm nuc nuc5 nuc5ppyh nuc5ppyb rabb.it spacer standoff, etc. Now, when I come back to look for it in a year or two, I’ll hopefully find it here where I left it.
And yes, that means I have another project in mind. I think I know where the 8GB DIMMs for the second set of NUCs are, and I’m ready to give up on the TK1 boards, so that means 15 NUCs in one stack. Stay tuned, and be patient. Or don’t.