A year ago I built a NAS to reduce my reliance on cloud services, and set up an arr stack. I went with TrueNAS Scale, which was on Bluefin at the time. In the past 12 months, TrueNAS Scale has been through FOUR major OS versions, with a fifth already announced. At least one of those involved a release train switch so, despite diligently checking for updates in the dashboard, I was left in the dust with an obsolete OS, and didn’t find out until it was already a huge hassle to upgrade.
I’ve been really happy with the utility and benefit of having this tool, but holy smokes how is anybody supposed to keep up with all of this? This is far from my only hobby, and I simply do not have the time, patience, or interest for a constant race to keep up with vetting new release versions and fixing what breaks every 3 weeks. I have enough tinkering hobbies as it is.
On top of that, there’s the whole blow up with TrueCharts, which has also left me with an entire suite of obsolete albatrosses around my NAS that I need to deal with. Am I still waiting for them to figure out an upgrade path? I don’t even know anymore.
Sorry for the rant, but I guess what I’m looking for is: how do you keep up with the constant maintenance and updates, and where do I go from here, in February 2025, with a system running Bluefin 22.12, a 32TB ZFS pool (RAIDZ1) that has to remain intact, and a handful of TrueCharts apps that I don’t want to lose the data from (e.g. Jellyfin configs/watch history)?
In life? Amphetamines.
At least you get updates. I’m running TrueNAS Core, which isn’t updated anymore, and I have some jails doing things, so I can’t easily migrate to Scale.
The good news is this still works despite no updates; it does everything it used to. There is almost zero reason to update any working NAS if it is behind a firewall.
The bad news is those jails are doing useful things and because I’m out of date I can’t update what is in them. Some of those services have new versions that add new features that I really really want.
I have ordered an N100 (it should arrive tomorrow), and I’m going to manually migrate the useful services to it one at a time. Once that is done, I’ll probably switch to XigmaNAS so I can stick with FreeBSD (I’ve always preferred FreeBSD). That will leave my NAS as just file storage for a while, though depending on how I like XigmaNAS I might or might not run services on that.
Core is still getting updates? I got one last week.
Only the most basic security fixes. It is out of date according to the pkg system, so jails cannot be updated.
Super lame. BSD is very preferable for core systems like this.
I know, I like BSD. However, because Core isn’t a supported version of FreeBSD, I cannot update the other things I run on my NAS. I’m more worried about an attack on those out-of-date services than I am about the few issues that have been fixed.
Yes, of course. So BSD TrueNAS is dead? That is a True shame, as BSD is rock-steady reliable and runs on truly ancient hardware just fine.
On life support. They haven’t pulled the plug yet, but it is coming. They are not updating anything that isn’t urgent, so new hardware support is dead, as are jails to do useful things. I’m probably moving to XigmaNAS in the near future.
That seems like a good option. I’ve got some test beds to try it out on.
“The good news is this still works despite no updates; it does everything it used to. There is almost zero reason to update any working NAS if it is behind a firewall.”
If all users and devices on the network are well behaved and don’t install every random app (even ones from the Play Store), then yeah, it’s less of a risk.
In the business world it’s pretty common to do staged or switchover upgrades: test new version in a lab environment, iron out the install/config details. Then upgrade a single production server and do a test with a small group of users. Or, build new servers with the new stuff, have a set of users run on it for a while, in this way you can always just move those users back to a known good server.
How do you do this at home? VMs for lots of stuff, or duplicate hardware for NAS type stuff (I’ve read of running TrueNAS in a VM).
To borrow from the preparedness community: if you have 1 you have none, if you have 2 you have 1. As an example, the business world often runs mission-critical systems in a redundant setup in regionally-different data centers, so a storm won’t take them down. The question is how to reproduce this idea in a home lab environment.
This is not practical for a home setup. Not because the extra hardware would be expensive, but because as soon as you have multiple systems doing the same thing, their state diverges, and for pretty much anything that is popular for selfhosting you cannot merge them again or migrate users between them without losing anything. Distributed databases alone are a huge PITA, and maintaining such redundant setups would be a million times more effort than just making sure that you can easily and quickly atomically roll back failed updates.
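On ZFS (which OP is already running), the atomic-rollback part is the cheap bit; a minimal sketch, with a hypothetical dataset name:

```
# Snapshot the app dataset before touching anything (dataset name is hypothetical)
zfs snapshot tank/apps@pre-upgrade

# ...attempt the upgrade; if it goes sideways, roll back atomically
zfs rollback tank/apps@pre-upgrade
```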
As I said: “how to reproduce this in a home setup”.
I’m running multiple machines, paid little for all of them, and they all run at pretty low power. I replicate stuff on a schedule, and I have a cloud backup I verify quarterly.
If OP is thinking about how to ensure uptime (however they define it) and prevent downtime due to upgrades, then looking at how enterprise does things would be useful; they’re the ones applying research into this very subject from universities and organizations like Microsoft and Google.
Nowhere did I tell OP to do things this way, and I’d thank you to not make strawmen of my words.
I run Debian on most of my systems and run all of my services in docker (with rare exceptions for node_exporter or stable core tools). My base systems get automatic security upgrades, and then I’ll manually check in every few weeks whenever I feel like it.
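In case it’s useful, the automatic security upgrades are just Debian’s stock unattended-upgrades package; a minimal sketch of the setup:

```
# Install Debian's stock auto-update tooling
sudo apt-get install -y unattended-upgrades

# Enable the periodic apt jobs (values are intervals in days)
printf 'APT::Periodic::Update-Package-Lists "1";\nAPT::Periodic::Unattended-Upgrade "1";\n' \
  | sudo tee /etc/apt/apt.conf.d/20auto-upgrades
```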
My services in Docker are version-locked to a specific major version (when there’s a tag available), so I can usually re-pull to get minor version updates freely without breaking anything. My few more finicky services only get manual upgrades from me every 6 months or so.
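Roughly like this; postgres is just an example of an image that publishes major-version tags, so check what your images actually tag:

```
# Pinned to a major tag: re-pulls pick up 16.x point releases, never jump to 17
docker pull postgres:16

# Same idea with compose: pin the tag in the yaml, then refresh in place
docker compose pull && docker compose up -d
```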
I usually stick to an OS version for as long as I can, and to that aim I stick to LTS versions with long support windows.
4 major versions in 12mo is…a lot. Especially if those include breaking changes for you. Yikes
I use Debian stable for my main OS for the stability, security and infrequent updates, and run all of my services in Docker containers to keep everything up to date.
Similar to the others although I have messed with Ubuntu, CentOS, Fedora, and even a few others for like a day or two each.
At the moment I am using Fedora. My drives are RAIDed, and my main storage has all the data and the Docker config directories.
Using Docker for everything, Watchtower for updates, and Portainer to manage the containers with a GUI. All the containers are pointed at /mnt/drive/allMyData. In there are my data folders: shows, movies, Plex configs for recording over the air, ebooks, documents, etc.
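Watchtower itself is just another container watching the Docker socket; the usual invocation looks something like:

```
# Watchtower polls for new images and recreates containers when one updates
docker run -d --name watchtower --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```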
Mainly I set it up this way so I can easily change distros if I want to and have all my services back up in an hour or so.
I started a text file that contains the command lines I have used to start all of my Docker containers. This way, if I need to, I can reference it and rerun the exact same commands, with volumes mapped to the same folders, and be back up and running in a few clicks. No need to back up the container if all the data in it is set up in folders in my main data directory.
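One entry in that file might look like this (image and paths are illustrative, based on the layout above):

```
# docker-commands.txt: one rerunnable line per service, all volumes under the data root
docker run -d --name plex --restart unless-stopped \
  -p 32400:32400 \
  -v /mnt/drive/allMyData/plexConfig:/config \
  -v /mnt/drive/allMyData:/data \
  plexinc/pms-docker
```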
However, I am running a separate hardware RAID setup underneath the OS. This way all my data stays safe as a separate volume.
First off: backups of the configs and any user data that you can’t just torrent again, should the inevitable happen.
Then set time aside to do updates; I spend Wednesday evenings updating and improving my setup.
Then find a way to track update announcements; I use both an RSS reader and newreleases.io to know when something I run gets an update.
I have automatic updates on everything. If it breaks, I fix it when I have time. If I don’t, it remains broken.
I could also just not do updates, but I like new features.
I don’t :) Mostly.
Honestly, I have an auto-backup system, and everything is set up to auto-update periodically. I also use Debian as the server distro, since it almost never breaks.
Debian, baby.
ngl the newest TrueNAS version is incomprehensible to me. It makes most of the videos on it obsolete, and the docs aren’t much better, all while trying to abstract Docker Compose in a way that makes it shit itself when you try to use anything not specifically developed to work with TrueNAS’s storage layout.
It’ll probably improve with time but I clearly picked the worst time to pick it up.
I’ve decided to either return to https://dietpi.com/ or try Proxmox and pray it’s more stable.
For one, I don’t use software that updates constantly. If I had to log in to a container more than once a year to fix something, I’d figure out something else. My NAS is just hard drives on a Debian machine.
Everything I use runs either Debian or some form of BSD.
Same, but openSUSE. Tumbleweed on my desktop and laptop, Leap on my servers.
And yeah, if I need to babysit something, I’ll use an alternative. I’ll upgrade when I’m ready to, which is usually over holidays when I’m bored and looking for a project.
Ansible.
How does that help here?
For automating maintenance and updates? How exactly does it not?
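e.g. one ad-hoc line that patches every Debian host in your inventory in one pass (“all” is the default inventory group):

```
# Ad-hoc apt dist-upgrade across every inventory host, escalating with sudo
ansible all -m ansible.builtin.apt -a "update_cache=yes upgrade=dist" --become
```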
They are complaining about the number of updates and breaking changes. Ansible is just a tool for bulk changes.
I use Debian, so what’s to keep up with? Apt upgrade is literally everything I need. My home server doesn’t take a lot of my time except when I want to tweak something or introduce something new. I don’t really follow the trendy stuff; I just have it do what I need.
Gentoo.
Daily automatic updates of the OS.
Services and containers are updated at random when I have time.
It’s been many years, and I have fun doing it.
Not a chore.