So as I look to build my first dedicated media server, I’m curious about what OS options I have which will check all the boxes. I’m interested in Unraid, and if there’s a Linux distro that works especially well I’d be willing to check that out as well. I just want to make sure that whatever I pick, I can use qbittorrent, Proton, and get the Arr suite working
Like others in here, I also set mine up with Debian and docker compose. Since it’s an always on server I wanted maximum stability. I don’t use unRAID, so not sure about compatibility for that.
Data protection is a big concern. Is that something you have in your setup?
I run nightly archiving backups using Borg Backup.
Its compression + de-duplication mean I can store 18 historical backups of about 422 GB each in only 367 GB of disk space.
That then gets mirrored to a cold storage drive manually every few months.
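For anyone curious, the nightly job can be as simple as something along these lines (the repo path, source paths, and retention counts here are placeholders, not my exact setup):

```sh
# Create a compressed, deduplicated archive named after the host and date,
# then prune so only a rolling set of archives is kept.
borg create --compression zstd --stats \
    /backup/borg-repo::'{hostname}-{now:%Y-%m-%d}' \
    /home /root /var /etc

borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 7 /backup/borg-repo
```

Stick that in cron or a systemd timer and it mostly takes care of itself.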
Ooh so I could do this to my media library?
If you want and have somewhere to store it.
I’m not all that concerned about the media drives; I don’t have a spare 30 TB to stuff that backup in, and that can be re-acquired if push comes to shove. I tend to just backup metadata + server config/database files along with everything in /home, /root, and /var.
Unfortunately not in my setup, but that’s just because I don’t have the money to upgrade it at the moment and nearly everything I have is stuff I can easily redownload.
Once I can save up for it I will up my storage and get some back ups set up.
I’d assume it’s probably Linux, even if it’s the worst in terms of Proton support, but it’s not like you need all the bells and whistles.
Yeah I’m not surprised. Weak Proton support sucks, but for a dedicated media server it’s not the priority
Yeah, I mean it’s understandable why Proton doesn’t prioritize Linux, but it’s a bummer.
I’m sure there are better options, but I’m running proxmox as my host and a windows server VM for my suite.
Depends on your experience, hardware, and other stuff.
You could easily use Debian or Ubuntu Server and install Docker if all you want is those listed services running on non-RAIDed drives.
You could try something like DietPi (which is what I’ve used since I started self hosting), which simplifies a few things and gives some helpful scripts on top of a basic Debian installation. It’s a simple setup but still just plain ol’ Debian, so it’s easy to set up however you like.
You could use something like CasaOS or ZimaOS which offer Web interfaces and integrate with docker for those with a “no tech” background up to technical users.
Proxmox is an option, but it involves learning a lot of Proxmox-specific stuff and IMO might be a bit overkill for your first server.
Personally, I’d go for something accessible to your tastes because everything nowadays has some kind of “easy setup” path for Plex/Jelly + Arr. Once it’s set up, use it! Then once you need a big change for better hardware or more bespoke software setups then start digging into more fancy setups.
I actually want to prioritise the data protection of some sort of RAID setup, and support for torrenting and whatnot would be secondary to that. Really what I’m trying to avoid is installing and setting up my system only to find out that the OS I’ve picked is terrible for torrenting afterwards.
I have a workable setup on consumer Windows 11 right now, so I see the next step as having a dedicated Media Server box which can give me plenty of storage, data protection (right now a drive failure would wipe out half my server), and room for future expansion. Once that’s sorted, then I’ll look into the Arr suite and more advanced torrenting stuff. I want to pick something good for that stuff now, though, so I don’t have a ton of headache down the road
I think there’s some deffo better OSes than my suggestions for RAID setups and stuff, bar ProxMox. Maybe it is worth you looking into those options!
That being said, any OS can torrent shit just fine. If it can run Docker or other containers (so 99% of suggestions here) you’re set.
Maybe if you can spare the hardware try setting up a RAID on a couple of different ISOs to test em. That’ll be the harder, or more permanent, aspect of the setup I think.
I use Alpine Linux for server-based stuff because it’s so light and the packages are kept up-to-date.
I’m currently playing with setting up a home server on an old PC, using Proxmox as the main OS with LXCs and VMs for the services. It’s not fully set up yet (still working on figuring out a reverse proxy to make my services available on the internet).
It’s neat tho, and there’s some helpful scripts for installing various containers and things online.
I would need that because I’m basically starting from zero with learning all this stuff lol. Using Tautulli remotely is a challenge for me right now if that gives any indication of my level of knowledge here
https://community-scripts.github.io/ProxmoxVE/ Check out Proxmox, then check out the above scripts. Start with the post-install script, then something like Ubuntu to get a feel for it; you can use the other scripts for specific containers.
“remotely is a challenge for me right now”
I’ve seen you mention this a few times and like mentioned elsewhere in here, set yourself a Tailnet up.
It’s fugging brilliant, the docs are written by some very clever people (note, I am best described as a copy/paste person) and are thorough, and you can use a GitHub or even a Google account for authentication.
Even grabbing a cheapo Raspberry Pi 4 gives you a 1 Gbps port (the rpi3 only has a 100 Mbps RJ-45 port, but would still suffice for lesser needs) for your own WireGuard VPN back to home. It’s P2P encrypted and the Pi can act as an exit node / subnet router, i.e. if you’re on someone else’s internet/cellular you can simply hit up your exit node to break out of any nanny filters, stop anyone else noseying at your traffic (obviously bar your ISP seeing outgoing requests, unless you have another VPN on your router), and also view and/or manage any devices on your home network/Tailnet by IP address.
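If it helps, the exit node / subnet router bit boils down to a couple of commands; the subnet below is just an example, swap in your own LAN range:

```sh
# On the Pi at home: let Linux forward packets, then advertise this node
# as an exit node and as a router for the home subnet.
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
sudo tailscale up --advertise-exit-node --advertise-routes=192.168.1.0/24

# On the laptop/phone you're travelling with, point traffic at the Pi
# (you also approve the exit node / routes once in the Tailscale admin console).
sudo tailscale up --exit-node=<tailnet IP or name of the Pi>
```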
Hell, I dumped an rpi down at a family member’s house that is part of the “stack” so I can help out remotely, but it seems someone has knocked the aerial out of the HAT again :/
Best thing ever.
Mmm good stuff, I’ll have to check out tailscale.
I ended up going with a Traefik setup, which works well, but more options/info is always good.
Using Debian 12.
openmediavault + Docker or TrueNAS Scale
So openmediavault running on the server, and then use one of the other two to get PMS, Proton VPN, qbittorrent, etc.?
openmediavault and TrueNAS are two alternative OSes, and Docker is the deployment mechanism to run services like qBittorrent or ProtonVPN.
Easy, Linux. I prefer Arch based because of AUR.
I wouldn’t use Arch on a server. Everything you install will probably be in a Docker container anyway, so fast updates for system packages aren’t important compared to stability. Good choices would be Debian or Fedora Server. I personally use Fedora, but the reason is just that I use Fedora on desktop too, so I know they have really good defaults (they’re really fast in adopting new stuff like Wayland, Pipewire, BTRFS with encryption and so on), and it’s nice that Cockpit is preinstalled, so I can do a lot of stuff through a WebUI. Debian is probably more stable tho; with Fedora there is a chance that something could break (even though it’s still pretty small), but Debian really just always works. The downside is of course very outdated packages but, as I said, on a server that doesn’t matter because Docker containers update independently of the system.
Nah me neither, I had my desktop mindset going there. I use truenas scale, couldn’t be happier.
I assume any Linux or *BSD distro will work, especially one with Docker (which is most/all of them?) so you don’t have to worry about things being packaged for your distro so long as there’s a docker image. My server is Alpine Linux.
Debian!
Always Debian.
I have been fighting with Docker and Fedora on these exact items all weekend. Good luck
I dunno what the best is, but if you choose nixos configure openvpn instead of trying to use the protonvpn package.
Just wanted to add that Wireguard is better than OpenVPN in every way and you should use that, except when you want to use it for torrenting. I don’t remember the reason, but that’s the one time when you should be using OpenVPN. I think it had something to do with OpenVPN supporting TCP and Wireguard being UDP only or something like that.
interesting. proton has example openvpn configs on their site which was hugely helpful to me. dunno if they have wireguard equivalents, or if those are needed.
It’d be weird if they didn’t have Wireguard configs, Wireguard is basically the standard nowadays. It’s faster and safer (the code base is way smaller, so the chance of there being security vulnerabilities is a lot lower and they can be fixed more easily).
Looks like they do have both openvpn and wireguard configs. Is it true that for torrenting openvpn is preferred? That’s basically the only reason I use vpn.
I think so. The main reason I use OpenVPN for that is just that that’s what Gluetun uses. You should search that up online tho, I don’t really remember why OpenVPN is better.
Wireguard uses UDP which results in better latency and power usage (e.g. mobile). This does not mean Wireguard can’t tunnel TCP packets, just like OpenVPN also supports tunneling UDP.
I’m using Wireguard successfully for torrenting.
As a note: while UDP is preferable for stability/power usage, UDP VPN traffic is often blocked by corporate firewalls (work, public free wifi, etc) and won’t connect at all. I run OpenVPN using TCP on a standard port like 80/443/22/etc to get through this, disguised as any other TLS connection.
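For anyone wanting to do the same, it’s only a couple of lines in the client .ovpn, assuming your provider actually offers a TCP endpoint on that port (the hostname here is a placeholder):

```
proto tcp
remote vpn.example.com 443
```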
Good point. Setting up shadowsocks and tunneling wireguard through is on my to-do list. I believe ss also works over TCP so it should work reliably in filtered networks.
why? the protonvpn package has been working fine for me on nixos
edit: nevermind, in a server environment you should configure openvpn (i just use protonvpn on my desktop)
I was maybe doing it wrong, but it never worked for me while openvpn did. Glad it works for someone!
I’m sure any server oriented Linux distro will do fine. I use Debian.
I will note, I don’t know if you’re planning on having remote access (e.g. through tailscale or reverse proxy), but if you are, I found it quite a challenge to get proton to play nice with them
What did you end up using instead? It’s not a necessity, but remote monitoring and access has come in very handy in the past
For a while I split tunneled tailscale through an openvpn .conf file, but recently switched to using qbittorrent in docker with gluetun. Qbittorrent is realistically the only service that needs to be behind a vpn so it works out well
For newcomers I’d recommend docker and images like gluetun for setting up the VPN. It makes it easy to forward ports (for remote access) while keeping the torrent client behind the VPN.
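Something along these lines is a reasonable starting point. This is only a sketch assuming Proton OpenVPN credentials; the paths, ports, and credentials are placeholders you’d change:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=protonvpn
      - VPN_TYPE=openvpn
      - OPENVPN_USER=your_openvpn_username
      - OPENVPN_PASSWORD=your_openvpn_password
    ports:
      - 8080:8080        # qBittorrent WebUI is published via gluetun's network

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all torrent traffic goes through the VPN container
    environment:
      - WEBUI_PORT=8080
    volumes:
      - ./qbittorrent-config:/config
      - /path/to/downloads:/downloads
    restart: unless-stopped
    depends_on:
      - gluetun
```

The idea is that qBittorrent has no network of its own, only gluetun’s, so if the tunnel drops there’s no other route out.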
I would also recommend it, and I even tried it when I started, but I just couldn’t get it to work. Probably permission issues.
Now that Truenas Scale supports just plain Docker (and it’s running on Debian) I think it’s a great option for an all-in-one media box. I’ve had my complaints with Truenas over the years, but it’s done a really great job at preventing me from shooting myself in the foot when it comes to my data.
I believe raidz expansion is also now in stable (though still better to do a bit of planning for your pool before pulling the trigger).
The raidz stuff, as I understand it, seems pretty compelling. A setup where I can lose any given drive and replace it with no data loss would be very ideal. So I would just run TrueNAS Scale, which would manage my drives, and then install everything else in Docker containers or something?
Yes, what you’re saying is the idea, and why I went with this setup.
I am running raidz2 on all my arrays, so I can pull any 2 disks from an array and my data is still there.
Currently I have 3 arrays of 8 disks each, organized into a single pool.
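In TrueNAS you build all of this through the UI, but the underlying ZFS layout is roughly equivalent to this (pool and device names made up):

```sh
# One pool ("tank") starting with a single 8-disk raidz2 vdev:
# any two disks in the vdev can fail without losing data.
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh

# Later, grow the pool by adding another whole 8-disk raidz2 vdev.
zpool add tank raidz2 sdi sdj sdk sdl sdm sdn sdo sdp
```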
You can set up something similar with any RAID system, but so far Truenas has been rock solid and intuitive for me. My gripes are mostly around the (long) journey to “just Docker” for services. The parts of the UI / system that deal with storage seem to have a high focus on reliability / durability.
Latest version of Truenas supports Docker as “apps” where you can input all config through the UI. I prefer editing the config as yaml, so the only “app” I installed is Dockge. It lets me add Docker compose stacks, so I edit the compose files and run everything through Dockge. Useful as most arrs have example Docker compose files.
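As an example of what one of those stacks looks like, here’s roughly the kind of compose file you’d paste into Dockge for a single arr (linuxserver image; the paths, IDs, and mount points are placeholders):

```yaml
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr
    environment:
      - PUID=1000          # run as your media user
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./sonarr-config:/config
      - /mnt/tank/media:/media    # wherever your pool's media dataset is mounted
    ports:
      - 8989:8989                 # Sonarr's default WebUI port
    restart: unless-stopped
```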
For hardware I went with just an off-the-shelf desktop motherboard, and a case with 8 hot swap bays. I also have an HBA expansion card connected via PCI, with two additional 8 bay enclosures on the backplane. You can start with what you need now (just the single case/drive bays), and expand later (raidz expansion makes this easier, since it’s now possible to add disks to an existing array).
If I was going to start over, I might consider a proper rack with a disk tray enclosure.
You do want a good amount of RAM for zfs.
For boot, I recommend a mirror of at least two of the cheapest SSDs you can find, each in an enclosure connected via USB. Boot doesn’t need to be that fast. Do not use thumb drives unless you’re fine with replacing them every few months.
For docker services, I recommend a mirror of two reasonable size SSDs. Jellyfin/Plex in particular benefit from an SSD for loading metadata. And back up the entire services partition (dataset) to your pool regularly. If you don’t splurge for a mirror, at least do the backups. (Can you tell who previously had the single SSD running all of his services fail on him?)
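In TrueNAS that backup is just a periodic snapshot task plus a local replication task in the UI, which under the hood amounts to something like this (dataset names are made up):

```sh
# Snapshot the SSD dataset that holds the service configs/databases,
# then replicate the snapshot into the main pool.
zfs snapshot ssd/services@$(date +%F)
zfs send ssd/services@$(date +%F) | zfs receive -F tank/backups/services
```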
For torrents I am considering a cache SSD that will simply exist for incoming, incomplete torrents. They will get moved to the pool upon completion. This reduces fragmentation in the pool, since ZFS cannot defragment. Currently I’m using the services mirror SSDs for that purpose. This is really a long-term concern. I’ve run my pool for almost 10 years now, and most of the time wrote incomplete torrents directly to the pool. Performance still seems fine.