In a few months, I will have the space and infrastructure to join the self-hosting community. I'm trying to prepare, as I know it can be challenging, but I somehow ended up with more questions than answers.

For context, I want to run a server with torrents, media (Plex, Jellyfin, or something else entirely - I haven't decided yet), photos (Immich, if it's stable, or something else), Rook, Paperless, Home Assistant, Frigate, AdGuard Home… possibly lots more. I will also need storage - I'm planning on 3×18 TB drives to begin with, but will certainly be adding more later.

My initial intention was to set up a NAS in a SilverStone CS382 (or a Jonsbo N3/N5, if they're reasonably priced). I've heard good things about Unraid and its ability to run Docker. On the other hand, I'm also hearing good things about Proxmox or NixOS with the NAS software running in a VM - though doing that with Unraid seems hacky. Maybe I should run a NAS and a separate server? That'd be more costly and seems like more maintenance work with no real benefit. Maybe I should go with TrueNAS in a VM? If I don't do anything other than NAS with it, TrueNAS shouldn't be that hard to set up, right?

I'm also wondering whether I should go with Intel for Quick Sync, AMD plus an Arc GPU, or something else entirely. I've read that AV1 is getting popular - is AMD getting more support there? I will buy Intel if it's clearly the better option, but I'm team Red and would prefer AMD.

Also, could anyone with a non-technical SO tell me how they find your self-hosted services? I've read about Cloudflare Tunnels and Tailscale, which will be a breeze for me, but I've got to think about the other users as well.

That's another concern for me - am I correct in thinking Tailscale and Cloudflare Tunnels are all I need to access the server remotely? I will probably set up a PiKVM or the RISC-V one as well - can that be exposed too? I will have a Dream Machine from Ubiquiti, so anything that needs to run to enable access to the server, I can run there. I'm not looking to set up anything more complicated like WireGuard - it's too much.

For additional context, I'm a software developer, I know my way around Docker and the command line, and I consider myself tech-savvy, but I'm not looking to spend every weekend reading changelogs and doing manual updates. I want to have an upgrade path (that's why I'm not going with Synology, for example), but I also don't want to obsess over it. Money isn't much of an issue - I can spare $1-2k on the build, not including the drives.

Any feedback and suggestions appreciated :)

  • mspencer712@programming.dev · +7 · 2 months ago

    Married, we both work from home, and we’re in an apartment.

    First, none of my weird stuff sits between her work and living-room PCs and the internet. The cable modem connects to a normal consumer router (running OpenWrt) with four LAN ports. Two of those are directly connected to her machines (one requiring a 150-ish foot cable), and two connect to my stuff. All of my stuff can be down and she still has internet.

    Second, no rack-mount servers with loud fans - mid-tower cases only. Through command-line tools I've found some of these are in fact capable of a lot of fan noise, but it never happens in normal operation, so she's fine with it.

    Separately, I'd say: have a plan for what she will need if something happens to you. Precious memories, backups, your utility and service accounts, etc. should remain accessible to her if you're gone and everything is powered off - but not accessible to a burglar. Ideally, label and structure things so a future internet installer can ignore your stuff and set her up with normal consumer internet after your business internet account is shut off.

    Also keep in mind that if you both switch over so every movie and show you watch only ever comes from Plex (which we both like), then in an extended power outage all of your media will be inaccessible. It might be good to save a few emergency-entertainment shows to storage you can browse from your phone - a USB or iXpand drive you can plug directly into your phone, for example.

  • linearchaos@lemmy.world · +3 · 2 months ago

    I'm running something surprisingly close to most of what you're asking for, sans Immich, which I'm waiting on for stability first. The warning on their site at the time, saying it's under constant development and not to use it as your primary photo store, is a bit worrisome.

    Unraid with 2 video cards

    • Plex container (primary video card)
    • Plex VM (the passed-through secondary card handles DVR and backups; it's also my Steam Remote Play provider)
    • Home Assistant VM (running it in a VM is nicer than a container because of HAOS)
    • Jellyfin container
    • All the video services pull from the same catalog. I use Jellyfin frequently but secondarily; it's my backup in case Plex heads in a direction I don't like. They've already shown some signs that I'm not going to like them in the future.
    • Deluge+VPN container
    • Cloudflare container (the first setup is actually a pain in the ass)
    • Tailscale plugin
    • SearXNG container - self-hosted search engine
    • Pi-hole in a container
    • Pi-hole on a Raspberry Pi

    Plex gets accessed remotely via its own remote capabilities

    Jellyfin gets accessed remotely via tailscale

    SearXNG is accessed remotely via Cloudflare

    I have a secondary Plex server sitting on a Raspberry Pi with the backup Pi-hole

    I am preparing to set up PeerTube. I haven't had a lot of luck with the container on Unraid. I run a fair amount of Proxmox at work, so I'll probably just use Proxmox for it.

    I run a completely separate, dedicated system for my cameras. I'm not running Frigate yet, but I'll get around to it eventually; I'm using Blue Iris at the moment.

    My Unraid box gets as much uptime as updates allow. I love being able to just JBOD my media disks together and still have some protection with parity.

    I find the containerized version of Plex to be more stable than my VM version, but that's probably my own fault, as I'm oversubscribing the VM.

  • Carlos Francisco 🦣@lile.cl · +2/-1 · 2 months ago

    I'm a very satisfied Proxmox user and I have almost all applications deployed as VMs or containers. If you're not a beginner with Linux/NAS, I think it's the best choice. On the other hand, I would completely rule out TrueNAS because it is too restrictive and hard to customize.

    @sodamnfrolic @selfhosted

  • zdanger@lemmy.world · +1/-2 · 2 months ago

    I’ve been running Unraid for over 5 years now and it has been great. I just checked the uptime on it and it’s been running for 146 days, 11 hours, 31 minutes. I should probably check for updates…

    I used to run a Threadripper 2950X with 64 GB of RAM as my main system; when Ryzen 5000 came out I built a new PC and the Threadripper system became the Unraid server. I threw it into a 24-bay 4U Supermicro CSE-846 with an LSI SAS9211-8i HBA and an extra RTX 3060 I had, for hardware transcoding in Plex. I have 64 TB of storage at the moment, with no drive larger than 8 TB - having 24 bays is nice for that. The server is in a rack in my cellar, so sound isn't an issue for me. I've thought about switching to an EPYC setup just to have IPMI built into the motherboard instead of buying a separate KVM device.

    I have 8 containers and 5 VMs running. I have multiple VLANs set up on my UDM Pro and Juniper switches: a camera VLAN, and one for IoT devices for Home Assistant. A Tailscale exit-node container is all I use to access the server remotely. I also have a RustDesk server VM set up for private remote desktop.

  • iAmTheTot@sh.itjust.works · +2 · 2 months ago

    My answer may not be quite as helpful for you, since you said you're a software dev and would probably pick up the more advanced options more easily than me.

    But for me, I was asking extremely similar questions a few months ago (still my only post on Lemmy, lol). I ended up trying Unraid, Proxmox, and TrueNAS.

    I went with Unraid and have no regrets. It's been super easy and I now have the all-in-one server box I've wanted for years.

    • sodamnfrolic@lemmy.sdf.org (OP) · +3 · 2 months ago

      One of the things I learned as a dev is not to overcomplicate things - my profession is very guilty of this and it bites us in the ass in the end 100% of the time. I'm slowly learning not to do that; thus, Unraid :) Thanks for the info.

      Do you have any issues with downtime? Are updates troublesome?

      • iAmTheTot@sh.itjust.works · +1 · 2 months ago

        I haven’t been running it for too long, as I just started putting this together a few months ago and took a while to decide. No issues with downtime though. Updates have been super easy with Unraid so far.

  • Justin@lemmy.jlh.name · +5/-3 · 2 months ago

    Unraid is bad at NAS and bad at Docker. Go with a separate NAS and application server.

  • catloaf@lemm.ee · +5 · 2 months ago

    I have one mini-ATX server with four drives in RAID 10. I find it easier to manage everything in one device. It runs Proxmox, with AlmaLinux in a VM that runs my Docker containers. Yes, it's a layer of inefficiency, but I keep it that way partially because I migrated the VM to Proxmox from ESXi, and partially because I'm not confident in LXC being able to do everything Docker can.

    I also run it that way because I have a handful of other VMs.

  • ѕєχυαℓ ρσℓутσρє@lemmy.sdf.org · +2 · 2 months ago

    What I've realized in my (very limited) experience with self-hosting is that it's always best to use a general-purpose server OS rather than anything geared toward a specific use case, unless that's the only thing you're going to use the machine for. So if you want a separate NAS box, it's a good idea to use TrueNAS on it. But on your main server, it's best to use some sort of RHEL-downstream distro like AlmaLinux.

  • pwet@lemmynsfw.com · +2 · 2 months ago

    After years of messing around with cheap, unreliable hardware and complicated setups, I settled on a very stable and simple setup: one huge Dell server with a lot of spare SAS bays and plenty of empty memory slots, driven by Proxmox. Within it are only 4 VMs: one for pfSense, one for Home Assistant, one for Docker, and one for ISPConfig, as I host for some friends. I ended up ditching TrueNAS, as it was such a pain to maintain and totally useless for my use case. Proxmox is good enough to run a simple ZFS NAS if you don't need to manage dozens of shares and users. It's now so hassle-free that I'm starting to feel inclined to break something just for the sake of it.

    • ripcord@lemmy.world · +2 · 2 months ago (edited)

      Which model Dell?

      Buying few-year old enterprise gear can be a really cost-effective way to get a ton of power and expandability. But the noise, footprint, and power requirements seem pretty niche, even for homelab/selfhost people.

      But I’m curious if you’re talking about a full-depth rack system like I’m assuming, or something else.

      Personally, I switched to a handful of very small-footprint systems (mostly NUC/SFF PCs, and some laptops), and I use cheap JBOD enclosures when I need to add external storage.

      • pwet@lemmynsfw.com · +2 · 2 months ago

        It's an R730xd. It draws 168 W idle with 128 GB of 2400 MHz RAM and 8×3.5″ spinning drives. I started with small desktop computers, but I ended up compromising on everything: RAM, disks, cards. Everything was breaking one after another, mostly because of heat, I would think. I (and my family) was constantly annoyed by the outages, so I invested in a proper rack in my garage. It's sometimes noisy and somewhat power hungry, but god… professional hardware is so comfortable to work with. iDRAC, IPMI, very good temperature management, lots of room for upgrades, reliability - I wouldn't go back to the nightmare of half-assed computers. I now run everything I can think of so smoothly that I rarely get complaints from anyone. To be honest, it's not only the hardware side. Using Traefik has been a massive improvement for my reverse proxying, finally getting rid of TrueNAS was a huge relief, and switching from being a hardcore 20-year Gentoo user to a Portainer noob was a clever move to finally get some time to use the services I host instead of messing around with hundreds of config files.

        By the way, I don't understand the huge paranoia about exposing services to the internet. I'm happy to share my mail, websites, Jellyfin, cloud services and whatever else with everyone interested. In the more than 30 years I've been online, I've never been hacked in any way. I might be lucky.

  • n4sdaq@lemmy.dbzer0.com · +4/-2 · 2 months ago

    Been running Unraid for almost a year. I was previously running Windows with nearly zero insight into the health of my apps, RAID, etc., which made me very nervous. Unraid makes it all so easy.

    I'm running many of the apps you mentioned, and Unraid's Docker implementation makes them easy to install, update, etc. I used Docker on Windows but it was not the same. I'm not a software dev, so I'm not sure why you said Unraid's Docker implementation is hacky - it seems good to me.

    The reason I switched to Unraid was that I had to add more storage to my RAID, and that was impossible in Windows without destroying the RAID and losing my data. I considered TrueNAS, but my understanding is the same is true there. They're supposed to be adding that capability Soon™, but who knows when it'll actually be available and reliable. Unraid lets you add more storage whenever you want, and the drives don't have to match. I love the flexibility.

    I use the Nginx Proxy Manager docker container to access my apps externally. My SO is not tech-savvy, and after setting up the individual apps with the domain I have, it's usually smooth sailing. If I ever need to do any mucking about with the server itself, I turn on UniFi Teleport. I also have a PiKVM but have only needed to use it a couple of times - it's just not necessary with how reliable Unraid is.

    My server has an i5-6600 and uses Quick Sync, which is great - more energy efficient than a dedicated GPU. I've considered adding a GPU but haven't run into a situation where I need one.

    TL;DR: highly recommend Unraid.

  • Chaphasilor [he/him]@feddit.nl · +2 · 2 months ago

    I’ve gone the TrueNAS SCALE route myself, with TN running on bare metal. All my containers/apps are set up through it, and I’ve also spun up Windows and Linux VMs without major issues, including GPU and USB passthrough.

    I do enjoy the security it gives me, with all my apps being versioned/snapshotted regularly and before every update, as well as the rest of my data. Since TN only uses ZFS and not something like MergerFS (which I believe Unraid uses), the upgrade path is a bit more restricted, so you should definitely look into your options up front. For example, you won't be able to expand a vdev (virtual ZFS disk) later on; you'll have to create a new one. And you can only use equivalent vdevs to form pools. That means if you start with 3 drives in a vdev for your main storage pool, you can only expand that pool by adding another 3 drives of the same capacity as a second vdev. So make sure you can stomach those costs, or go for fewer, cheaper drives with a large case.
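
    To put rough numbers on that (just a back-of-the-envelope sketch, assuming RAIDz1 vdevs like the 3-drive example above and ignoring ZFS metadata/slop overhead):

    ```python
    # Back-of-the-envelope usable capacity for a pool built from identical
    # RAIDz1 vdevs (one parity drive per vdev). Ignores ZFS overhead, so the
    # real numbers come out a bit lower than this.
    DRIVE_TB = 18          # the 18 TB drives from the OP's plan
    DRIVES_PER_VDEV = 3
    PARITY_PER_VDEV = 1    # RAIDz1

    def usable_tb(num_vdevs: int) -> int:
        data_drives = (DRIVES_PER_VDEV - PARITY_PER_VDEV) * num_vdevs
        return data_drives * DRIVE_TB

    print(usable_tb(1))  # 36 TB from the first 3x18 TB vdev
    print(usable_tb(2))  # 72 TB, but only after buying three more 18 TB drives
    ```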

    As for apps, you can set up Docker apps easily, and there are a large number of officially or community-maintained apps where any breaking changes and migrations are handled for you, so updating is a breeze. But you don't have as much flexibility as with a custom setup. TN has been becoming more generic in that regard though, switching from k3s to regular Docker, so you could probably play around with stuff via the CLI without major issues.

    Oh, and one more thing: you should probably use a separate, dedicated device for Home Assistant. Use a Raspberry Pi or one of their official boards and you'll have better support, more features, and redundancy, and you can still create backups on your NAS via SMB.
    A second device that's also connected via Tailscale doesn't hurt either, just in case.

  • schizo@forum.uncomfortable.business · +15/-1 · 2 months ago

    I just went with a plain boring Ubuntu box, because all the “purpose built” options come with compromises.

    Granted, this is about as hard-mode as it can get, but on the other hand I have 100% perfect support for any damn thing I feel like using, regardless of the state of support in whatever more specialized OS exists for the aforementioned thing.

    I probably wouldn't recommend this if you're NOT very well versed in Linux sysadmin stuff, and probably wouldn't recommend it to anyone who doesn't have any interest in sometimes having to fix a broken thing, but I'm 3 LTS upgrades, two hardware swaps, a full drive replacement, and most of a decade into this build, and it does exactly what I want 95% of the time.

    I would say, though, that containerizing EVERYTHING is the way to go. Worst case, you blow up a single service and the other dozen (or two, or three…) keep right on running like you did absolutely nothing. I can't imagine maintaining the 70-ish containers I'm running without them actually being containers, and/or without me being a complete nutcase who runs around the house half naked muttering about the horrors of updates.
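
    (The "horrors of updates" part is honestly mostly a dumb loop. A rough sketch of the idea, assuming you keep one compose project per directory under something like /opt/stacks - that path and layout are made up, adjust to however you organize things:)

    ```python
    # Sketch: walk a directory of compose projects and pull/recreate each one.
    # Assumes one docker-compose.yml per subdirectory of SERVICES_DIR; the
    # /opt/stacks layout is just an example, not a convention you must follow.
    import subprocess
    from pathlib import Path

    SERVICES_DIR = Path("/opt/stacks")  # hypothetical layout

    for project in sorted(p for p in SERVICES_DIR.iterdir() if p.is_dir()):
        if not (project / "docker-compose.yml").exists():
            continue
        print(f"Updating {project.name}...")
        subprocess.run(["docker", "compose", "pull"], cwd=project, check=True)
        subprocess.run(["docker", "compose", "up", "-d"], cwd=project, check=True)
    ```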

    I'm not anti-Cloudflare, so I use a mix of tunnels, their normal proxy, and some rawdogging of services via direct port forwards and a local nginx reverse proxy.

    Different services, different needs, different access methods.

    • Avid Amoeba@lemmy.ca · +3/-1 · 2 months ago

      This is the way. This machine has been going since Ubuntu 14.04 LTS. The platform swapped from an AMD Phenom, to an Intel i7, to an AMD Ryzen, and now a bigger Ryzen. The SSDs went from a single SATA drive, to NVMe, to a 512 GB NVMe mirror, to a 1 TB NVMe mirror. The storage went from a single 4 TB disk, to an 8 TB mirror NAS, to an 8 TB directly attached mirror, to 24 TB of RAIDz, to 48 TB of RAIDz. I've now activated the free Ubuntu Pro tier, so if Canonical is still around in 2032, this machine can keep operating for another 8 years with just hardware swaps on failure.

  • Atherel@lemmy.dbzer0.com · +2 · 2 months ago

    I was there 6 months ago, so I'll just share what I did. OS: I went with Unraid because you can mix different-sized HDDs without losing space; just make sure the parity disk is the same size as, or bigger than, the biggest data disk. I back everything up with Duplicacy to a dumb NAS (a WD My Book I got second-hand) and to external hosting via SSH.
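
    To make the parity rule concrete (a rough sketch with made-up drive sizes - in an Unraid-style array the usable space is just the sum of the data disks, protected by a parity disk at least as big as the largest of them):

    ```python
    # Unraid-style array math; the sizes below are illustrative only.
    data_disks_tb = [18, 10, 8]   # mixed sizes are fine
    parity_disk_tb = 18           # must be >= max(data_disks_tb)

    assert parity_disk_tb >= max(data_disks_tb), "parity disk too small"
    print(f"usable: {sum(data_disks_tb)} TB behind a {parity_disk_tb} TB parity disk")
    ```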

    Most of the CPU load is video transcoding, so I went with a 12th-gen i3-12100; it's more than enough for my usage. Just don't make the same mistake I did… I really recommend a better cooler than the boxed one. It can get loud when Unmanic starts converting bigger videos to H.265.

    My normal PC is fully team red, as it just works better on Linux for gaming, but for a NAS, 12th-gen Intel seems to be the way to go as far as my research shows.

    I don't use a GPU, and the slot for it is used by a Dell PERC H310 SAS controller in IT mode for more disks.

    Most services are not exposed, and I use WireGuard to access my server remotely. Individual Docker services are exposed with Nginx Proxy Manager and DynDNS; my domain is set to resolve to local IP addresses when at home or through the VPN, so I can always use the same hostnames with valid certificates. I use a simple bash script in a cron job to update my DNS zone.
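
    For reference, that cron script is basically just "look up my public IP, push it to the DNS provider". Mine is bash, but the idea in rough Python form (the update URL, hostname and token here are placeholders - every provider's API looks a bit different):

    ```python
    # Sketch of a dynamic-DNS updater run from cron. The provider endpoint and
    # token are hypothetical placeholders; swap in whatever your DNS host offers.
    import urllib.request

    IP_LOOKUP_URL = "https://ifconfig.me/ip"            # any "what is my IP" service
    UPDATE_URL = "https://dns.example.com/api/update"   # placeholder endpoint
    API_TOKEN = "changeme"                              # placeholder credential
    HOSTNAME = "home.example.com"                       # placeholder record

    def current_ip() -> str:
        with urllib.request.urlopen(IP_LOOKUP_URL) as resp:
            return resp.read().decode().strip()

    def update_record(ip: str) -> None:
        req = urllib.request.Request(
            f"{UPDATE_URL}?hostname={HOSTNAME}&ip={ip}",
            headers={"Authorization": f"Bearer {API_TOKEN}"},
        )
        urllib.request.urlopen(req)

    if __name__ == "__main__":
        update_record(current_ip())
    ```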

    I have other hardware to play around with and have worked with Proxmox and other solutions, but this NAS just had to work without a lot of tinkering, and I'm really happy with it.

  • Presi300@lemmy.world · +2 · 2 months ago (edited)

    I'm biased towards TrueNAS SCALE because, in my experience, it's been really rock solid running on bare metal. It also lets you set up things like Nextcloud, Tailscale, and a lot more in one click from their "app store". It's also got all the virtualization bells and whistles. As for ZFS, again, just like everything else, it's been rock solid, and setting up a ZFS pool is pretty much done for you when you install TN SCALE.

    As for remote access, I've always personally done it via a local WireGuard server, so I can't really compare it to Tailscale or whatever Cloudflare does… because I've never used those.

    If you need a GPU just for encoding, go on the second-hand market and pick up a used Nvidia RTX 2000/3000-series card. Intel Arc could also work, but it's a bit quirky AFAIK…

  • ahal@lemmy.ca · +5/-2 · 2 months ago (edited)

    I'm currently using Unraid for pretty much everything you listed, and I love it so much. I really appreciate being able to set up almost everything through the web interface - it makes my hobbies feel fun rather than just an extension of my day job.

    That said, I bought the licence before they switched to a subscription model. So if I were starting over I might look into free alternatives.