It's not exactly the self-hosting that's hard for me, but the maintaining and backing it up. So many "what if"s come to mind: what if the DB gets corrupted? What if the device breaks? If it's on a cloud provider, what if they decide to remove the server?

To self-host things confidently I'd need a local server and a remote one kept in sync, and setting that up is a hassle I don't want to take on.

So my question is: how safe is your setup? Are you still enthusiastic about it?

  • FarraigePlaisteach@lemmy.world · 6 months ago

    I got tired of having to learn new things. The latest was a reverse proxy that I didn't want to configure and maintain. I decided that life is short and now I just use Samba to serve media as files, plus one lighttpd server for my favourite movies so I can watch them from anywhere. The rest I moved to free online services or apps that sync across mobile and desktop.
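    For anyone curious, serving a movies folder this way takes very little lighttpd configuration. This is only a rough sketch with assumed paths and port, not the commenter's actual config:

    server.modules        = ( "mod_dirlisting" )
    server.document-root  = "/srv/media/movies"   # assumed location of the movie files
    server.port           = 8080                  # assumed port
    dir-listing.activate  = "enable"              # browse the folder, click a file to stream it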

    • nyar@lemmy.world · 6 months ago

      Caddy took an afternoon to figure out and set up, and it does your certs for you.
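      To give a sense of scale, a complete Caddy reverse-proxy config for two apps can look like the sketch below. The domains and ports are placeholders, not anything from this thread; for publicly reachable names like these Caddy obtains and renews the HTTPS certificates on its own:

      photos.example.com {
          reverse_proxy 127.0.0.1:8080
      }

      media.example.com {
          reverse_proxy 127.0.0.1:9090
      }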

    • iso@lemy.lol (OP) · 6 months ago

      Unfortunately, I feel the same. Judging by the commenters here, self-hosting that won't break seems very expensive and laborious.

    • BlackPenguins@lemmy.world · edited · 6 months ago

      A reverse proxy is actually super easy with nginx. I have an nginx server at the front of my server doing the reverse proxying and an Apache server hosting some of the applications being proxied.

      Basically 3 main steps:

      • Set up DNS with your host for each subdomain.

      • Set up your router to port-forward each port.

      • Set up nginx to proxy each subdomain to its port.

      DreamHost lets me manage all the records I want. I point them all to the same IP as my server.
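      For illustration only (the subdomains are the post's placeholders and the IP is a documentation address, not the real one), the records are just A records pointing each subdomain at the same address:

      photos.my_website_domain.net.   A   203.0.113.10
      media.my_website_domain.net.    A   203.0.113.10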

      This is my config file:

      server {
          listen 80;
          listen [::]:80;
      
          server_name photos.my_website_domain.net;
      
          location / {
              proxy_pass http://127.0.0.1:2342;
              include proxy_params;
          }
      }

      server {
          listen 80;
          listen [::]:80;
      
          server_name media.my_website_domain.net;
      
          location / {
              proxy_pass http://127.0.0.1:8096;
              include proxy_params;
          }
      }
      

      And then I have dockers running on those ports.

      root@website:~$ sudo docker ps
      CONTAINER ID   IMAGE                          COMMAND                  CREATED       STATUS       PORTS                                                      NAMES
      e18157d11eda   photoprism/photoprism:latest   "/scripts/entrypoint…"   4 weeks ago   Up 4 weeks   0.0.0.0:2342->2342/tcp, :::2342->2342/tcp, 2442-2443/tcp   photoprism-photoprism-1
      b44e8a6fbc01   mariadb:11                     "docker-entrypoint.s…"   4 weeks ago   Up 4 weeks   3306/tcp                                                   photoprism-mariadb-1
      

      So when you go to photos.my_website_domain.net, DNS sends you to the same IP as my_website_domain.net. My nginx server kicks in, sees the 'photos' subdomain in the request, and proxies it to http://127.0.0.1:2342, my PhotoPrism server. So you could go to http://my_website_domain.net:2342 or http://photos.my_website_domain.net; either one works. The reverse proxy just provides the nicer shortcut.

      Hope that helps!

      • andyburke@fedia.io · 6 months ago

        🤷‍♂️ I could spend those two hours with my kids.

        You aren’t wrong, but as a community I think we should be listening carefully to the pain points and thinking about how we could make them better.

      • asbestos@lemmy.world · 6 months ago

        Fuck nginx and fuck its configuration file with an AIDS-ridden spoon. It's anything but easy if you want anything other than the default config for the app you want to serve.

        • BlackPenguins@lemmy.world · 6 months ago

          I only use it for reverse proxies. I still find Apache easier for web serving, but terrible for setting up reverse proxies. So I use the advantages of each one.

      • Alphane Moon@lemmy.world · 6 months ago

        I had a pretty decent self-hosted setup that was working locally. The whole project failed because I couldn’t set up a reverse proxy with nginx.

        I am no pro, very far from it, but I am reasonably OK with Linux and technical research. I just couldn't get nginx and reverse proxies working, and it wasn't clear where to ask for help.

  • LordCrom@lemmy.world · 6 months ago

    I have a rack in my garage.

    My advice: keep it simple, keep it virtual.

    I dumpster-dove for hardware and run Proxmox on the hosts. Not even clustered, just simple standalone Proxmox hosts. They connect to my Synology storage device and that's it.

    I run Nextcloud for WebDAV contacts and calendar (fuck Google), and it does photo and file storage too. The Nextcloud client is free from F-Droid for Android and works on Debian desktops like a charm.

    I run a Minecraft server.

    I run a home automation server.

    I run a media server.

    Proxmox backs everything up on a schedule.

    All I need to do now is set up off-site backup for the important Synology data and I'm all set.

    It's really not as hard as you think if you keep it simple.

  • root@lemmy.world · 6 months ago

    I try to balance what I find enjoyable and worth the effort against what ends up becoming more of a recurring headache.

  • irotsoma@lemmy.world · 6 months ago

    Automate as much as possible. I rsync all of my hosted stuff, both at home and in the cloud, to an online NAS and a home NAS. Updates for the OS and low-level libraries are automated. The other updates are generally manual; that lets me set aside time for fixing problems that updates might cause, while still getting most of the critical security updates. And my update schedules are generally during the day, so that if something doesn't restart properly, I can fix it.
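    The rsync part can be as small as a pair of cron entries. This is only a sketch with made-up paths and hosts, not the setup described above:

    # /etc/crontab - nightly sync of app data to the home NAS and the online NAS
    0 3 * * *   root  rsync -a --delete /srv/appdata/ backup@home-nas:/backups/appdata/
    30 3 * * *  root  rsync -a --delete /srv/appdata/ backup@online-nas:/backups/appdata/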

    Also, whenever possible I budget a fair amount of time for updates, far beyond what they should actually take. That way I won't be rushed into reverting to a backup and having to find time later to redo it. Most of the time that leaves me extra time for analyzing stats to see if I can improve performance or save money with optimizations.

    I've never had a remote provider just suddenly vanish, though I use fairly well-known hosts. As for local hardware, I just do without until I can buy a replacement. Or, if that's going to take some time, I have old hardware I can set up as a makeshift temporary replacement: old desktop computers, and experimentation hardware like my Le Potato, which isn't powerful enough for much but is OK in the short term.

    And finally, I've been moving to more container-based setups that are easier to get up and running again. I've been experimenting with Nomad, Docker Swarm, K3s, etc., along with Traefik and some other reverse proxies, so I can keep the workers air-gapped for security.

  • thirdBreakfast@lemmy.world · 6 months ago

    I started as more "homelab" than "selfhosted" at first - so I was just stuffing around playing with things, but then that seemed sort of pointless, I wanted to run real workloads, and then I discovered that was super useful and I loved extracting myself from commercial cloud services (Dropbox etc). The point of this story is that I built most of the infrastructure before I was running services that I (or family) depended on - which is where it can become a source of stress rather than fun, and where I'm guessing you're finding yourself.

    There's no real way around this (the pressure you're feeling): if you are running real services, it is going to take some sysadmin work to get to the point where you feel relaxed about being able to deal with any problem quickly. There's lots of good advice elsewhere in this thread about bits and pieces to do this - the exact methods are going to vary according to your needs. Here's mine (which is not perfect!).

    • I'm running on a single mini PC & a Synology NAS set up for RAID 5.
    • I've got a nearly identical spare mini PC, and swap over to it for a couple of weeks at a time (originally every month, but stretched out when I'm busy). That tests my ability to recover from a hardware failure.
    • All my local workloads are in LXC containers or VMs on Proxmox with automated snapshots that are my (bulky) backups, but allow for restoration in minutes if needed.
    • The NAS is backed up locally to an external USB drive that's not usually plugged in, and to a lower-specced similar setup 300 km away.
    • All the workloads are dockerised, and I have a standard directory structure and compose approach (see the sketch after this list), so if I need to upgrade something or do some other maintenance on something I don't often touch, I know where everything is without looking back at the playbook.
    • I don't use a script or Terraform to set those up; I've got a Proxmox template with Docker and Tailscale etc. installed that I use, so the only bit of unique infrastructure is the docker compose file, which is source-controlled on Forgejo.
    • Everything's on UPSs.
    • I have a bunch of Ansible playbooks for routine maintenance such as apt updates, also in source control.
    • All the VPS workloads are dockerised with the same directory structure, and behind Nginx Proxy Manager. I've gotten super comfortable with one VPS provider, so that's a weakness. I should try moving them one day. They are mostly static websites, plus one important web app that I have a tested backup strategy for, but not an automated one, so that needs addressing.
    • I use a local and an external Uptime Kuma for monitoring, enhanced by running a tiny server on every instance that just exposes a disk-free and memory-free API that Uptime Kuma can consume.
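    The directory convention isn't spelled out above, but a hypothetical per-service layout like this one captures the idea that everything for a service lives in one predictable place (the service names are examples, not the actual stack):

    /opt/stacks/
    ├── jellyfin/
    │   ├── docker-compose.yml      # the only unique infrastructure, kept in git
    │   └── data/                   # bind-mounted volumes sit next to the compose file
    ├── forgejo/
    │   ├── docker-compose.yml
    │   └── data/
    └── uptime-kuma/
        ├── docker-compose.yml
        └── data/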

    I still have lots of single points of failure - Tailscale, my internet provider, my domain provider etc, but I think I’ve addressed the most common which would be hardware failures at home. My monitoring is also probably sub-par, I’m not really looking at logs unless I’m investigating a problem. Maybe there’s a Netdata or something in my future.

    You've mentioned that syncing to a remote server for backups is a step you don't want to take. If you mean managing your own remote server is the step you don't want to take, then your options are a paid backup service like Backblaze, or physically shuffling external USB drives (or extra NASes) back and forth to somewhere off-site - depending on what downtime you can tolerate.

  • brygphilomena@lemmy.world · 6 months ago

    I work IT for my day job managing a datacenter and cloud infrastructure.

    I host mostly Plex, Home Assistant, and Immich. Immich has its data backed up; I don't care about the Plex data. If it all dies, so be it.

    I have a colocated server that houses some websites and email, plus some random other things I've set up and tested. It's got backups, and downtime is fine.

    If my self hosted stuff dies, it doesn’t matter. Nothing in my life ultimately relies on it.

  • hperrin@lemmy.world · 6 months ago

    My setup is pretty safe. Every day it copies the root file system to its RAID, into a folder named after the day of the week, so I always have 7 days of root fs backups. From there, I manually back up the RAID to a PC at my parents' house every few days. That transfer is started from the remote PC, so if any sort of malware infects my server, it can't infect the backups.
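    The daily part of that scheme boils down to something like the script below. It's only a sketch - the mount point, excludes and tool choice are assumptions, not the actual script:

    #!/bin/bash
    # copy the root filesystem into a day-of-week folder on the RAID,
    # overwriting the copy made on the same weekday last week
    DAY=$(date +%A)   # e.g. "Monday"
    rsync -aAX --delete \
      --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run --exclude=/tmp \
      --exclude=/mnt \
      / "/mnt/raid/rootfs-backup/$DAY/"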

  • MTK@lemmy.world · 6 months ago

    Don't overthink it. Start small, with a home server, then add stuff; you will see that it's not that crazy.

    I personally have just one home server that creates encrypted backups locally and uploads them to Backblaze.
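    The comment doesn't say which tool does this; restic with a Backblaze B2 backend is one common way to get client-side-encrypted backups there, roughly like this (bucket name, path and credentials are placeholders):

    export B2_ACCOUNT_ID=xxxxxxxx
    export B2_ACCOUNT_KEY=xxxxxxxx
    export RESTIC_PASSWORD='...'                                # key that encrypts the repository
    restic -r b2:my-backup-bucket:server init                   # one-time: create the encrypted repo
    restic -r b2:my-backup-bucket:server backup /srv/appdata    # nightly: encrypted, deduplicated snapshot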

    This gives me the privacy I need, since everything is on a server that I own, while the backups sit with a big, reliable company.

    It's not perfect, but it fits my threat model.

  • namelivia@lemmy.world · 6 months ago

    My advice would be: be pragmatic. An error in a backup script that I didn't notice wiped the time-tracking data I had been collecting in my self-hosted database for over a year. I got really anxious at first, because of my mistake and because of the lost data. But at the end of the day... who cares, life goes on. This is only a hobby.

  • NeoNachtwaechter@lemmy.world · 6 months ago

    Not safe at all. I go for robustness instead: I prefer relying on things that do not break easily (like ZFS and RAIDZ) rather than dwelling on "what could possibly go wrong".
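    For anyone who hasn't used it, that kind of robustness is cheap to set up: a single-parity RAIDZ pool that survives one failed disk is a one-liner (the pool name and device names here are just examples):

    # three-disk RAIDZ1 pool; data stays available if any one disk dies
    zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc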

    And I have never quite figured out how to do restores, so I neglect backups as well.

  • ancoraunamoka@lemmy.dbzer0.com · 6 months ago

    First of all ignore the trends. Fuck docker, fuck nixos, fuck terraform or whatever tech stack gets shilled constantly.

    Find a tech stack that is easy FOR YOU and settle on that. I haven’t changed technologies for 4 years now and feel like everything can fit in my head.

    Second of all, look at the other people using commercial services and see how stressed they are: "Google banned my account", "YouTube has ads all the time", "the app for service X changed and it's unusable", and so on.

    Nothing comes for free in terms of time and mental baggage.

    • GBU_28@lemm.ee · 6 months ago

      Docker is not a shill tech stack. It is a core developer tool that is certainly not required, but it is certainly not fluff either.

    • Lem453@lemmy.ca · 6 months ago

      Yes, you should use something that makes sense to you but ignoring docker is likely going to cause more aggravation than not in the long term.

      • tuhriel@discuss.tchncs.de · 6 months ago

        Yep, I went in this direction…until I gave in during a bare metal install of something…

        Docker isn't hassle-free, but most setup guides for apps are much, much easier with Docker.

        • barsquid@lemmy.world · 6 months ago

          Docker/Podman or any containerized solution is basically the easiest way to get really nice maintenance properties: updating one app won't break others, won't take down the whole system, and apps can be moved from machine to machine.

          Containers are a learning curve, but I think they're very worth it for home setups - compared to something like Kubernetes, which I'd say is less worth it unless you already know it or want to learn it.

          • kieron115@startrek.website · edited · 6 months ago

            Docker takes a lot of the management work out of the equation, as many of the containers update automatically. Manual updates are as simple as recreating a container with a new image instead of your local one. I'd also suggest trying Portainer (a graphical management interface for Docker). Breaking the various options out into a GUI helped me learn the ins and outs of Docker better, plus if you end up expanding to multiple Docker hosts you can manage them all from one console. I have a desktop, a laptop, and an RPi 4B all running various containers, and having a single pane of glass for management is such a convenience.
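            For compose-managed services, that manual "recreate with a new image" step is just two commands, run from the folder that holds the docker-compose.yml:

            docker compose pull      # fetch newer images for the services in this file
            docker compose up -d     # recreate only the containers whose image changed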

            • Lem453@lemmy.ca · 6 months ago

              Not to mention the advantage of infrastructure as code. All my Docker configs are just a dozen or so text files (compose). I can recreate my server apps from a bare VM in just a few minutes, then copy the data over to restore a backup, revert to a previous version, or migrate to another server. Massive advantage compared to bare metal.
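              As a rough illustration of how little text that is, one of those files might look like this (the service, image and paths are placeholders, not this commenter's actual stack):

              services:
                jellyfin:
                  image: jellyfin/jellyfin:latest
                  restart: unless-stopped
                  ports:
                    - "8096:8096"
                  volumes:
                    - ./config:/config     # app state lives next to the compose file
                    - /mnt/media:/media    # bulk data gets restored separately from backup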

  • sloppy_diffuser@sh.itjust.works · 6 months ago

    Immutable NixOS. My entire server deployment, from partitioning to config, is stored in git on all my machines.

    Every time I boot, all runtime changes are "wiped", which under the hood is really just BTRFS subvolume swapping.

    Persistence is possible, but I'm forced to deal with it explicitly, otherwise it gets wiped on boot.

    I use LVM for mirrored volumes for local redundancy.

    My persisted volumes are backed up automatically to Backblaze B2 using rclone. I don't back up everything; stuff I can download again is skipped, for example. I don't currently have anything that requires putting a process into "maint mode", like a database getting corrupted if I back it up while it's being written to. When I did, I'd either script gracefully shutting down the process or use its export functionality if it supported one.
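    The rclone side of that can be a single scheduled command. In this sketch the remote name, bucket and paths are placeholders for whatever is configured in rclone.conf:

    # mirror the persisted data to a B2 bucket (deletions in the source propagate to the copy)
    rclone sync /persist b2remote:my-backup-bucket/persist --fast-list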

  • constantokra@lemmy.one · 6 months ago

    All of your issues can be solved by a backup. My host went out of business. I set up a new server, pulled my backups, and was up and running in less than an hour.

    I'd recommend docker compose. Each service gets its own folder inside your docker folder, and all of its volumes are folders inside that service's folder. Each night, run a script that stops all of them, starts Duplicati, backs everything up to a remote server or WebDAV share or whatever, and then starts them back up again. If you want to be extra safe, back up to two locations. It's not that complicated if it's just your own services.
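    A nightly script in that spirit might look roughly like this - every path and service name is a placeholder, and how the Duplicati job itself gets triggered is left out:

    #!/bin/bash
    # stop every stack so the volumes are quiescent, back up, then start everything again
    cd /srv/docker
    for d in */; do docker compose -f "$d/docker-compose.yml" stop; done
    docker compose -f duplicati/docker-compose.yml up -d
    # ... wait here for the Duplicati backup job to finish ...
    for d in */; do docker compose -f "$d/docker-compose.yml" start; done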

  • Andromxda 🇺🇦🇵🇸🇹🇼@lemmy.dbzer0.com · 6 months ago

    It's not exactly the self-hosting that's hard for me, but the maintaining and backing it up. So many "what if"s come to mind: what if the DB gets corrupted? What if the device breaks? If it's on a cloud provider, what if they decide to remove the server?

    Backups. If you follow the 3-2-1 backup strategy - three copies of your data, on two different kinds of storage, with at least one copy off-site - you don't have to worry about any of those scenarios.

  • terminhell@lemmy.dbzer0.com · 6 months ago

    Others have said this, but it’s always a work in progress.

    What started out as just a spare OptiPlex desktop and the need for a dedicated box for Minecraft and Valheim servers has turned into a rack in my living room with a few key things that I and others rely on. You definitely aren't alone XD

    Regular, proactive work goes a long way. I also started creating tickets for myself, each with a specific task. This way I can break things down, have reminders of what still needs attention, and track progress.

    • barsquid@lemmy.world · 6 months ago

      Do you host your ticketing system? I’d like to try one out. My TODO markings in my notes app don’t end up organized enough to be helpful. My experience is with JIRA, which I despise with every fiber of my being.

      • monomon@programming.dev · 6 months ago

        I have set up Forgejo, which is a fork of Gitea. It's a git forge, but its ticketing system is quite good.

        • barsquid@lemmy.world · 6 months ago

          Oh neat, I was actually planning to set that up to store scripts and some projects I'm working on. I'll give the tickets a try then.

        • barsquid@lemmy.world · 6 months ago

          We built Vikunja with speed in mind - every interaction takes less than 100ms.

          Their heads are certainly in the right place. I’ll check this out, thank you!