I’m trying to get back into self-hosting. I previously used Unraid, and it worked well: VMs where needed, Docker containers whenever possible. The biggest benefit is that it has an easy way to give each container its own IP, so you don’t have to worry about port conflicts. Nobody else seems to do this for Docker as far as I can tell, and after trying multiple “guides”, none of them work unless you’re on some ancient and very specific hardware and software setup. I give up. I’m going back to Unraid, which just works. No more Docker compose errors because its Ubuntu host is using some port, forcing me to disable key features.

  • Meldrik@lemmy.wtf · 1 year ago

    Got to disagree. Remember to enable “nesting” on your container when running Docker.
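
    A minimal sketch of what that looks like from the Proxmox host shell (container ID 101 is an example; the same toggle is under the container’s Options in the web UI):

    ```sh
    # Allow the LXC container to run nested containers (Docker inside LXC).
    # Unprivileged containers may also need keyctl=1.
    pct set 101 --features nesting=1

    # Restart the container so the feature flag takes effect.
    pct reboot 101
    ```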

    In Proxmox you give your LXC container an IP, and then you use ports in Docker for your Docker containers.

    Unless I really have to use Docker, I install each service in an LXC container.

  • FalseDiamond@feddit.it · 1 year ago

    Proxmox doesn’t really do Docker containers well (yet, and maybe never will). It does do LXC (both are Linux containers at heart), but LXC isn’t as well supported or as versatile as Docker/Podman. I’m more than sure Unraid is great at what it does, but it’s not a VMware-killing virtualization solution for production the way Proxmox is, with its great support for redundancy, its versatility, and its relative ease of use if you come from a Linux background. OTOH, Proxmox is not Portainer: it’s for VMs and VM-like containers, at least for now. Supposedly kernel 6.something helps a lot with OverlayFS support in nested containers, but I can’t run bleeding-edge kernels in production to test that. Still, are you sure you need an IP per container?

    • johnnixon@rammy.site (OP) · 1 year ago

      Pihole seems pretty unhappy about sharing an IP address/ports with its Ubuntu host, so yeah, I’m set on giving it its own IP.

      • FalseDiamond@feddit.it · 1 year ago

        More than fair. I do have a Proxmoxy solution if you want it, which is to run it as an LXC. But it does seem that something more container-oriented may be your best bet, rather than sticking with Proxmox, if you don’t need the extra stuff it offers.

        Here’s an absolutely incredible resource for running Proxmox LXCs at home: https://tteck.github.io/Proxmox/

        Pihole is offered (spelled Pi-hole), as well as a ton of other useful services.

        • johnnixon@rammy.site (OP) · 1 year ago

          Yes, that was the problem. I got it running in an LXC and it worked fine. Docker remains a hot mess for 90% of what I’m trying to run.

          • somedaysoon@lemmy.world · 1 year ago (edited)

            So are you talking about this singular conflict that is extremely simple to fix? Do you have any other examples?

            Because it most certainly isn’t a reason to use an annoying distro like Unraid, or to absurdly put each service on a separate IP address.
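
            For what it’s worth, assuming the conflict is the usual suspect (Ubuntu’s systemd-resolved stub listener squatting on port 53, which the post doesn’t confirm), the standard workaround is a couple of commands on the host:

            ```sh
            # Stop systemd-resolved from listening on 127.0.0.53:53 so Pi-hole
            # (or a container mapping port 53) can bind the port.
            sudo sed -i 's/^#\?DNSStubListener=.*/DNSStubListener=no/' /etc/systemd/resolved.conf

            # Point /etc/resolv.conf at the real resolver config instead of the stub.
            sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

            sudo systemctl restart systemd-resolved
            ```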

  • midnight@infosec.pub · 1 year ago

    I’m confused about why you need a unique IP per VM/container. You can change the “external” port in your docker compose and be fine.

    I initially tried unRAID on bare metal, but I hated not being able to use the versions of Docker I wanted, or anything that wasn’t in the community repo.

    I currently run unRAID as a Proxmox VM (passing through my LSI card, and USB for the OS) and it works flawlessly. I didn’t even have to reinstall, since I passed through the same components it used when it was bare metal.
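
    For anyone curious, the passthrough is a couple of one-liners on the Proxmox host (the VM ID and device addresses below are placeholders, not universal values):

    ```sh
    # Pass the LSI HBA through to the unRAID VM (requires IOMMU enabled
    # in BIOS and on the kernel command line).
    qm set 100 -hostpci0 0000:01:00.0

    # Pass through the USB stick unRAID boots from (vendor:product ID).
    qm set 100 -usb0 host=0781:5571
    ```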

    Ultimately, use what works best for you but I do have to disagree that proxmox/docker is inferior.

    • johnnixon@rammy.site (OP) · 1 year ago

      Sometimes you can’t change the external port, because it has to be where it’s expected. And regarding being stuck in the community repo: try being restricted to whatever LXC documentation is available.

      I guess I could follow a 30-minute CLI procedure to spin up a container, or I can run a command or two in Docker. If Docker simply had its networking straight, without requiring Linux surgery with oven mitts on, this wouldn’t be a problem.
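
      For reference, what the guides have you write is a macvlan network in compose, something like the sketch below (the interface name, subnet, and addresses are examples, and this is exactly the part that never worked on my hardware):

      ```yaml
      # Per-container LAN IPs via Docker's macvlan driver.
      networks:
        lan:
          driver: macvlan
          driver_opts:
            parent: eth0                    # host NIC on the LAN (example)
          ipam:
            config:
              - subnet: 192.168.1.0/24
                gateway: 192.168.1.1

      services:
        pihole:
          image: pihole/pihole
          networks:
            lan:
              ipv4_address: 192.168.1.53    # the container's own LAN IP
      ```

      And a known gotcha on top of that: the Docker host itself can’t reach a macvlan container’s IP without extra interface plumbing.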

      • midnight@infosec.pub · 1 year ago

        Not saying I don’t believe you, but do you have any examples where changing the external port causes an issue? I change the port from its default on almost every single Docker container. To be clear, I’m referring to the left side of the colon in the port declaration:

        
        ```yaml
        ports:
          - "12080:80"
        ```
        

        I should also clarify that I don’t use LXC containers. My background made me more familiar with VMs, so I went that route. I’ve never felt like I’m performing surgery when deploying containers, but I have seen other complaints about Docker networking that I’ve apparently been lucky enough to avoid.

        Like I said though, do what works best for you. I don’t mind tinkering to get things tuned just right, which causes some friction with unRAID. I’ve invested enough time and energy in this that I just spin up a Proxmox VM, pass its IP to a few Ansible playbooks I wrote to get to a healthy base state, and then start deploying my Docker containers. I recognize not everyone wants to do this, though.

        • karlthemailman@sh.itjust.works · 1 year ago

          > Not saying I don’t believe you, but do you have any examples where changing the external port causes an issue? I change the port from its default on almost every single Docker container.

          Same here. I can’t think of an instance where this hasn’t worked. Perhaps if you have multiple applications that depend on each other? But you can just put those in the same compose file.
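
          A minimal sketch of that pattern (image names and credentials are placeholders): the services reach each other over the compose network by service name and internal port, so the host-side mapping can be anything.

          ```yaml
          services:
            app:
              image: ghcr.io/example/app          # hypothetical application image
              environment:
                # "db" resolves on the internal compose network; internal port
                # 5432 is unaffected by any host-side port mappings.
                DATABASE_URL: postgres://app:example@db:5432/app
              ports:
                - "18080:8080"                    # host port is arbitrary
              depends_on:
                - db
            db:
              image: postgres:16
              environment:
                POSTGRES_USER: app
                POSTGRES_PASSWORD: example        # placeholder
              # no host port mapping needed at all
          ```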

  • Osayidan@social.vmdk.ca · 1 year ago

    It’s always about choosing the right tool for the job/use case. If all you need is a machine with some storage to run a few services, and you like how Unraid works, then it’s the right tool.

    For a lot of other use cases it’s the complete opposite and unraid is seen as a pile of garbage.

  • myogg@lemmy.world · 1 year ago

    If you’re prepared for headaches at the start, then switching over to an ingress controller is the way to go.

    95% of my services run on a single IP address over HTTPS with a valid certificate. I can add as many services as I want without worrying about IP conflicts or invalid certificates.
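
    As a sketch of the pattern (using Traefik as the ingress, which is just one option; hostnames and versions are examples): one container owns the shared IP’s ports, and each service is routed by hostname via labels.

    ```yaml
    services:
      traefik:
        image: traefik:v2.11
        command:
          - --providers.docker=true               # discover labeled containers
          - --entrypoints.websecure.address=:443
        ports:
          - "443:443"                             # the single shared IP:port
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro

      whoami:
        image: traefik/whoami                     # demo backend, no host ports
        labels:
          - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
          - traefik.http.routers.whoami.entrypoints=websecure
          - traefik.http.routers.whoami.tls=true
    ```

    Add an ACME certificate resolver to the Traefik config and the valid-certificate part comes along for free.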

  • McSinyx@slrpnk.net · 1 year ago

    > there is an easy way to give each container its own IP so you don’t have to worry about port conflicts

    I solve this by running services directly on the same OS and giving them Unix sockets, but I’m probably unhinged.
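
    As a sketch (assuming a Python app served by gunicorn, purely as an example):

    ```sh
    # Bind the service to a Unix socket instead of a TCP port,
    # so there is no port to conflict with in the first place.
    gunicorn --bind unix:/run/myapp.sock myapp:app &

    # Clients that support it talk to the socket directly.
    curl --unix-socket /run/myapp.sock http://localhost/
    ```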

  • somedaysoon@lemmy.world · 1 year ago (edited)

    I highly prefer OMV and TrueNAS Scale over Unraid… honestly, I’d prefer almost anything else over it. I get that it’s probably easier for some people, but being locked into the Unraid way of doing things… ah, no, I prefer a bit more freedom.