Hi, so I’ve ended up bagging myself a big Supermicro server. I want to try out a little bit of everything with it, but one thing I really want is for services that haven’t been used for a while to stop or sleep, and then wake up or start again on request, rather than me having to manually stop and start them. Is that a thing?

I know of portainer and whatnot, but I’m wondering if anyone has any advice on this.

I’m planning on putting Debian on it, I think (unless someone can convince me something else is better suited - I usually use Arch on my personal devices btw 😜)

Also, I know some basics of RAID, but I’ve only ever messed with RAID 0 on USB drives on a Pi. I have 8 bays but 2 are currently vacant. What is the process for adding an extra drive to a RAID array, or replacing one that already exists?

  • Chewie@slrpnk.net · 2 points · 15 hours ago

    Also, I know some basics of RAID, but I’ve only ever messed with RAID 0 on USB drives on a Pi. I have 8 bays but 2 are currently vacant. What is the process for adding an extra drive to a RAID array, or replacing one that already exists?

    It depends on your RAID controller (or software RAID). I use hardware RAID (on Dell and HP servers) as it’s easy and a known technology, although people seem to be a bit anti-HW-RAID these days.

    When replacing a drive, you just eject the old drive, wait a few seconds, put the new drive in, and most HW RAID controllers will start rebuilding the array automatically. Make sure your controller and drive bays support “hot swap” first! With HW RAID, replacing drives is great because you can increase capacity over time: replace each drive with a bigger model, and once the last drive has been swapped over, you can expand the array and start using the extra capacity without having to move data around. Most servers with HW RAID also have an “out-of-band” management system (iLO, iDRAC, IPMI) which you can configure to alert you if a drive has died (or is about to die).

    I would recommend keeping at least one spare of the same model of HD as whatever you use, just in case.

    I got burned when a WD drive failed and WD were being assholes about sending me a replacement (it was under warranty). Before I got the replacement, another drive started dying, and I couldn’t afford to buy a new one. In the end I lost 12TB of data 😭

    And re the above - “RAID is not a backup” :) plan accordingly…

    For software RAID, most Linux OSes support it out of the box. I only use it because it’s easy to expand partitions (most of my Linux machines are VMs on a system with HW RAID).

    This might be a useful article (it links to a previous one which is an introduction) and it explains a bit about SW RAID: https://www.howtogeek.com/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
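
    If you end up on Linux software RAID (mdadm) instead of a HW controller, the drive swap happens from the shell. A rough sketch only - the array and device names here are made up, so check yours with cat /proc/mdstat first:

      # Replace a failing member of /dev/md0
      mdadm /dev/md0 --fail /dev/sdb1       # mark the dying disk as failed
      mdadm /dev/md0 --remove /dev/sdb1     # remove it from the array
      # physically swap the disk, partition it like the others, then:
      mdadm /dev/md0 --add /dev/sdc1        # add the new disk; the rebuild starts automatically
      cat /proc/mdstat                      # watch the rebuild progress

      # Add an extra disk and grow a RAID5/6 array onto it
      mdadm /dev/md0 --add /dev/sdd1
      mdadm --grow /dev/md0 --raid-devices=5    # reshape to use the new disk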

  • tofu@lemmy.nocturnal.garden · 5 points · 10 days ago

    If you want to run VMs as well, Proxmox is the go-to thing in self-hosting. Maybe your Supermicro even has two network interfaces and can host a virtualized firewall or the like.

    Not quite sure about your “services go to sleep” idea. Ideally, services won’t use much CPU while idling, but they will still use RAM. You could probably build something like you described, but it’s mostly not “a thing” afaik.

    • MrScottyTay@sh.itjust.works (OP) · 3 points · 10 days ago

      Ah, so I might be thinking that leaving a service running is worse than it actually is, then?

      The motherboard has two network ports, and there’s a card with another two. There are also some fibre ports but I imagine I’ll never end up using them haha.

      I don’t really know much about firewalls at all yet, though.

        • Chewie@slrpnk.net · 1 point · 15 hours ago

          I rate OPNsense. I’ve not tried pfSense, but I use enterprise-level firewalls daily. When you’re used to Palo Alto, Cisco or CheckPoint firewalls, OPNsense is a lot harder to use, the interface isn’t great, and it has fewer features, but for free (and with cheap support if you need it) it’s pretty amazing. Upgrading to new versions is seamless, and once, when something happened and it broke, I reinstalled it from the ISO, uploaded my backed-up .xml config file, and it was back to normal. It’s more than adequate for my home internet connection and all the services I run in my DMZ etc.

  • ThorrJo@lemmy.sdf.org · 2 points · 10 days ago

    inetd, xinetd et al were how this was done back in the day.
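
    The modern equivalent on a systemd distro is socket activation: systemd listens on the port and only starts the service when the first connection arrives. A minimal sketch, assuming a hypothetical myapp binary that supports socket activation (unit names and the port are made up):

      # /etc/systemd/system/myapp.socket - systemd listens on the port on the service's behalf
      [Socket]
      ListenStream=8080

      [Install]
      WantedBy=sockets.target

      # /etc/systemd/system/myapp.service - only started when the socket sees traffic
      [Service]
      ExecStart=/usr/local/bin/myapp

      # then, from a shell:
      systemctl daemon-reload
      systemctl enable --now myapp.socket    # enable the .socket, not the .service

    If a service doesn’t speak socket activation itself, I believe systemd-socket-proxyd can sit in between (it has an --exit-idle-time option to shut things down again after a quiet period), but that’s more plumbing.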

    many services use very little energy when they are not actively being used. that’s definitely not true across the board though.

    I echo the suggestion of Proxmox.

  • solariplex@slrpnk.net · 2 points · 10 days ago

    Jerboa crashed mid-comment so I’ll be brief.

    Save yourself pain and increase your happiness by

    • using btrfs or zfs (snapshots, checksums and self-healing are great)
    • using a declarative approach rather than an imperative one, and keeping a copy of your configs elsewhere (I accidentally nuked my system multiple times, you should expect to do the same)
    • keeping backups. If you use zfs, sanoid https://github.com/jimsalterjrs/sanoid and syncoid are great https://discourse.practicalzfs.com/t/setting-up-syncoid-for-offsite-backup/1611 - see the sketch just after this list
    • having an extra tiny machine running the same system and workloads, where you test potentially risky stuff before doing it on the prod server
    • setting up metrics solutions like prometheus and grafana - they are your friend
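
    Rough sketch of the zfs snapshot/backup side (pool, dataset and host names are made up):

      # snapshots are instant and cheap
      zfs snapshot -r tank/data@before-upgrade     # recursive snapshot of a dataset
      zfs list -rt snapshot tank/data              # see which snapshots exist
      zfs rollback tank/data@before-upgrade        # roll back to the latest snapshot after a mistake

      # sanoid takes scheduled snapshots based on /etc/sanoid/sanoid.conf;
      # syncoid then replicates them to another machine, which is the actual backup:
      syncoid -r tank/data backupuser@backuphost:backup/data
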
  • ctry21@sh.itjust.works · 1 point · 10 days ago

    I think Portainer is probably the best tool for this, since you can easily go in and pause/start services as required. Just make sure to go into the containers in Portainer and check that the restart policy is set to “unless-stopped”, so you don’t get unwanted restarts after a reboot or anything like that.

    I don’t think Portainer has any automation options, but you could write a short cron script that runs docker compose down in the directory of each compose file to shut them down once a month, and pair that with the Uptime Kuma container to get a notification when your containers are down, so you can go into Portainer and restart the ones you still need. Though I’ve never had any real issue with running lots of containers at once - there are 20 on my Raspberry Pi right now and it still has just over a gigabyte of RAM left.
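
    Something like this could be a starting point - the paths and the schedule are just examples:

      # /etc/cron.d/stop-stacks - hypothetical entry: 03:00 on the 1st of every month
      #   0 3 1 * * root /usr/local/bin/stop-stacks.sh

      #!/bin/sh
      # /usr/local/bin/stop-stacks.sh - bring down every compose stack under /opt/stacks
      for dir in /opt/stacks/*/; do
          if [ -f "$dir/docker-compose.yml" ] || [ -f "$dir/compose.yaml" ]; then
              (cd "$dir" && docker compose down)   # start them again from Portainer when needed
          fi
      done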

  • greengnu@slrpnk.net · 0 points · 10 days ago

    You write up a procedure for setting up your server and any virtual machines contained within.

    Using declarative distros makes the procedure shorter and easier to maintain in the long run.

    Then you use it to set up your system (fixing issues in your procedure along the way).

    Then you wipe and do it again (this time it should go through without issue, or you may need another spin).

    Then you slowly grow your documentation and the services you have running.