Hello, I’ve been using an old laptop as my home server for about a year, and I think it’s time to upgrade to something better since it’s starting to feel a bit too slow.

I was considering buying a Synology, but I’d prefer something custom because I hate that manufacturers sometimes abandon support or change all their terms of service.

My budget is about $1000 USD. I’m looking for at least 20TB of storage, and the option to add a graphics card later would be nice.

What hardware do you recommend? What software? Also, could this work with an N100 mini PC?

I’ve been using Ubuntu Server with Docker containers for several services, but I mainly use it for Nextcloud.

  • pezhore@infosec.pub · 1 month ago

    This is basically my homelab: a Synology 1618 plus 3x Lenovo M920Q systems with 1TB NVMe drives. I upgraded to a 10Gb fibre switch so they run Proxmox + Ceph, with the Synology offering additional fibre storage via its add-on 10Gb fibre card.

    That’s probably a few steps up from what the OP is asking for.

    Splitting out storage and compute is definitely a good first step toward better performance and failure resiliency.

    • mipadaitu@lemmy.world · 1 month ago

      Any idea what your power consumption is for the 1618? I currently have a 720, but with only two drives it’s kind of limiting for HDD upgrades.

      • pezhore@infosec.pub · 1 month ago

        Unfortunately, no - not specifically. I want to get a Kill A Watt meter at some point. The best I can do is share my UPS’s reported power output - currently around 202-216W, but that includes both my DS1618 and the DS415+, along with my Ubiquiti NVR and two of my Lenovo M920Qs.

        I should probably look at what adding the 5-bay external expansion would take power-wise, and maybe decommission the very aged 415.

        Edit: this is also my annual reminder to finally hook up the USB port on my UPSs to… something. I really want a smart setup - “oh shit, there’s a power outage and we’re running low on reserves; intelligently and gracefully shut things off in this order” - but I never got around to it.
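        For what it’s worth, the staged-shutdown idea above is pretty simple to script on top of Network UPS Tools once the USB port is hooked up. Here’s a rough sketch in Python - the hostnames, stage thresholds, and UPS name (`myups`) are all made up for illustration, and it assumes NUT’s `upsc` command is available and passwordless SSH shutdown works:

        ```python
        """Staged shutdown sketch: stop the least-critical gear first as the
        UPS battery drains. Assumes Network UPS Tools (NUT) is configured and
        `upsc myups@localhost battery.charge` returns a percentage."""
        import subprocess
        import time

        # Hypothetical hosts, ordered by how expendable they are.
        STAGES = [
            (60, ["m920q-1", "m920q-2"]),  # at 60% battery: compute nodes
            (40, ["nvr"]),                 # at 40%: the NVR
            (20, ["synology"]),            # at 20%: storage goes last
        ]

        def hosts_to_stop(charge: int, already_stopped: set) -> list:
            """Return hosts whose stage threshold has been crossed."""
            out = []
            for threshold, hosts in STAGES:
                if charge <= threshold:
                    out.extend(h for h in hosts if h not in already_stopped)
            return out

        def battery_charge() -> int:
            # `upsc` ships with NUT; "myups" is a placeholder UPS name.
            raw = subprocess.check_output(
                ["upsc", "myups@localhost", "battery.charge"])
            return int(raw.strip())

        def main():
            stopped = set()
            while True:
                for host in hosts_to_stop(battery_charge(), stopped):
                    subprocess.run(["ssh", host, "sudo", "poweroff"])
                    stopped.add(host)
                time.sleep(30)

        # Call main() to run the polling loop (left out here on purpose).
        ```

        NUT’s own `upssched` can do something similar with timers, but a script like this makes the shutdown ordering explicit.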

        • mipadaitu@lemmy.world · 1 month ago

          If you’re running Home Assistant, you can add some inline power-monitoring plugs. I like the ThirdReality ones, because you can set them to “default on” or “default off” after a power failure and run them on a local Zigbee network without requiring internet access.

        • mipadaitu@lemmy.world · 1 month ago

          Oof, that’s a lot of juice.

          I’m running a UPS, Syno720+, old gaming laptop as a Portainer host, my wifi, router, cable modem, and switches, and that’s only using about 50W for everything. Pretty sure the Synology is using the bulk of that power, but I don’t have data to back that up.

          I’d like to upgrade a few things, but I’m really trying to keep it below 75W. Ideally below 50W if I can. I think my old laptop is good for now, I just want more flexibility in my NAS if I can get it without bumping up the power budget.

          • pezhore@infosec.pub · 1 month ago

            To be fair - both Synology units are running big spinny NAS drives. I could reduce my capacity and my power usage by going with SSDs, but shockingly, I can’t seem to figure out what to cull from the 35TB of combined storage.

            I’m debating moving my Vault cluster from a ClusterHAT to pods on my fresh Kubernetes deployment - and if I virtualize Pi-hole, that would also cut some power consumption. Admittedly, I’m going overboard on my “homelab” - it’s more of a full-blown SMB at this point, with a Palo Alto firewall and a Brocade 48-port switch. I do infosec for a living though, and there’s reason to most of my madness.

    • voracitude@lemmy.world · 1 month ago

      Splitting out storage and compute is definitely a good first step toward better performance and failure resiliency.

      Exactly why I’ve been considering doing it this way for my new setup! I had to leave my last one on the other side of the planet, and I’ve felt positively cramped with just a couple of TB worth of internal drives. Can’t wait to properly spread out again.

    • ddh@lemmy.sdf.org · 1 month ago

      I’m interested in how you like Ceph.

      My setup is similar: a DS1522+ volume serves as shared block storage over iSCSI for three Proxmox nodes. Two nodes are micro PCs and the third runs on the 1522+ itself. There’s a DS216j for backups.

      • pezhore@infosec.pub · 1 month ago

        Ceph is… fine. I feel like I don’t know it well enough to maintain it properly. I only went with 10GbE because I was basically told on a homelab subreddit that Ceph will fail in unpredictable ways unless you give it crazy speeds for its storage and network. And yet, it has perpetually complained about too many placement groups:

        1 pools have too many placement groups
        
        Pool tank has 128 placement groups, should have 32
        

        Aside from that and the occasional monitor falling over, it’s been relatively quiet? I’m tempted to use the Synology for all the storage and carve the 10GbE network up for VM traffic instead. Right now I’m using bonded USB 1GbE copper and it’s kind of sketchy.

        • nickwitha_k (he/him)@lemmy.sdf.org · 1 month ago

          I maintained a Ceph cluster a few years back. I can verify that speeds under 10GbE will cause a lot of weird issues. Ideally, you’ll even want a dedicated 10GbE link purely for Ceph’s automatic maintenance traffic, so rebalancing doesn’t impact storage clients.

          The PGs is a separate issue. Each PG is like a disk partition. There’s some funky math and guidelines to calculate the ideal number for each pool, based upon disks, OSDs, capacity, replicas, etc. Basically, more PGs means that there are more (but smaller) places for CEPH to store data. This means that balancing over a larger number of nodes and drives is easier. It also means that there’s more metadata to track. So, really, it’s a bit of a balancing act.