I’m in the process of wiring a home before moving in, and getting excited about running 10G from my server to my computer. Then I see 25G gear isn’t that much more expensive, so I might as well run at least one fiber line. But what kind of three-node Ceph monster will it take to make use of any of this bandwidth (plus run all my Proxmox VMs and LXCs in HA), and how much heat will I have to deal with? What’s your experience with high-speed homelab NAS builds and the electric bill shock that comes later? The Epyc 7002 series looks perfect but seems to idle high.
I have four Raspberry Pi 4s running, so that’s 15W max each or 60W max total. Usually they consume much, much less.
For my main server only… If HP iLO is to be believed, averaging around 130W.
Running: deluge, homarr, jellyfin, lidarr, navidrome, nextcloud, prowlarr, sonarr, whoogle and a minecraft server (VM) on TrueNAS Scale.
As for everything else (my router, switch, and DNS/DHCP server, which is a separate machine), you can add maybe another 50W on top of that…
5-node Proxmox cluster (each node on 40Gbps networking [yes, Ceph…], ~80TB of SSD storage, 180 cores, ~630GB of RAM total)
1 slow storage node (~400TB)
2x opnsense servers in HA
2x icx7750s
2x icx7450s, PoE to all the things… and 8Gbps internet.
Usually run ~15-17amps. So about 2000 watts. It’s my baby datacenter.
Sometime this month I’ll be installing a 25,000 kWh solar system on my roof, plus batteries.
As far as heat goes… It’s in the garage with an insulated door, a heat pump water heater, and a Tripp Lite AC unit in the bottom of the rack. The waste air (from the A/C) exhausts outside through a direct vent in the wall. The garage is downright tolerable to me for extended periods of time. The servers don’t complain at all.
Reading about all you guys being under 200W or whatever makes me wonder if it’s worth it. Then I realize that the cost to do even a quarter of what I do in the cloud is more than buying my solar.
Power costs for the rack would be about $100-120 a month, if it wasn’t for solar.
Edit: 75 LXC containers, 22VMs.
That’s a lot of power draw for so few VMs and containers. Any particular applications running that justify such a setup?
That’s the total draw of the whole rack, not indicative of power per VM/LXC container. If I pop onto management on a particular box, it’s only running at an average of 164 watts. Across all 5 processing nodes it’s actually 953 watts (average over the past 7 days). So if you want to quantify it that way, it’s about 10W per container.
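A quick sanity check on that per-guest figure, using only the numbers quoted in this thread (the 953 W node average and the 75 LXC + 22 VM count from the edit):

```python
# Back-of-envelope watts per guest from the figures above.
# 953 W is the stated 7-day average across the 5 compute nodes;
# 75 LXC containers + 22 VMs = 97 guests total.
compute_watts = 953
guests = 75 + 22

watts_per_guest = compute_watts / guests
print(round(watts_per_guest, 1))  # 9.8, i.e. roughly 10 W per container
```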
TrueNAS is using 420 watts (30 spinning disks, 400+ TiB raw storage, closer to 350 usable; assuming 7 watts per spinning drive, we’re at 210 watts in disks alone, and the spec sheet says 5 at idle and 10 at full speed). About 70 watts per firewall. That comes to 1,515 watts for all the compute itself.
The other ~1,000 watts goes to switches and PoE (8 cameras, 2 HDHR units, a time server and clock module, whatever happens to be plugged in around the house using PoE). Some power is also lost in the UPS, because conversions aren’t perfect. Oh, and the network KVM and pullout monitor/keyboard.
I think the difference here is that I’m taking my whole rack into account, not looking at the power cost of just a server in isolation but also all the supporting stuff like networking. Max power draw on an icx7750 is 586 watts, typical is 274 according to the spec sheet, and I have 2 of them trunked. Similar story with my icx7450s: 2 trunked, and max power load is 935W each, but in that case specifically for PoE. Considering I’m using a little shy of 1kW on networking, I have a lot of power overhead I’m not using. But I do have the 6x40Gbps modules on the 7750.
With this setup I’m using ~50% of the memory I have available. I’m 2-node redundant; if I was down 2 nodes I’d be at 80% capacity, with enough headroom to add about 60GB more of services before I’d have to worry about shedding load during critical failures.
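That failure-headroom claim roughly checks out, assuming the RAM is spread evenly across the 5 nodes (it may not be, so treat this as a sketch):

```python
# Memory headroom after losing 2 of 5 nodes, per the figures above.
total_ram_gb = 630
used_gb = 0.50 * total_ram_gb          # ~50% utilization today
survivors_gb = total_ram_gb * 3 / 5    # capacity left on the 3 surviving nodes

utilization_after = used_gb / survivors_gb
print(round(utilization_after, 2))     # 0.83, close to the ~80% quoted
print(round(survivors_gb - used_gb))   # ~63 GB free, matching "about 60GB more"
```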
Damn that’s a setup alright!
If you’re making use of the hardware it’s well worth it over anything cloud based for sure.
Just out of curiosity, what do you use all that storage for?
On the SATA SSD Ceph storage, that’s just live stuff on the containers/VMs. I’m at 20% usage of the 70TiB at the moment; I don’t use it all that heavily. Because of the way Ceph works it’s really ~23 TiB of usable space, with ~4.5 TiB written, since it writes 3 copies in my cluster.
On the slow storage node, it’s running TrueNAS with 28 spinning disks at 16TB each, 2 hot spares, and 2 SSDs each for cache, log, and metadata (36 bays total). That’s 342.8 TiB usable after RAIDZ nonsense, and I’m at 56% usage. I have literally everything I’ve done that I cared to save since 2005 or 2006 or so: backups of the Ceph storage (PBS), backups of computers I’ve had over the years, lots of Linux ISOs (105 TiB) archived, including complete sets of gaming (37 TiB) variants. Oh, and my full Steam library as well, which currently sits at 14 TiB. Flashpoint takes up a few TiB too…
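The 3x-replication math above can be sketched like this (capacities taken from the post; Ceph overheads beyond replication are ignored):

```python
# Effective capacity of a size=3 replicated Ceph pool.
raw_tib = 70          # quoted SSD pool capacity before replication
replicas = 3          # Ceph writes 3 copies in this cluster

effective_tib = raw_tib / replicas
print(round(effective_tib, 1))         # 23.3 TiB actually storable
print(round(0.20 * effective_tib, 1))  # 4.7 TiB written at 20% usage (~4.5 quoted)
```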
15W Raspberry Pi 4 + HDDs
Actually, if anyone has advice I’d love to hear it. My server is a B450 mobo with an Athlon 320GE. Even with no hard drives spinning it uses 65 watts; I don’t understand how that’s possible. 6 hard drives bring it to 85-90. Running TrueNAS, if it matters.
I keep it off most of the time to save power. I was expecting much lower wattage, especially with such a CPU, like under 20W idle. My assumption is the power supply can’t go low enough.
What CPU governor are you using? I saved about 40W of idle power draw switching to powersave from the default on a Ryzen 9 3900X.
Is that a BIOS or TrueNAS setting? The CPU is only 35 watts max.
It’s an operating system setting, so TrueNAS in your case.
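For reference, on a stock Linux box (including a TrueNAS SCALE shell) you can inspect and switch the governor like this; `cpupower` may need installing and root, depending on the system:

```shell
# Show the governor currently in use on core 0
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# List governors the driver supports (e.g. performance, powersave, schedutil)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors

# Switch every core to powersave (requires root; not persistent across reboots)
cpupower frequency-set -g powersave
```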
About 120W total for:
- 2 Proxmox hosts with 4 spinning disks between them
- Opnsense firewall
- 24 port GbE switch
- Fiber ONT
- Unifi AP
About 30 watts for an old Lenovo ThinkCentre with an i5-6500T and 8 GB RAM, in combination with a DAS and 2x2TB HDDs. I’m currently waiting for parts for the new server I’m building, a small N100 Mini-ITX board with 4x4TB HDDs that hopefully has similar power consumption.
I have a small setup for some self hosted apps and media.
- Beelink Mini S.
- 2 external 5TB drives.
- A USB fan used as an exhaust because the SSD inside gets a bit warm.
I think total power is about 30W.
Systems themselves are all around 5-20W, although the ones with mechanical HDDs obviously add their own idle usage.
Around 100 Watts for
- NAS with 4x3.5" HDD,
- Minisforum HM90 for Proxmox with 2x2.5" HDDs,
- 16 Port TP Link PoE Switch,
- TP Link router
- 2x Raspberry Pi 4b
But everything is gigabit speed. Doesn’t need more at home.
deleted by creator
Inside the bottom tray you have two cutouts in order to put two 2.5" HDDs/SSDs in
deleted by creator
- Fujitsu motherboard
- Intel Pentium G5600
- 6 HDDs (4x4TB, 2x8TB), spun down
- 2 SSDs for Proxmox
- 6 CTs and no VMs for now
It runs at 16W, mostly idle.
My pi costs probably around 20 a year lol.
125W (Less than $15/month) or so for
- Ryzen 9 3900X
- 64GB RAM
- 2x4TB NVMe (ZFS Mirror)
- 5x14TB HDD (ZFS RAID-Z2)
- 2.5GbE Network Card
- 5-port 2.5GbE Network Switch
- 5-port 1GbE PoE Network Switch w/ one Reolink Camera attached
I generally leave
powerManagement.cpuFreqGovernor = "powersave"
in my Nix config as well, which saves about 40W ($4/mo or so) for my typical load as best as I can tell, and I disable it if I’m doing bulk data processing on a time crunch.

My real server (Nextcloud/NAS/several more VMs) uses 28 watts on average. In addition, there is one Pi 4B running, and I don’t even know its wattage.
I’m planning on replacing the real server with a new one, with lots of cores and approx. 50 Watts then.
Pi 4s tend to stick around 5W.
I’ve got a 3 node Proxmox/ceph cluster with 10G, plus a separate Nas. They are all rack mount with dual PSU. Add in the necessary switching, and my average load is about 800w. Throw my desktop (also on 10G) into the mix and it runs 1.1kw.
That’s roughly $50-60 extra in electricity costs for me monthly.
Would be around 300€ in Germany, on a cheap contract. I’m limiting myself to one combined NAS/application server at the moment, with the others turned on only if I want to try something out.
deleted by creator
An average load of 800W is 0.8 kW × 24 h × 30 d = 576 kWh/month.
Which is over 172€ on a 30ct/kWh contract.
Wow! I’m paying 10.5¢/kWh for electricity at home here in the US; it’s a little below the national average but not dramatically.
Yeah, we pay a lot. We also have some of the most reliable electricity, on average approximately 10 minutes of downtime per year, so that’s kind of a (small) advantage you get for the premium price.
I’m afraid of dumping 500+ watts into an (air conditioned) closet. How are you able to saturate the 10G? I had some idea that Ceph speed is that of the slowest drive, so even SATA SSDs won’t fill the bucket. I imagine this is due to file redundancy rather than parity/striping spreading the data. I’d like to stick to lower-power consumer gear, but Ceph looks CPU-, RAM-, and bandwidth-hungry (both storage and network), plus latency-sensitive.
I ran Proxmox/Ceph over 1GbE on e-waste mini PCs and it was… unreliable. Now my NAS is my HA storage, but I’m not thrilled to beat up QLC NAND for hobby VMs.
My 10G is far from saturated, but I do try to keep things in RAM where possible. I figure that with 100GB of DDR4 in my main server, that should be able to feed a 10G link.
I’ve got ceph running on Intel Enterprise SSDs, so they are pretty quick.
I also tried running Ceph on 1GbE. I found it unreliable as well.
I use about the same. But that is more due to the hardware I’ve got being a bit older: 2 Dell R710s, 1 R510, and a custom-built server. Everything is still 1G. In my case electricity is not a big deal thanks to solar; we produce much more than we can use ourselves.