I’m having trouble staying on top of updates for my self-hosted applications and infrastructure. Not everything has auto-updates baked in, and some things you may not want to auto-update. How do y’all handle this? How do you keep track of vulnerabilities? Are there, e.g., feeds for specific applications I can subscribe to via RSS or email?
Thank you everyone for the helpful replies!
GitOps + Renovate.
Tools that let you work GitOps-style (everything is defined in text files in Git) include:
- Kubernetes
- NixOS
- to a lesser degree, Ansible
Here’s a nice starter template for running your own Kubernetes cluster via GitOps with Renovate pre-configured: https://github.com/onedr0p/cluster-template
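If you don’t want the full template, the Renovate half can start as a single repo-level config file. A minimal sketch (config:recommended is Renovate’s stock preset; check their docs for anything fancier):

cat > renovate.json <<'EOF'
{
  "extends": ["config:recommended"]
}
EOF

Renovate then opens PRs against your repo whenever a pinned image tag, chart, or dependency it recognizes has a newer release.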
For my Docker containers I use What’s up Docker (WUD), which not only alerts me when there is an update but also gives a link to the changes, so I can have a look at what’s happening!
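If anyone wants to try it, a rough sketch of running WUD (the image name and default port here are from memory, so double-check the WUD docs):

docker run -d --name wud \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 3000:3000 \
  getwud/wud

It watches the containers visible on the Docker socket and can send notifications to email, Discord, and the like.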
For my system itself… Just doing
sudo pacman -Syu
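If I want to preview what’s pending first, checkupdates from pacman-contrib lists available upgrades without touching the system:

checkupdates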
A blind -Syu isn’t great, though, because some updates can potentially break my EndeavourOS system… I sometimes keep an eye on the forum when I see critical changes like the kernel itself or Nvidia updates.

There are some tools to help, but things are sort of specific to particular aspects: Lynis for general systems, ntopng for networks, and such.
For 90% of stuff, though, you can just stick to stable repos and upgrade on a schedule and you’ll be alright.
upgrade all things by default
This is a bad idea for a number of reasons. The most obvious issue is that it doesn’t guarantee anything in the way of actually fixing vulnerabilities, because some project you use may not even be scanning their own work.
what’s the alternative? Write a PR yourself?
Yup. It’s really easy in most cases if you’re just bumping a dependency to the next minor release, but then it has to pass all the project’s CI tests and get an actual maintainer to tag it for release. That’s how open source works, though.
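A rough sketch of that flow (the repo, file, and version numbers are all made up, and in reality you’d fork first):

git clone https://github.com/example/project && cd project
git switch -c bump-libfoo
sed -i 's/libfoo==1.2.3/libfoo==1.2.4/' requirements.txt   # bump the pinned version
git commit -am 'Bump libfoo to 1.2.4'
git push -u origin bump-libfoo
gh pr create --fill   # GitHub CLI; --fill reuses the commit message as the PR text

From there it’s on the project’s CI and maintainers.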
That may work for a handful of projects. It’d be my full-time job if I did it for everything I run. Also, I might simply suggest that maintainers adopt Dependabot or an alternative before I spend time on manual changes. These things should be automated.
Well, a PR means an upstream fix for the project. If you want to scan all your locally running things, by all means change whatever you want, but it will just potentially get wiped out by the tool you mentioned if it’s running.
Dependabot is a tool for repos, not for applying local changes.
I’m aware, but then you mentioned “manual changes”, which connotes “local changes”. Putting up a PR with changes isn’t considered a manual anything.
This is also a great way to just break everything you’ve set up.
That’s a lot of FUD. topgrade just runs upgrades through every package manager you have; it doesn’t do the upgrades itself, bypassing the manager that installed something or the package authors.
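You can also check what it would touch before letting it loose; if I remember the flag right (see topgrade --help):

topgrade --dry-run   # print what each package manager would do, change nothing
topgrade             # the real run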
The issue is more that trying to upgrade everything at the same time is a recipe for disaster and a troubleshooting nightmare. Once you have a few interdependent services/VMs/containers/environments/hosts running, what you want to do is upgrade them separately, one at a time, then restart that service and anything that connects to it and make sure everything still works, then move on to updating the next thing.
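With Docker Compose, that one-at-a-time routine can be sketched roughly like this (the service names are made up):

for svc in db app proxy; do
  docker compose pull "$svc"
  docker compose up -d "$svc"
  docker compose ps "$svc"               # confirm it came back up
  docker compose logs --tail=20 "$svc"   # quick sanity read before moving on
done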
If you do this shotgun approach for the sake of expediency, what happens is something halfway through the stack of upgrades breaks connectivity with something else, and then you have to go digging through the logs trying to figure out which piece needs a rollback.
Even more fun if two things in the same environment have conflicting dependencies, and one of them upgrades and installs its new dependency version and breaks whatever manual fix you did to get them to play nice together before, and good luck remembering what you did to fix it in that one environment six months ago.
It’s not FUD, it’s experience.
I’ve been doing that for years. Rollbacks are very rare, to the point that it doesn’t make much of a difference whether I do them all at once or not, other than spending more time to do it.
If I wasn’t using containers for everything, sure. Otherwise it’s a bit of an excessive concern.
I have stuff in newreleases.io and also GitHub release RSS feeds in Nextcloud. I then sit down once a week and see what needs an update. Reboot when required.
95% of things I just don’t expose to the net; so I don’t worry about them.
Most of what I do expose doesn’t really have access to any sensitive info; at most an attacker could delete some replaceable media. Big whoop.
The only thing I expose that has the potential for massive damage is OpenVPN, and there’s enough of a community and money invested in that protocol/project that I trust issues will be found and fixed promptly.
Overall I have very little available to attack, and a pretty low public presence. I don’t really host any services for public use, so there’s very little reason to even find my domain/ip, let alone attack it.
You should try WireGuard if you haven’t before; it’s like a breath of fresh air.
Does “badly” count as a way?
I kinda keep an eye on the weekly roundup posts at https://selfh.st/ to know when I need to do patching.
No doubt there is a container I could run that would do it for me. I just can’t remember the name of it.
I don’t.
Yeah, hot take, but there’s basically no point in me keeping track of all that stuff, excessively worrying about the dangers of modernity, and sacrificing what spare time I have on watching an update counter go brrrr, when there are entire teams and agencies in charge of it.
I just run
unattended-upgrades
(on Debian), pin container image tags to only the major version number where available, rebuild containers twice a week, and go enjoy the data and media I built the containers and installed the software for.

I think the problem is that a lot of people are just running Flatpaks, Docker images, and third-party repos, which might not be getting timely updates.
I try to stick to Debian packages for everything as much as possible for this reason.

Regarding things like Docker images and Flatpaks, I mostly “solve” it by only running official images, or at least images from the same dev as the program, where possible.

But also, IMO there’s little to no reason to fear when using things like Flatpaks. Most exploits one hears of nowadays are of the kind “your attacker needs to get a shell into your machine in the first place”, or in some cases even “your attacker needs to connect to an instance of a specific program you are running, with a specific config”, so if you apply any decent opsec that’s already a very high barrier to entry.
And speaking of Debian, that does bring to mind the one beef I have with their packaging system: when you install a package, it starts the related services by default, without even giving you time to configure them.
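There is an escape hatch, for what it’s worth: during package operations, invoke-rc.d consults /usr/sbin/policy-rc.d, and an exit code of 101 means “don’t start services”. A rough sketch (the package name is a placeholder):

cat > /usr/sbin/policy-rc.d <<'EOF'
#!/bin/sh
exit 101   # tell invoke-rc.d that starting services is forbidden
EOF
chmod +x /usr/sbin/policy-rc.d
apt install some-package   # installs without starting its services
rm /usr/sbin/policy-rc.d   # remove it again, or nothing will ever autostart

It’s the same trick Debian’s Docker base images use to keep apt from launching daemons at build time.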
For most critical infrastructure, like my mail, I subscribe to the release and blog RSS feeds. My OSes send me update notifications via mail (apticron); those I handle manually. Everything else auto-updates daily.
You still need to check whether the software you use is still maintained and receives security updates. This is mostly done by choosing popular and community-driven options, since those are less likely to get abandoned.
That’s the neat part. I don’t!
I have automatic updates on everything, but if I actually spent time managing updates and vulnerabilities I’d have no time to do anything else in my life.
I subscribe to the release page of the repo in my RSS reader. Simple and effective.
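For GitHub projects the feed URL is predictable; point your reader at (owner/repo are placeholders):

https://github.com/OWNER/REPO/releases.atom

There are matching feeds for tags (/tags.atom) and branch commits (/commits/BRANCH.atom).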
That is a fantastic idea. Wtf how is this not commonplace? Or am I just way behind 😅
I’ve just started to delve into Wazuh… but I’m super new to vulnerability management at the home-lab level. I don’t do it for work, so 🤷🏼‍♂️
Anyway, the best suggestion is to keep all your containers, VMs, and hosts as updated as you can to remediate vulnerabilities that others have discovered.
Otherwise, Wazuh is a good place to start, but there’s a learning curve for sure.
Unless you have actual tooling (e.g. Red Hat errata plus some service on top of that), just don’t even try.
Stop downloading random shit from Docker Hub and GitHub. Pick a distro that has whatever you need packaged, install from the repositories, and turn on automatic updates. If you need stuff outside of repos, use first-party packages and turn on auto-updates. If there aren’t any decent packages, just don’t do it. There is a reason people pay Red Hat a shitton of money, and that’s because they deal with much of this bullshit for you.
At home, I simply won’t install anything unless I can enable automatic updates. NixOS solves much of it. Twice a year I need to bump the distro version, bump the Nextcloud release, and deal with deprecations, and that’s it.
I also highly recommend turning on automatic periodic reboots, so you actually get new kernels running…
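On NixOS the reboot part is (if I remember the option name right) system.autoUpgrade.allowReboot; on Debian-family boxes the same knobs are roughly these stock unattended-upgrades settings:

# /etc/apt/apt.conf.d/20auto-upgrades — turn periodic upgrades on
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

# /etc/apt/apt.conf.d/50unattended-upgrades (excerpt) — reboot when an update needs it
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:30";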
I just update every month or two, or whenever I remember. I use Docker/podman, and I set the version to whatever minor release I’m using, and manually bump after checking the release notes to look for manual upgrade steps.
It usually takes 5 min and that’s with doing one at a time.
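Concretely, the pin-and-bump routine might look like this (the image and tags are made up):

# the compose file or run command pins the minor line, not :latest
docker pull ghcr.io/example/app:1.24
# ...later: release notes for 1.25 look fine, no manual migration steps...
docker pull ghcr.io/example/app:1.25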