• 12 Posts
  • 712 Comments
Joined 2 years ago
Cake day: June 22nd, 2023



  • It looks like they ran the test case and triggered the crash. Therefore the issue is not confabulated.

    Also, I’m unconvinced that the use of ffmpeg inside Google services is relevant to this. Google services can sandbox executables as much as they like, and given the amount of transcoding they do (say, for YouTube), it would surprise me if they weren’t using GPUs or hardware transcoders instead of ffmpeg anyway. Instead, they may care more about ffmpeg as used in browsers, TV boxes, and that sort of thing. That puts them in the same position as the Amazon person who said the ffmpeg devs could kill 3 major [Amazon] product lines by sending an email.

    If a zillion cable boxes get pwned because of a 0-day in ffmpeg, well, that’s unfortunate, but at least they did their due diligence. But if they get pwned because the vendor knew about the vulnerability and decided to deploy anyway, that potentially puts the vendor on the hook for a ton more liability. That’s what “ffmpeg can kill 3 major product lines” means. So “send the email” (i.e. temporarily flag that codec as vulnerable and withdraw it from the default build) seems like a perfectly good response from ffmpeg.

    The Big Sleep article is really good, I read it a week or so ago, sometime after this thread had died down.

  • Maybe you could describe what you mean by self-hosted and resilient. If you mean stuff running on a box in your house connected through a home ISP, then the home internet connection is an obvious point of failure that makes your box way less reliable than AWS, despite AWS’s occasional problems. On the other hand, if you only want to use the box from inside your house over a LAN, then it’s OK if the internet goes out.

    You do need backup power. You can possibly have backup internet through a mobile phone or the like.

    The next thing after that is redundant servers with failover and all that. Once you’re there and not doing an academic-style exercise, I think you want to host your stuff in actual data centers, preferably geo-separated ones with anycast. And for that you start needing enough infrastructure, like routable IP blocks, that you’re not really self-hosting any more.

    A less hardcore approach would be to use something like haproxy, maybe multiple instances behind round-robin DNS, to shuffle traffic between servers when individual ones have outages. This again gets out of self-hosting territory, though, I would say.
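    The skip-the-dead-backend logic a proxy like haproxy applies can be sketched in a few lines of Python. This is just a toy illustration of round-robin selection with failover; the backend addresses are made up, and a real haproxy setup would do its own active health checks.

```python
import itertools

def healthy_round_robin(backends, health):
    """Yield healthy backends in round-robin order, skipping dead ones.

    `backends` is an ordered list of addresses; `health` maps each
    address to True/False (hypothetical, stand-in for real health checks).
    """
    if not any(health[b] for b in backends):
        raise RuntimeError("no healthy backends")
    for b in itertools.cycle(backends):
        if health[b]:
            yield b

# Hypothetical backend pool; 10.0.0.2 is currently down.
backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
health = {"10.0.0.1": True, "10.0.0.2": False, "10.0.0.3": True}

picker = healthy_round_robin(backends, health)
first_four = [next(picker) for _ in range(4)]
print(first_four)  # the unhealthy backend never appears
```

    Round-robin DNS alone can’t do this skipping, which is why you still want a proxy (or several of them) in front doing the health checks.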

    Finally, at the end of the day, you need humans (that probably means yourself) available 24/7 for when something inevitably breaks. There have been various products like Heroku that try to encapsulate service applications so they can reliably restart automatically, but stuff still goes wrong.

    Every small but growing web site has to face these issues, and it’s not that easy for one person. I think the type of person who considers running self-hosted services that way has already done it at work and gotten woken up by PagerDuty in the middle of the night, so they know what it’s about and are gluttons for punishment.

    I don’t attempt anything like this with my own stuff. If it goes down, I sometimes get around to fixing it whenever, but not always. I do try to keep the software stable though. Avoid the latest shiny.




  • A small, high-CPU machine will have noisy fans; there’s no avoiding that. The fans have to be small in diameter, so they will spin at high RPM. Maybe you can say what you’re actually trying to run and make things easier for us.

    I gave up on this approach a long time ago and it’s felt liberating. My main personal computer is a laptop and for a while I had a Raspberry Pi 400 running some server-like things. All my bigger computational stuff is remote. So the software is self-hosted but not the hardware. IDK if that counts as self-hosting around here. But it’s much more reliable that way, with the boxes in multiple countries for geo separation.