• 8 Posts
  • 93 Comments
Joined 2 years ago
Cake day: September 7th, 2023


  • I’m so tired of reading this stupid argument. “People only dislike systemd because they’re afraid of change.” No, there are plenty of other concerning issues about it. I could probably write about a lot of problems with systemd (like the fact that my work laptop never fucking shuts down properly), but here’s the real issue:

    Do you really think it’s a good idea for Red Hat to have total control over the most important component of every mainstream distro in existence?

    Let’s consider an analogy: in 2008, Chrome was the shit. Everyone loved it, thought it was great and started using it, and adoption quickly climbed to ~20-30%. Alternatives started falling by the wayside. Then adoption accelerated thanks to shady tactics like bundling deals, silently changing users’ default browser, marketing it everywhere, and downranking websites that didn’t conform to its “standards” in Google search. Next, Chrome pushed all kinds of absurdly complex web standards, forcing the other browsers to abandon their own engines and adopt Chrome’s instead, because nobody could keep up with the development effort. And once Google achieved world domination, we started facing things like adblockers being banned, browser-exclusive DRM, and hardware attestation.

    That’s exactly what Red Hat is trying to pull with systemd. Same adoption story - it started out as a nice product, definitely better than the old default (SysVinit). Then Red Hat pushed adoption aggressively by campaigning for major distros to adopt it (Debian in particular). Then systemd started absorbing or displacing other standard utilities - udev was merged in, logind supplanted ConsoleKit - and Gnome was leveraged to make systemd a hard dependency.

    Now systemd is at the world domination stage. Nobody knew what Chrome was going to do when it was at this point a decade ago, but with the benefit of hindsight, we can clearly see that a monoculture was not a good idea. Are people so fucking stupid that they think systemd/Red Hat will buck that trend and be benevolent curators of the open source Linux ecosystem in perpetuity? Who knows what nefarious things they could possibly do…

    But there are hints, I suppose. By the way, check out Poettering’s new startup: https://news.ycombinator.com/item?id=46784572


  • It was developed and released during a time when people were obsessed with touch interfaces, thanks to deficient computing devices like phones and tablets. So many people were wholly convinced that these things were going to completely replace general-purpose computing that projects like Gnome, which was being run by Red Hat, had to follow along one way or another - though they probably did so willingly.

    In any case, I am SO glad those days are over. It was far, far worse than the AI hype that we have to put up with today.


  • I don’t think you understand the implications of what you’re suggesting.

    Forking a project as large as Gnome is a massive undertaking. Not only is it a lot of up-front work to implement the functionality, but you also have to keep up with all upstream changes - and there are likely at least a few Gnome developers who are paid to work on it full-time, so that’s a lot to maintain. And not only do you have to build it for your own distro, you also have to convince maintainers of other distros to adopt it and put it in their repositories; otherwise you have no community of users, which means no community of developers either.

    Forking Gnome is wildly impractical. It’s not a feasible suggestion to make at all.


  • I haven’t read through the other responses in the thread, but I don’t think the slightly old software is the problem. I think it has more to do with using older kernels, which means the latest hardware won’t always be supported (on the stable branch, at least - testing and unstable may have better hardware support).

    That may have changed with recent releases though - I haven’t used Debian for several years now. But if your hardware is supported then it’s a pretty solid choice.

    Other people sometimes mention that Debian isn’t as beginner-friendly as Ubuntu or Mint, but my experience has been similar to yours - I found Debian to be more user-friendly than Ubuntu, for example. Assuming the hardware works, of course - if it doesn’t, then it’s obviously a worse choice.


  • The number-one frustration, cited by 45% of respondents, is dealing with “AI solutions that are almost right, but not quite,” which often makes debugging more time-consuming. In fact, 66% of developers say they are spending more time fixing “almost-right” AI-generated code.

    Not surprising at all. When you write code yourself, you’re actually thinking about it, and that’s valuable context when you’re debugging. When you just blindly follow snippets you got from some random other place, you haven’t done that thinking and you don’t have that context.

    So it’s easy to see how this could lead to a net productivity loss. Either spend more time writing the code yourself and less time debugging, or let something else write it for you quickly but spend a lot of time debugging. And on top of it all, edge cases go unconsidered, and valuable design-requirement context can get lost too.


  • Yeah, I’ve seen a lot of those videos where they do things like {} + [], but why would anyone care what JS does in that case? Unless you’re a shit-ass programmer, you’re never going to be running code like that.

    By this same logic, memory safety issues in C/C++ aren’t a problem either, right? Just don’t corrupt memory or dereference null pointers. Only “a shit-ass programmer” would write code that does something like that.

    Real code has complexity. Variables are written and read all over the place, and if you have to audit several functions deep to make sure a variable can never end up holding some special value like that, then that’s a liability of the language that you’ll always have to work around carefully.



  • One principle I try to apply (when possible) comes from when I learned Haskell: keep the low-level logical computations of your program in pure, stateless functions. If their inputs are the same, they should always yield the same result. Then pass the results up to the higher level and perform your stateful transformations there.

    An example would be: do I/O at the high level (file, network, database I/O), and only do very simple data transformations there (or avoid them altogether if possible). Then do the majority of the computational logic in lower-level, modular components that have no external side effects. Also, pass all the data around using read-only records (for example, Python dataclasses with frozen=True) so you know nothing is being mutated between these modules.

    This boundary generally makes it easier to test the computational logic separately from the stateful logic. It doesn’t work all the time, but when you can structure a program this way, it’s much easier to understand.
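
    A minimal sketch of that split in Python (all the names here - Order, order_total, load_orders - are illustrative, not from any real codebase): I/O lives in the shell, the core is pure functions over frozen records.

    ```python
    import json
    from dataclasses import dataclass


    @dataclass(frozen=True)  # read-only record: fields can't be reassigned after creation
    class Order:
        item: str
        quantity: int
        unit_price: float


    # Pure core: same inputs always yield the same result, no I/O, no mutation.
    def order_total(orders: tuple[Order, ...]) -> float:
        return sum(o.quantity * o.unit_price for o in orders)


    def expensive_orders(orders: tuple[Order, ...], threshold: float) -> tuple[Order, ...]:
        return tuple(o for o in orders if o.quantity * o.unit_price > threshold)


    # Stateful shell: does the file I/O, then hands plain data to the pure core.
    def load_orders(path: str) -> tuple[Order, ...]:
        with open(path) as f:
            raw = json.load(f)  # expects a JSON list of {"item", "quantity", "unit_price"}
        return tuple(Order(**entry) for entry in raw)


    if __name__ == "__main__":
        orders = (Order("widget", 3, 2.5), Order("gadget", 1, 20.0))
        print(order_total(orders))           # 27.5
        print(expensive_orders(orders, 10))  # only the gadget order qualifies
    ```

    The pure functions can be unit-tested with literal tuples and no mocking, and because the records are frozen, a function several calls deep can’t quietly rewrite a field behind your back.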