

It places you one year ago, before they rebranded to Rocq (obviously to stop the puns)


If you are writing a parser in Haskell, just use Happy and get it over with


I experience something similar on a Vega 56, but it doesn’t happen under generic high loads, it happens only


I like many of your points, but your comment is facetious.
You said it yourself, “it’s good for someone trying to bang out scripts”… and that’s it, that’s the main point, that’s the purpose of Python. I will argue until my dying day that Python is a trillion times better than sh/bash/zsh/fish/bat/powershell/whatever for writing scripts, in all aspects except availability, and if that’s a concern, the only options are the old Unix shell and bat (even with powershell you never know if you are stuck on PS 5 or can use PS 7).
I have a Python script running 24/7 on a Raspberry Pi that listens on some MQTT topics and reacts accordingly, asynchronously. It uses like 15 kiB (literally less than 4 pages) of RAM, mostly for the interpreter, and it’s plenty responsive. It uses about two minutes of CPU time a day. I could have written it in Rust or Go, I know enough of both to do it, and it would have been faster and more efficient, but it would have taken three times as long to write, and it would have been a bitch to modify. I could have done it in C and it would have been even worse. For that little extra efficiency it makes no sense.
You argue it has no place in mainstream software, but that’s not really a matter of Python, more a matter of bad software engineers. OK, cool that you recognise the issue, but I’d rather you went after the million people shipping a full browser in every GUI application than after the guys wasting 10 kiB of your RAM to run Python. And even in that case, it’s not an issue of JavaScript, but an issue of bad practices.
P.S. “does one thing well” is a smokescreen to hide doing less stuff; you shouldn’t base your whole design philosophy on a quote from the 70s. That is the kind of shit SystemD haters shout, while running a display server that also manages input, OpenGL, a widget toolkit, remote desktop, and the entire printer stack. The more a high-profile tool does, the less your janky glue-code scripts need to do.
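For what it’s worth, the core of a “listen on some MQTT topics and react” script is just a topic dispatcher. Here is a minimal sketch of that pattern (topic names and handlers are invented for illustration; a real version would call `dispatch()` from the `on_message` callback of an MQTT client such as paho-mqtt):

```python
# Minimal MQTT-style topic dispatcher. Topic names and handlers below
# are made up for illustration; a real script would feed dispatch()
# from an actual MQTT client library (e.g. paho-mqtt).

def topic_matches(pattern: str, topic: str) -> bool:
    """MQTT-style matching: '+' matches one level, '#' matches the rest."""
    p, t = pattern.split("/"), topic.split("/")
    for i, part in enumerate(p):
        if part == "#":
            return True
        if i >= len(t) or (part != "+" and part != t[i]):
            return False
    return len(p) == len(t)

HANDLERS = {}

def on(pattern):
    """Decorator that registers a handler for a topic pattern."""
    def register(fn):
        HANDLERS[pattern] = fn
        return fn
    return register

def dispatch(topic, payload):
    """Call every handler whose pattern matches the incoming topic."""
    for pattern, fn in HANDLERS.items():
        if topic_matches(pattern, topic):
            fn(topic, payload)

@on("zigbee2mqtt/+/state")  # hypothetical topic, for illustration
def log_state(topic, payload):
    print(f"{topic}: {payload}")
```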


I’ll be honest, I think modern Python is cool. You just need to accept that it has some limitations by design, but they mostly make sense for its purpose.
It’s true that the type system is optional, but it gets more and more expressive with every version, it’s honestly quite cool. I wish Pylance were a bit smarter though, it sometimes fails to infer sum types in if-else statements.
After a couple large-ish personal projects I have concluded that the problem of python isn’t the language, but the users.
On the other hand, C’s design is barren. Sure, it works, it does the thing, it gives you very low-level control. But there is nothing of note in the design, aside from some quirks of the specification. Being devoid of innovation is both its strength and its weakness.
In this context “weight” is a mathematical term. Have you ever heard the term “weighted average”? Basically it means calculating an average where some elements are more “influential/important” than others; the number that indicates the importance of an element is called a weight.
One oversimplification of how any neural network works could be this: each neuron computes a weighted average of its inputs, and layers of neurons feed into one another.
Training an AI means finding the weights that give the best results, and thus, for an AI to be open-source, we need both the weights and the training code that generated them.
Personally, I feel that we should also have the original training data itself to call it open source, not just weights and code.
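The weighted-average idea above, in a few lines (the numbers are made up):

```python
# Weighted average: each value's contribution is scaled by its weight,
# and the result is normalised by the total weight.
def weighted_average(values, weights):
    assert len(values) == len(weights)
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# The plain average of 1 and 3 is 2; weighting 3 three times as
# heavily pulls the result toward it: (1*1 + 3*3) / (1 + 3) = 2.5
weighted_average([1, 3], [1, 3])
```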
I would like to interject for a moment. This statement is technically true but disingenuous and facetious.
While it’s true that Linux is just the kernel, what most people refer to as Linux is actually the operating system GNU/Linux, or, as RMS would now call it, GNU plus Linux, or sometimes a less GNU-dependent, but mostly GNU/Linux-compatible OS, or, as I have literally just now come to call it, */Linux.
Moreover, a modern */Linux system is expected to be based on SystemD, unless explicitly avoiding it due to some technical constraint or some desired feature of another init system. One could come to call this SystemD/Linux.
And lastly, this kind of use case would be the perfect match for a Wayland shell, as opposed to an X11 shell: it would be more efficient, and would give the shell more freedom in the management of windows.
As a result, when asking about a Linux phone, we could expect one is talking about a phone running a SystemD+Wayland/Linux OS, or at least a mobile-focused */Linux OS.
The Android kernel is a (largely downstream) fork of the Linux kernel, but the Android OS is in almost no way compatible with any */Linux OS; it’s instead its own, completely different OS.
What episode is that?
The server in question is a Raspberry Pi with 4 gigabytes of RAM, so I will need to use containers very sparingly. Basically I’m using podman quadlets only for those services that really only come in containers (which for now means only CodiMD, Overleaf, and zigbee2mqtt), and I’m running everything else on metal. But even with containers, I would still need to manage container configurations, network, firewall, file-sharing permissions, etc., just like I did without containers.
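For anyone who hasn’t seen quadlets: you drop a small unit-like file into `~/.config/containers/systemd/` and podman generates a systemd service from it. A rough sketch for one of those services might look like this (the image name, volume path, and device are illustrative, check the project’s own docs):

```ini
# ~/.config/containers/systemd/zigbee2mqtt.container
# Podman turns this into a systemd service on daemon-reload;
# start it with: systemctl --user start zigbee2mqtt
[Unit]
Description=zigbee2mqtt

[Container]
Image=docker.io/koenkk/zigbee2mqtt:latest
Volume=%h/zigbee2mqtt/data:/app/data
AddDevice=/dev/ttyUSB0

[Service]
Restart=on-failure

[Install]
WantedBy=default.target
```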


I think that someone already tried (and failed) to make a wristband thingy in the past, so they probably can’t patent it. That is, unless they went out of their way to patent the sensor technology itself, or the UX, instead of the concept of a wristband thingy.
Well, I don’t use most of their stuff, because I mostly run self-hosted stuff that either doesn’t need their proxy or violates their content policies (you can’t serve movies/video over their proxy, which is reasonable). But if I wanted to, I already have all of that at my disposal, without any extra money.


I haven’t read the article yet, but I just wanted to say a couple of things.
First of all, I keep noticing people around me with bulky glasses that look like they came out of the DEVO Peek-a-Boo! video, and all I can think is that if I were Facebook I would use my power to influence fashion towards bulky glasses, to make my glasses look sleek by comparison.
Second, it sucks that the wristband thing is being tied to bullshit AI glasses. I would love to see it as a regular input device for PCs and smartphones.
I am using my own domain. Best 10€ ever spent (maybe after Terraria). For just 10€ I get a .org domain name and all the DNS records I want, and I get pampered by Cloudflare all the time…
“Oh, you want a distributed reverse proxy? You want an edge cache? You want TLS without managing a certificate? Block AI crawlers at the proxy? Even more stuff? Well guess what, we already make a bajillion dollars from big tech, so you, the little guy, can have all of that included in your 10€”


It’s an analogy; it has to be similar in principle, not in numbers. A subscription to ChatGPT also costs less than what gamblers spend at the slots. But whatever, I don’t care enough to keep arguing.


I specifically said “full-sized”: a PC with a modern GPU and more than 32 GB of VRAM is not a regular computer that most gamers have access to. If you are running a 7B model on a GTX 1080 or even an RTX 3060, you are not running a full LLM like the ones you would get from a subscription service.


If you count only the cost to you, maybe it doesn’t consume water, but your toy still guzzled lakes while it was training. Plus, the hardware to run a full-sized LLM is expensive, so you bragging about how it costs nothing is like a millionaire preaching to gamblers that it’s better to just be rich than to try to win at the slots.
That’s, like… its purpose. Compilers always have a frontend and a backend. Even when a compiler is made entirely from scratch (like Java’s or Go’s), it is split between frontend and backend; that’s just how they are made.
So it makes sense to invest in just a few highly advanced backends (LLVM, GCC, MSVC) and then just build frontends for those. Most projects choose LLVM because, unlike the others, it was purpose-built to be common ground, but it’s not a rule. For example, there is an in-development Rust frontend for GCC.
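The split can be illustrated with a toy “compiler” (entirely invented for illustration): the frontend parses source into a tiny IR, and interchangeable backends consume that same IR. Real compilers have exactly this shape, just with vastly richer IRs (LLVM IR, GCC’s GIMPLE, …).

```python
# Toy illustration of the frontend/backend split.

def frontend(src: str) -> list:
    """Frontend: parse 'a + b + c' into a tiny stack-machine IR."""
    terms = [int(t) for t in src.split("+")]
    ir = [("push", terms[0])]
    for t in terms[1:]:
        ir += [("push", t), ("add",)]
    return ir

def backend_eval(ir):
    """'Backend' 1: interpret the IR directly."""
    stack = []
    for op in ir:
        if op[0] == "push":
            stack.append(op[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[0]

def backend_c(ir):
    """'Backend' 2: emit C-ish source from the very same IR."""
    expr = []
    for op in ir:
        if op[0] == "push":
            expr.append(str(op[1]))
        else:
            b, a = expr.pop(), expr.pop()
            expr.append(f"({a} + {b})")
    return f"return {expr[0]};"
```

The point is that `frontend` knows nothing about either backend, and both backends know nothing about the source syntax; the IR is the only contract, which is exactly why one backend can serve many languages.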
Pissing is a scam to make you drink more, let it go


It’s digital audio, not analog; I doubt the GPU could ever be an issue. I suspect either a bug in the interaction between PipeWire and the drivers (on one side or the other), or an update that changed PipeWire’s default sampling rate.
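If it is the sampling rate, pinning it is a small drop-in config (the keys below are PipeWire’s context properties as I remember them, double-check against the PipeWire docs, and set the rate to whatever the DAC expects):

```ini
# /etc/pipewire/pipewire.conf.d/99-rate.conf  (or the per-user
# equivalent under ~/.config/pipewire/). Forces the graph clock rate.
context.properties = {
    default.clock.rate          = 48000
    default.clock.allowed-rates = [ 44100 48000 ]
}
```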
ISO/OSI is a neatly separated model, mostly used in theory.
In practice, actual network stacks are often modeled after a simpler model called TCP/IP, which despite the name is not actually TCP-specific.
Here’s the general description and the correspondence to ISO/OSI:

| TCP/IP layer | Role | OSI layers |
| --- | --- | --- |
| Link | local network access (Ethernet, Wi-Fi) | 1–2 |
| Internet | addressing and routing (IP) | 3 |
| Transport | end-to-end delivery (TCP, UDP) | 4 |
| Application | everything else (HTTP, DNS, …) | 5–7 |

Or, you can just not care about how the actual software stack is separated, and continue to use the more complete model, knowing that everyone will understand what you mean when you say “layer 2/3/4” anyway.
Plus, some could say that the TCP/IP model is equally unfit because the Linux network subsystem doesn’t care about layers.
Edit: I hope the formatting of that table isn’t broken on your client, because it is on mine