

Maybe a manual dial to cycle through the available nearby vehicles then. The idea is just that there should be a way for it to be clear who you are contacting and where their vehicle is on the road relative to yours.




My interpretation of Book of Job is God and Satan are in a toxic relationship where they egg each other on to fuck with people so you shouldn’t trust either of them.


I use local models, and it barely doubles the electricity use of my computer while it’s actively generating, which is a very small proportion of the time I’m doing work; the environmental impact is negligible.
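To put rough numbers on that claim, here's a back-of-envelope sketch. All the wattages and the duty cycle below are illustrative assumptions, not measurements of any particular setup:

```python
# Rough estimate of the extra energy a local LLM adds to a workday.
# All numbers here are illustrative assumptions, not measurements.

IDLE_WATTS = 150        # assumed desktop draw during normal work
GENERATING_WATTS = 300  # assumed draw while actively generating (~2x idle)
DUTY_CYCLE = 0.02       # assumed fraction of work time spent generating
HOURS_PER_DAY = 8

# Extra draw, averaged over the whole workday
extra_watts = (GENERATING_WATTS - IDLE_WATTS) * DUTY_CYCLE
extra_kwh_per_day = extra_watts * HOURS_PER_DAY / 1000

print(f"Average extra draw: {extra_watts:.1f} W")
print(f"Extra energy per day: {extra_kwh_per_day:.3f} kWh")
```

Under these assumptions the overhead works out to a few watts averaged over the day, which is where the "negligible" conclusion comes from; with a bigger GPU or heavier usage the duty cycle term dominates.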


I’d guess that hypothetical AI cybersecurity verification of code would be like that: there are probably no bugs, but it’s not a totally sure thing. But even if you can’t have mathematical certainty there are no bugs, that doesn’t mean every, or even most, programs verified this way can be exploited.


We’re replacing that journey and all the learning, with a dialogue with an inconsistent idiot.
I like this about it, because it gets me to write down and organize my thoughts on what I’m trying to do and how. Otherwise I would just be writing code and trying to maintain the higher-level outline of it in my head, which will usually have big gaps I don’t notice until I’ve spent way too long spinning my wheels, or will otherwise fail to hold together. Sometimes an LLM will do things better than you would have, in which case you can just use that code. When it gives you code that is wrong, you don’t have to use it; you can write it yourself at that point, after having thought about what’s wrong with the AI approach and how what you requested should be done instead.


If there are actually no bugs, can’t that create a situation where it’s impossible to break it? Not to say this is actually a thing AI can achieve, but it doesn’t seem like bad logic.


Make sure you are also doing fun and nice things for yourself while sober, and not just reserving it for when you’re high. Expectations and associations make a big difference, if you’re taking a drug with the intention and belief that it will make you feel a certain way, that alone might make it work.
Except for alcohol, which will often make you feel like shit even if you expect it to make you feel good. Congrats on quitting.


That was a mod? It honestly feels like the finished version of what the game was meant to be


Maybe they were perfectly happy using these OSs before some stupid new feature was introduced
They should know, it’s only going to keep getting worse
I feel the same way but it seems like it must not be universal given the various murderers who were taking it.


There are ultra low powered LLMs, but even then you’re looking at at least 2GB, and the most typical graphics card has 8, so there’s going to be at least some impact. Their intelligence/capability scales hard with memory usage too, so for most things you might want to use one for, the smallest ones likely wouldn’t be good enough.
For example there’s a Rimworld mod that adds locally generated flavor text dialogue, using such a low powered model. But it’s a really simple feature that doesn’t affect actual gameplay at all. Games where the gameplay interacts with LLM output in any way are going to have higher hardware requirements, to the point where they will need to use the graphics card more for that part and less for the actual graphics; it’s enough of a bottleneck that anyone wanting to do this will basically need to design the game around it.
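As a rough sketch of how footprint scales with model size (parameter counts, quantization width, and the overhead factor below are illustrative assumptions, not benchmarks of any real model):

```python
# Rough VRAM footprint estimate for a quantized local model.
# Parameter counts and the overhead factor are illustrative assumptions.

def vram_gb(params_billions: float, bits_per_weight: int,
            overhead: float = 1.2) -> float:
    """Approximate VRAM in GB: raw weights plus ~20% for cache/buffers."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

for params in (1, 3, 7, 13):
    print(f"{params}B params @ 4-bit: ~{vram_gb(params, 4):.1f} GB")
```

Under these assumptions a 1–3B model fits in the ~2GB range mentioned above, while a 13B model already crowds out most of an 8GB card, leaving little room for the game’s actual graphics.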
I use this and contributed a small bug fix to it the other year. It’s pretty functional, although one thing I wish it had is a way to see quarantined/spam-labeled messages, instead of the filter just blackholing them.


I think they were implying LLMs being used in behavior/responses of game characters rather than vibe coded game programming


There should be some kind of automated certification for git repos, where if the described install process does not complete on a default install of the most popular OS, the software gets a big red “does not work” label.
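A minimal sketch of what such a checker might do: run each documented install step in order and emit the label based on whether they all succeed. The commands and label text here are placeholders, not a real certification scheme:

```python
# Hypothetical install-verification sketch: run the documented install
# steps in order and emit a pass/fail label. Steps are placeholders.
import subprocess

def certify(install_steps: list[str]) -> str:
    """Return a label based on whether every install step exits 0."""
    for step in install_steps:
        result = subprocess.run(step, shell=True)
        if result.returncode != 0:
            return "DOES NOT WORK"
    return "installs cleanly"

# Example with trivial placeholder commands:
print(certify(["true", "true"]))   # every step succeeds
print(certify(["true", "false"]))  # a step fails partway through
```

A real version would run this inside a fresh container image of the target OS so that it actually reflects a default install, rather than whatever happens to be on the CI machine.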


Ring, etc are either slow or not responding
Nice.


What the author seems to be proposing is something like true crime media but for environmental crimes.
And if you’re tempted to turn around and say that environmental crimes don’t happen because of individuals, but because of “the system”, I hear you. Social structures, ideologies and politics have a profound impact on human behaviour. Using this term – the system – can feel like a profound contribution to a difficult discussion, underpinned by the desire not to over simplify. But exactly who, or what, is the system?
A serial killer also lives in a society, and we can blame society for any hardships they may have faced. But if on a true-crime show I were to simply cite “the system” as a motive for murder, people would want me to be more precise. We understand that choices are involved, and motives are personal, not just systemic. Otherwise, wouldn’t we all be criminals?
Seems like a cool idea.


the bigger issue is that it’s being used in a GPL3 project which kind of isn’t allowed
I followed the links, and I think the original argument being referenced has been twisted around a bit, game-of-telephone style. The claim isn’t that the GPL prohibits inclusion of LLM-generated code; it’s more that they think AI trained on GPL code violates the license when it happens to reproduce that code exactly:
it is readily apparent that GitHub Copilot is capable of returning, verbatim, already extant code (although it does attempt to synthesise novel code based on its training data). This immediately raises the issue, what happens when that code (such as the previous example) is licensed under a copyleft license such as the GPL or AGPL? How is the matter of copyright in this instance resolved?
https://github.com/ZDoom/gzdoom/issues/3395
https://www.fsf.org/licensing/copilot/on-the-nature-of-ai-code-copilots#5. What About Copyright?
It might also be the case that the GPL prohibits LLM generated code somehow, I don’t actually know, just want to point out that no one has made an argument for that.


Mythologized history to serve their racist worldview:
Right, ancient Greece and Rome were actually quite diverse and the concept of “whiteness” didn’t have much meaning thousands of years ago. Race, as we know it, is a fairly recent category. But the far-right relies on this construct of Western civilization, which for them means white civilization and culture. So they craft a narrative that begins with Greece and Rome and then continues into the medieval period up through the emergence of modern Europe.
Pick an idea and roll with it. Most of them won’t work, but the only way to find out which will is experience, since nobody is going to tell you the truth about money-making methods.
No caffeine multiple days in a row. I often enjoy it, and I don’t think it’s really that bad for you, but I don’t like the way it adjusts my personality and state of mind, if that makes sense, and it’s easy to get addicted enough to start feeling like crap if you don’t have any.