  • So you consider the side effects an acceptable risk?

    Doctors who specialize in that field should know that better than you or me, no?

    But I’ll humor you anyway. You know what also has side effects? Going through puberty. And those side effects are permanent. If your puberty changes you in ways that don’t align with your gender identity, those side effects include a higher risk of dying by suicide, to name one example. So yeah, that seems like a risk that I, with my unqualified opinion, would be willing to take in order to make sure my child and their doctors have enough time to figure out who they are and what they need.


  • The algorithm is actually tailored to find out if/when you fall asleep while watching videos, and then recommends longer videos in autoplay when it believes you are, because they’ll get to play you more ads and cash out more.

    You might be misremembering / misinterpreting a little there. This behavior is not intentional, it’s just a side effect of how the algorithm currently works. Showing you longer videos doesn’t equate to showing you more ads. On the contrary, if you get loads of short videos you’ll have way more opportunities to see pre-roll ads, but with longer videos you’re limited to just the mid-roll spots in that one video. So YouTube doesn’t really have an incentive to make it work like that, it’s just accidental.
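
    Quick back-of-envelope sketch of that incentive, in Python. The ad frequencies here are made-up assumptions for illustration, not YouTube’s actual numbers:

    ```python
    # Back-of-envelope comparison (all numbers are invented assumptions,
    # not YouTube's real ad logic): one hour of watch time split across
    # many short videos vs. spent on one long video.

    def preroll_ads(total_minutes: int, video_length: int) -> int:
        """Assume one pre-roll ad per video started."""
        return total_minutes // video_length

    def midroll_ads(total_minutes: int, minutes_per_slot: int) -> int:
        """Assume one mid-roll slot roughly every N minutes of one long video."""
        return total_minutes // minutes_per_slot

    hour = 60
    print(preroll_ads(hour, video_length=5))       # 12 ad breaks across short videos
    print(midroll_ads(hour, minutes_per_slot=10))  # 6 ad breaks in one long video
    ```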

    Here’s the Spiffing Brit video on this, which I think you might have gotten this idea from: https://youtu.be/8iOjeb5DTZI

    Edit: to be clear, I fully agree that YouTube will do anything to shove ads down our throats no matter how effective they actually are. I’m just saying that the example you’ve brought up isn’t really one of them.


  • It is an algorithm that searches a dataset and when it can’t find something it’ll provide convincing-looking gibberish instead.

    This is very misleading. An LLM doesn’t have access to its training dataset in order to “search” it. Producing convincing-looking gibberish is what it always does; that’s its only mode of operation. The key is that the gibberish coming out of today’s models is so convincing that it actually becomes broadly useful.

    That also means that no, not everything an LLM produces has to have been in its training dataset; they can absolutely output things that have never been said before. There’s even research showing that LLMs are capable of forming actual internal models of real-world concepts, which suggests a deeper kind of understanding than the “stochastic parrot” moniker would have you believe.
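
    Here’s a toy illustration of that last point (a deliberately tiny stand-in, not how a real LLM works internally): even a trivial next-word model trained on two sentences can produce a sentence that appears in neither, because it builds output word by word from learned statistics instead of looking anything up.

    ```python
    import random
    from collections import defaultdict

    # "Train": count which word follows which in a tiny corpus.
    corpus = ["the cat sat on the mat", "the dog sat on the rug"]
    follows = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a].append(b)

    # "Generate": pick plausible next words one at a time. The corpus
    # itself is never searched here; only the learned statistics remain.
    word, output = "the", ["the"]
    while word in follows and len(output) < 8:
        word = random.choice(follows[word])
        output.append(word)
    print(" ".join(output))  # can print e.g. "the cat sat on the rug",
                             # which is in neither training sentence
    ```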

    LLMs do not make decisions.

    What do you mean by “decisions”? LLMs constantly make decisions about which token comes next; that’s really all they do. And in doing so, on a higher, emergent level, they can make any kind of decision you ask them to. The only question is how good those decisions are going to be, which in turn depends entirely on the training data, how good the model is, and how good your prompt is.
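
    To make “decision” concrete, here’s roughly what one such token step looks like, as a minimal sketch of softmax sampling. The three-word vocabulary and the scores are invented for the example:

    ```python
    import math
    import random

    # The model assigns a score (logit) to every token in its vocabulary...
    vocab  = ["yes", "no", "maybe"]
    logits = [2.0, 1.0, 0.5]  # invented scores for the example

    # ...the scores are turned into probabilities via softmax...
    exps  = [math.exp(l) for l in logits]
    probs = [e / sum(exps) for e in exps]

    # ...and the "decision" is made by sampling one token from them.
    token = random.choices(vocab, weights=probs, k=1)[0]
    print({v: round(p, 2) for v, p in zip(vocab, probs)}, "->", token)
    ```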


  • My “best we got” was in regards to the potential to become a lot worse because of shareholder pressure. Given that CD Projekt is a publicly traded company, GOG is much worse in that regard than Steam.

    I fully agree that GOG, as it currently is, could be the better product for you depending on your values, but its defenses against enshittification are objectively much worse than Steam’s*, and that’s all I was talking about.

    *That is, until Gabe dies, I guess, who knows what’ll happen then