(They/Them) I like TTRPGs, history, (audio and written) horror and the history of occultism.

  • 2 Posts
  • 60 Comments
Joined 5 months ago
Cake day: January 24th, 2025





  • There’s this app on F-Droid called WikWok. It basically presents you with random Wikipedia articles in a TikTok-style feed.

    If I were you, I’d download it and scroll until something that you find interesting appears and then read a bit of it. Then ask yourself a question.

    This usually gets me up and pacing, and once I’m pacing I want to fidget with stuff so I go do chores.

    May not work for you, but that’s what I got.






  • There’s a conversation that could be had about how there are no truly public platforms on the web. Ultimately, everywhere you can speak is owned by someone, and any community you build exists at their mercy. This can exert a lot of pressure on a community’s standards and beliefs, and when I started using the internet, abusing this was a major faux pas.

    However, that conversation requires a lot of nuance and patience. You are kind of transparently posting this in response to a moderator in another community removing your posts. If you’d like to complain about that, there’s actually a community specifically for that.

    By the by, free speech complaints have become strongly associated with certain political movements as dog whistles. You might want to look into that and make sure you want to present that image.



  • I used to drink an inhuman amount of caffeine. It made starting my meds kind of hard, because the caffeine started actually affecting me like it’s supposed to.

    So I was suddenly very jittery and nervous. For a bit I thought it was the medication, but then one day when I was making myself a cup of black tea I stopped and went, “…hey, wait, caffeine?”

    Weaning myself off of it was brutal. I started by trying to drink just one cup of tea a day, then switched to green tea and very gradually decreased the amount of caffeine. I still occasionally get cravings, but luckily I can trick my body by drinking decaf tea.

    It made me so fucking cranky, by the way. Caffeine withdrawal sucks.





  • Ideally or practically? Those are very different conversations.

    Practically, there’s not a lot that can be done. In the US, there’s not a good way for someone like that to continue living.

    I also will note that the phrasing of your last two sentences is kind of unpleasant. I’m not sure if that’s your intent, but it creates the implication that your value as a worker is the main contributor to your value to society. I don’t think that’s the case: I think it’s possible for someone to not work and still contribute a lot to the happiness and well-being of a local community. Also, part of what makes humans special is that even if someone doesn’t contribute to the overall needs of society, we will still take care of them out of love. That we love other people is a sufficient foundation for their existence.



  • I’m not sure why so many people begin this argument on solid ground and then hurl themselves off into a void of semantics and assertions without any way of verification.

    Saying, “Oh it’s not intelligent because it doesn’t have senses,” shifts your argument to proving that senses are a prerequisite for intelligence.

    The problem is that an LLM isn’t made to do cognition. It’s not made for analysis. It’s made to generate coherent human speech. It’s an incredible tool for doing that! Simply astounding, and an excellent example of how well a trained model can adapt to a task.

    It’s ridiculous that we managed to get a probabilistic software tool which generates natural language responses so well that we find it difficult to distinguish them from real human ones.

    …but it’s also an illusion with regards to consciousness and comprehension. An LLM can’t understand things for the same reason your toaster can’t heat up your can of soup. It’s not for that, but it presents an excellent illusion of doing so. Companies that are making these tools benefit from the fact that we anthropomorphize things, allowing them to straight up lie about what their programs can do because it takes real work to prove they can’t.

    Average customers will engage with an LLM as if it were doing a Google search, reading the various articles and then summarizing them, even though it’s actually just completing the prompt you provided. The proper way to respond to a question is with an answer, so it will always produce one unless a hard-coded limit overrides that. There will never be a way to make an LLM that won’t create fictitious answers to questions, because it can’t tell the difference between truth and fantasy. It’s all just part of its training data on how to respond to people.

    I’ve gotten LLMs to invent books, authors and citations when asking them to discuss historical topics with me. That’s not a sign of awareness; it’s proof that the model is doing what it’s intended to do, which is exactly the problem, because it’s being marketed as something that could replace search engines and online research. (The toy sketch below shows that prompt-completion behavior in miniature.)
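
    To make the “just completing the prompt” point concrete, here is a minimal, purely illustrative Python sketch, assuming a hand-written next-token table can stand in for a model. Every title, year and author it produces is invented for the example; a real LLM has billions of learned parameters instead of a lookup table, but it shares the same basic move of picking a statistically plausible next token, with no notion of whether the resulting “citation” is real.

    ```python
    import random

    # Toy stand-in for a language model: for each "previous token" it only knows
    # which tokens tend to follow and how often. Every title, year, and name in
    # this table is invented for the example; nothing in it encodes truth.
    NEXT_TOKEN_TABLE = {
        "<start>":    [("A", 0.6), ("One", 0.4)],
        "A":          [("key", 0.6), ("seminal", 0.4)],
        "One":        [("key", 0.7), ("seminal", 0.3)],
        "key":        [("source", 1.0)],
        "seminal":    [("source", 1.0)],
        "source":     [("is", 1.0)],
        "is":         [("'The", 1.0)],
        "'The":       [("Hidden", 0.5), ("Secret", 0.5)],
        "Hidden":     [("Grimoire'", 1.0)],
        "Secret":     [("Grimoire'", 1.0)],
        "Grimoire'":  [("(1872)", 0.6), ("(1891)", 0.4)],
        "(1872)":     [("by", 1.0)],
        "(1891)":     [("by", 1.0)],
        "by":         [("E.", 1.0)],
        "E.":         [("Blackwood.", 1.0)],
        "Blackwood.": [("<end>", 1.0)],
    }

    def complete(max_tokens: int = 20) -> str:
        """Build a 'response' by repeatedly sampling a likely next token."""
        tokens, current = [], "<start>"
        for _ in range(max_tokens):
            choices = NEXT_TOKEN_TABLE.get(current)
            if not choices:
                break
            words, weights = zip(*choices)
            current = random.choices(words, weights=weights, k=1)[0]
            if current == "<end>":
                break
            tokens.append(current)
        return " ".join(tokens)

    if __name__ == "__main__":
        # Prints something like: A key source is 'The Hidden Grimoire' (1872) by E. Blackwood.
        # The sentence is fluent and confident; the book does not exist.
        print(complete())
    ```

    The table is of course a caricature, but it makes the point: when fluent continuation is the only objective, a fabricated citation and a real one look identical to the model.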