


  • useful or at least neutral rather than being a negative on the society?

    I would recommend reading about the Protestant work ethic

    there is an assumption that is deeply embedded in our society, including in your question, that someone’s worth as a human is linked to the job they perform, how much money they make at that job, etc.

    it is so deeply ingrained that it’s one of those “fish don’t realize they’re wet” things - you probably never had a class in school where the teacher explicitly said “today we’re going to learn about why rich people are better than poor people, and employed people are better than unemployed people”.

    if you’re looking for a concrete idea for what can be done, read about universal basic income.

    but breaking out of that “not having a job means you’re a drain on society” mindset needs to come first. if you skip that step, UBI will seem to you like a “handout” given to people who don’t “deserve” it (I would also recommend reading about the concept of “deserving poor” vs “undeserving poor”)




  • With NHS mental health waitlists at record highs, are chatbots a possible solution?

    taking Betteridge’s Law one step further - not only is the answer “no”, the fucking article itself explains why the answer is no:

    People around the world have shared their private thoughts and experiences with AI chatbots, even though they are widely acknowledged as inferior to seeking professional advice.

    as with so many other things, “maybe AI can fix it?” is being used as a catch-all for every systemic problem in society:

    In April 2024 alone, nearly 426,000 mental health referrals were made in England - a rise of 40% in five years. An estimated one million people are also waiting to access mental health services, and private therapy can be prohibitively expensive.

    fucking fund the National Health Service properly so it can take care of the people who need it.

    but instead, they want to continue cutting its budget, and use “oh there’s an AI chatbot that you can use that is totally just as good as talking to a human, trust us” as a way of sweeping the real-world harm caused by those budget cuts under the rug.

    Nicholas has autism, anxiety, OCD, and says he has always experienced depression. He found face-to-face support dried up once he reached adulthood: “When you turn 18, it’s as if support pretty much stops, so I haven’t seen an actual human therapist in years.”

    He tried to take his own life last autumn, and since then he says he has been on an NHS waitlist.




  • tl;dw is that you should say “please” as basically prompt engineering, I guess?

    the theory seems to be that the chatbot will try to match your tone, so if you ask it questions in a tone like it’s an all-knowing benevolent information god, it’ll respond in kind, and if you treat it politely its responses will tend more towards politeness?

    I don’t see how this solves any of the fundamental problems with asking a fancy random number generator for authoritative information, but sure, if you want to be polite to the GPUs, have at it.

    like, several lawyers have been sanctioned for submitting LLM-generated legal briefs with hallucinated case citations. if you tack on “pretty please, don’t make up any fake case citations or I could get disbarred” to a prompt…is that going to solve the problem?
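
    for what it’s worth, here’s a minimal sketch of what that advice actually amounts to in practice - the model name, the exact wording, and “Smith v. Jones” are made-up placeholders for illustration, not anything from the video:

    ```python
    # a sketch of "politeness as prompt engineering": the polite version is just
    # extra tokens in the prompt. it may nudge the tone of the output, but the
    # model is doing the same thing either way - predicting plausible text - and
    # it can still hallucinate a case that doesn't exist.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    blunt = "Summarize the holding of Smith v. Jones."   # hypothetical case
    polite = "Please summarize the holding of Smith v. Jones. Thank you!"

    for prompt in (blunt, polite):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content)
    ```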


  • short answer: no, not really

    long answer, here’s an analogy that might help:

    you go to https://yourbank.com and log in with your username and password. you click the button to go to Online Bill Pay, and tell it to send ACME Plumbing $150 because they just fixed a leak under your sink.

    when you press “Send”, your browser does something like send a POST request to https://yourbank.com/send-bill-payment with a JSON blob like {"account_id": 1234567890, "recipient": "ACME Plumbing", "amount": 150.0} (this is heavily oversimplified, no actual online bank would work like this, but it’s close enough for the analogy)

    and all that happens over TLS. which means it’s “secure”. but security is not an absolute - things can only be secure with a particular threat model in mind. in the case of TLS, it means that if you were doing this at a coffee shop with an open wifi connection, no one else on the coffee shop’s wifi would be able to eavesdrop and learn your password.

    (if your threat model is instead “someone at the coffee shop looking over your shoulder while you type in your password”, no amount of TLS will save you from that)

    but with the type of vulnerability Jellyfin has, someone else can simply send their own POST request to https://yourbank.com/send-bill-payment with {"account_id": 1234567890, "recipient": "Bob's Shady Plumbing", "amount": 10000.0}. and your bank will process that as you sending $10k to Bob’s Shady Plumbing.

    that request is also over TLS, but that doesn’t matter, because that’s security for a different level of the stack. the vulnerability is that you are logged in as account 1234567890, so you should be allowed to send those bill payment requests. random people who aren’t logged in as you should not be able to send bill payments on behalf of account 1234567890.
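
    if it helps to see that in code, here’s a heavily simplified sketch of the difference - made-up Flask endpoints, not Jellyfin’s or any real bank’s actual code. the vulnerable version trusts whatever account_id the request body claims; the fixed version only trusts the account you actually logged in as:

    ```python
    from flask import Flask, abort, jsonify, request, session

    app = Flask(__name__)
    app.secret_key = "example-only"  # placeholder; needed for server-side sessions


    def process_payment(account_id, recipient, amount):
        """placeholder for the bank's actual payment logic"""
        print(f"paying {recipient} ${amount} from account {account_id}")


    # the vulnerable pattern: the endpoint believes whatever account_id the body
    # claims, so any random person can move money for account 1234567890
    @app.post("/send-bill-payment-vulnerable")
    def send_bill_payment_vulnerable():
        payment = request.get_json()
        process_payment(payment["account_id"], payment["recipient"], payment["amount"])
        return jsonify(status="ok")


    # the fixed pattern: the account comes from the logged-in session (set by the
    # server when you entered your password), not from the request body
    @app.post("/send-bill-payment")
    def send_bill_payment():
        if "account_id" not in session:
            abort(401)  # not logged in at all
        payment = request.get_json()
        process_payment(session["account_id"], payment["recipient"], payment["amount"])
        return jsonify(status="ok")
    ```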



  • oh, this one’s pretty easy, actually

    a normal AI tells you it’s safe to eat one rock per day

    an AI agent waits for you to open your mouth, and then throws a rock at your face. but it’s smart enough to only do that once a day.

    Casey Newton reviewed OpenAI’s “agent” back in January

    he called it “promising but frustrating”…but this is the type of shit he considers “promising”:

    My most frustrating experience with Operator was my first one: trying to order groceries. “Help me buy groceries on Instacart,” I said, expecting it to ask me some basic questions. Where do I live? What store do I usually buy groceries from? What kinds of groceries do I want?

    It didn’t ask me any of that. Instead, Operator opened Instacart in the browser tab and began searching for milk in grocery stores located in Des Moines, Iowa.

    At that point, I told Operator to buy groceries from my local grocery store in San Francisco. Operator then tried to enter my local grocery store’s address as my delivery address.

    After a surreal exchange in which I tried to explain how to use a computer to a computer, Operator asked for help. “It seems the location is still set to Des Moines, and I wasn’t able to access the store,” it told me. “Do you have any specific suggestions or preferences for setting the location to San Francisco to find the store?”

    they’re gonna revolutionize the world, it’s gonna evolve into AGI Real Soon Now…but also if you live in San Francisco and tell it to buy you groceries it’ll order them from Iowa.





  • definitely good news, although there’s a terrifying aspect to it.

    from the article, about Kim Davis’s attorney:

    Staver previously told the Lantern that his team’s goal is for the appeal to reach the U.S. Supreme Court and that, should the appeals panel rule against him, he would appeal to the higher court.

    The case would then provide the justices an opportunity to re-evaluate Obergefell v. Hodges, the 2015 decision that guaranteed same-sex couples marriage rights, on the same grounds that the court in 2022 used to overturn the federal right to abortion, Staver said.

    “This case underscores why the U.S. Supreme Court should overturn Obergefell v. Hodges, because that decision threatens the religious liberty of many Americans who believe that marriage is a sacred institution between one man and one woman. The First Amendment precludes making the choice between your faith and your livelihood.”

    SCOTUS can’t just randomly issue a press release that says “oh btw Obergefell v. Hodges is overturned”. they need a case to be teed up for them in order to do that. with the Dobbs case that overturned Roe v Wade for example, the Supreme Court decision came down in 2022, but it was regarding a Mississippi law that was passed in 2018. that law was a 15-week abortion ban, which clearly violated Roe. the Mississippi legislature had zero reason to pass it other than to provide a case that could work its way up to the Supreme Court and give them an excuse to ban abortion.

    Staver is the founder of a group of shitbags who call themselves the “Liberty Counsel”. the writing is on the wall that the Christofascists are gunning for marriage equality, and this case is one of several that give them a possible avenue with which to do it.







  • putting nukes into space is quite unlikely, even taking into account the current clusterfuck of the US government.

    it’s been thoroughly studied since the 1950s, for obvious reasons. the practical considerations put it somewhere between “not feasible” and “gigantic pain in the ass”.

    nuclear weapons need maintenance and upkeep, which the US military is already not terribly good at. a large part of this is that during the Cold War, maintaining nukes was seen as an important job within the military. in the past few decades, though, if you wanted career advancement in the military, you went to Iraq or Afghanistan for actual combat. working with nukes has become somewhat of a dead-end, career-wise.

    satellites in LEO have a finite lifespan - the tiny bits of atmospheric drag mean they need to spend a bit of fuel to maintain altitude. after the fuel runs out they’re de-orbited, usually into the South Pacific (one of the most believable theories about the purpose of the X-37 space plane is refueling CIA spy satellites). doing all of that with nuclear warheads on board would be extremely expensive (you’d be constantly launching replacements) as well as environmentally catastrophic (though of course the current government would only really care about the former)

    and on top of all that…the US simply doesn’t need nukes in space. there is the “nuclear triad” of land-based ICBMs, nuclear-armed bombers, and nuclear-armed submarines. that was established during the Cold War to ensure the US had the ability to strike back at Russia, even if Russia devastated the US with a first strike.

    the more realistic scenario in my mind is Kessler syndrome - a satellite-on-satellite collision creates debris, and that debris takes quite a while to fall out of orbit. in the meantime, it can create a chain reaction by colliding with other satellites. space is big, but LEO is much more crowded than it used to be, particularly with Starlink satellites, and those are cheaply manufactured and don’t always have reliable thrusters to allow them to move out of the way of any debris.

    In the first half of 2024, satellites belonging to SpaceX’s Starlink fleet performed almost 50,000 collision-avoidance manoeuvres.

    if it did happen, Kessler syndrome wouldn’t have much of an immediate impact, but instead a longer, slower-burning one. launches of new satellites into LEO would become less frequent due to the increased risk, and launches to higher orbits (GPS and geosynchronous satellites) would be riskier as well because they would need to pass through the debris cloud. so existing satellites would continue to work, but as they aged out and needed replacement, those replacements would be less likely to happen.