cross-posted from: https://lemmy.ml/post/43810526

Actions by the president and the Pentagon appeared to drive a wedge between Washington and the tech industry, whose leaders and workers spoke out for the start-up.

Feb. 27, 2026

https://archive.ph/hwHbe

Sam Altman, the chief executive of OpenAI, said in a memo to employees this week that “we have long believed that A.I. should not be used for mass surveillance or autonomous lethal weapons.”

More than 100 employees at Google signed a petition calling on the tech giant to “refuse to comply” with the Pentagon on some uses of artificial intelligence in military operations.

And employees at Amazon, Google and Microsoft urged their leaders in a separate open letter on Thursday to “hold the line” against the Pentagon.

Silicon Valley has rallied behind the A.I. start-up Anthropic, which has been embroiled in a dispute with President Trump and the Pentagon over how its technology may be used for military purposes. Dario Amodei, Anthropic’s chief executive, has said he does not want the company’s A.I. to be used to surveil Americans or in autonomous weapons, saying this could “undermine, rather than defend, democratic values.”

  • Rekall Incorporated@piefed.social · 2 points · 6 hours ago

    has said he does not want the company’s A.I. to be used to surveil Americans or in autonomous weapons, saying this could “undermine, rather than defend, democratic values.”

    This is how we know this is all PR bullshit (perhaps crafted ad-hoc, but still essentially a propaganda operation).

  • melfie@lemy.lol · 1 point · 6 hours ago

    These companies are signaling their virtues for PR purposes, but it won’t change much. There are still permissively licensed open-weight models, and nothing is stopping governments from training their own specialized models. Given the surveillance apparatus the NSA is already known to have, there is clearly no shortage of technologists willing to work on shitty things. The NSA and other three-letter agencies are likely already using LLMs for surveillance, and there are probably already LLM-powered killing machines. Human-piloted drones have been committing war crimes with impunity for quite some time, so I’m not sure LLMs will fundamentally change the situation much.

  • gravitas_deficiency@sh.itjust.works · 34 points (2 down) · 19 hours ago

    This was the red line for the techbros? This was a bridge too far? Don’t get me wrong, it’s good that they didn’t fold on this point… but fuck, it would have been nice if they had taken exception to any of the thousands of red lines the regime has crossed up until now.

    • Echo Dot@feddit.uk · 5 points · edited 12 hours ago

      They’re all invested in each other; a threat to one is a threat to all, and up until now the regime hasn’t threatened their investments.

      Seriously, there’s a graph somewhere showing who’s invested in what, and basically it’s all just one thing now. I don’t know why they maintain the charade of being separate companies.

      And the reason they don’t want their technology being used to kill people is that they don’t trust the administration to keep it to foreign countries in the Middle East, where no one cares what happens. They’ll use it in the United States, and everyone will know whose technology is powering their drones.

      All that’s happening is that financial self-interest and ethics both give the same answer in this scenario.

  • Zwuzelmaus@feddit.org · 25 points · edited 20 hours ago

    Dario Amodei […] said he does not want the company’s A.I. to be used to surveil Americans or in autonomous weapons, saying this could “undermine, rather than defend, democratic values.”

    This is absolutely reasonable and I support this position.

    Sam Altman, the chief executive of OpenAI, said in a memo to employees this week that “we have long believed that A.I. should not be used for mass surveillance or autonomous lethal weapons.”

    But I don’t trust this guy, who regularly shows that he wants to rule the whole world by means of his own AI.

    • partofthevoice@lemmy.zip · 0 points · 12 hours ago

      This stuff is really scary when you think about it. If we keep getting closer to a reality where technology can silently monitor your every thought, with analysis and automation becoming ever more efficient, what’s bound to happen when the only thing stopping it from being used against us is moral standing? Eventually, someone somewhere will be able to build something so trivially that it tips the scales in their favor, as long as they lack the moral standing not to do so. Technology is a unique kind of threat, especially given the glorification so often attached to its innovation. Skepticism could have been applied earlier.

  • brucethemoose@lemmy.world · 10 points (1 down) · 17 hours ago

    Yeah… Microsoft and Google have a list of employees to fire now.

    Trump will back off to some extent, to avoid inflaming stock markets (and his Big Tech friends heavily invested in Anthropic).

    And Anthropic will fire a few people and make money somehow.

    That’s about it.

  • inari@piefed.zip · 16 points (1 down) · 19 hours ago

    This is like Alien vs. Predator: whoever wins, we all lose.

  • Casterial@lemmy.world · 6 points (1 down) · 18 hours ago

    Trump wants to use Grok for all things government, but isn’t Grok one of the most biased and worst-performing AIs?