The Pentagon has its eye on the leading AI company, which this week softened its ban on military use.

  • funkforager@sh.itjust.works · 9 months ago

    Remember when OpenAI was a nonprofit first and foremost, and we were supposed to trust they would make AI for good and not evil? Feels like it was only Thanksgiving…

    • Dave@lemmy.nz · 9 months ago

      I mean, there was all that drama where the board, formed to prevent exactly this, kicked out the CEO pushing for it; then the board itself got booted out and replaced with a new one, and that CEO guy was brought back. So this was pretty much always going to happen.

      • hoshikarakitaridia@sh.itjust.works · 9 months ago

        And some people pointed it out even back then. There were signs that the employees were very loyal to Altman, but Altman didn’t address the board’s safety concerns. So stuff like this was just a matter of time.

        • deweydecibel@lemmy.world · 9 months ago

          People pointed this out as a point in Altman’s favor, too. “All the employees support him and want him back, he can’t be a bad guy!”

          Well, ya know what, I’m usually the last person to ever talk shit about the workers, but in this case, I don’t feel like this is a good thing. I sincerely doubt the employees who backed Altman took the ethics of the tool they’re creating into account. They’re all career-minded; they helped develop a tool that is going to make them a lot of money, and I guarantee the culture around that place is futurist as fuck. Altman’s removal put their future at risk. Of course they wanted him back.

          And frankly, I don’t think you can spend years of your life building something like ChatGPT without having drunk the Kool-Aid yourself.

          The truth is OpenAI, as a body, set out to make a deeply destructive tool, and the incentives are far, far too strong and numerous. Capitalism is corrosive to ethics; they have to be enforced by a neutral regulatory body.

          • SuckMyWang@lemmy.world · 9 months ago

            The engineers are likely seeing this from an arms-race point of view, possibly something like the development of the A-bomb, where it’s a race between nations and the people at the leading edge can see things we cannot. While money and capitalistic factors are at play, foreseeing your own possible destruction or demise if you fall behind China may be a motivating factor too.

      • Sasha@lemmy.blahaj.zone · 9 months ago

        Effective altruism is just camouflage for capitalism; it’s also really bad at being camouflage.

    • NounsAndWords@lemmy.world · 9 months ago

      I remember when they pretended to be that. The fact that the board got replaced when it tried to exert its own power proves it was a facade from the beginning. All the PR benefits of “taking safety seriously” with none of those pesky “safety vs profitability” concerns.

    • Moira_Mayhem@lemmy.blahaj.zone · 9 months ago

      It seems to be a trend that any service that claims not to be evil is just waiting for the right moment to drop that pretense.

    • guacupado@lemmy.world · 9 months ago

      I stopped having faith in nonprofits after seeing how much the successful ones pay their CEOs. They’re just businesses riding the low-tax train until they’re rich enough to not care anymore.

      • Hamartiogonic@sopuli.xyz · 9 months ago

        “In 1882 I was in Vienna, where I met an American whom I had known in the States. He said: ‘Hang your chemistry and electricity! If you want to make a pile of money, invent something that will enable these Europeans to cut each others’ throats with greater facility.'”

        Hiram Maxim

        I wonder if something similar happened with OpenAI.

        Forget about NFTs and marketing. Invent something that will enable these Europeans to cut each others’ throats more efficiently.

    • wooki@lemmynsfw.com · 9 months ago

      I wouldn’t be too worried; they’ve just made an overglorified word predictor and a blender of people’s art.

          • pinkdrunkenelephants@lemmy.world · 9 months ago

            And that totally justifies having a robot that does it so efficiently that people can produce deepfakes that are hard to debunk, robbing people of their ability to discern what is real and what is not.

            • wooki@lemmynsfw.com · 9 months ago

              Again, not new; stop grandstanding it as a new effect. Media outlets have been doing this since the dawn of journalism. The scientific process was created to combat it, political standards help reduce it, and laws make it financially unattractive. The fact remains: it’s not new.

              The only thing that is new is the financial gain from the hype of abusing the word AI, and the media not calling it out. But hey, here we are, back at the start. It’s not new.

  • assassinatedbyCIA@lemmy.world · 9 months ago

    Capitalism gotta capital. AI has the potential to be revolutionary for humanity, but because of the way the world works it’s going to end up being a nightmare. There is no future under capitalism.

  • SGG@lemmy.world · 9 months ago

    War, huh, yeah

    What is it good for?

    Massive quarterly profits, uhh

    War, huh, yeah

    What is it good for?

    Massive quarterly profits

    Say it again, y’all

    War, huh (good God)

    What is it good for?

    Massive quarterly profits, listen to me, oh

  • Everythingispenguins@lemmy.world · 9 months ago

    Anonymous user: I have an army on the Smolensk Upland and I need to get it to the Low Countries. Create the best route to march them.

    ChatGPT: …Putin, is that you again?

    Anonymous user: эн

  • ArmokGoB@lemmy.dbzer0.com · 9 months ago

    Finally, I can have it generate a picture of a flamethrower without it lecturing me like I’m a child making finger guns at school.

  • Alto@kbin.social · 9 months ago

    So while this is obviously bad, did any of you actually think for a moment that this was stopping anything? If the military wants to use ChatGPT, they’re going to find a way whether or not OpenAI likes it. In their minds they may as well get paid for it.

    • NounsAndWords@lemmy.world · 9 months ago

      You mean the military with access to a massive trove of illegal surveillance (aka training data), and billions of dollars in dark money to spend, that is always on the bleeding edge of technological advancement?

      That military? Yeah, they’ve definitely been in on this one for a while.

  • AutoTL;DR@lemmings.world (bot) · 9 months ago

    This is the best summary I could come up with:


    OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.

    “We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs,” OpenAI spokesperson Niko Felix said in an email to The Intercept.

    Suchman and Myers West both pointed to OpenAI’s close partnership with Microsoft, a major defense contractor, which has invested $13 billion in the LLM maker to date and resells the company’s software tools.

    The changes come as militaries around the world are eager to incorporate machine learning techniques to gain an advantage; the Pentagon is still tentatively exploring how it might use ChatGPT or other large-language models, a type of software tool that can rapidly and dextrously generate sophisticated text outputs.

    While some within U.S. military leadership have expressed concern about the tendency of LLMs to insert glaring factual errors or other distortions, as well as security risks that might come with using ChatGPT to analyze classified or otherwise sensitive data, the Pentagon remains generally eager to adopt artificial intelligence tools.

    Last year, Kimberly Sablon, the Pentagon’s principal director for trusted AI and autonomy, told a conference in Hawaii that “[t]here’s a lot of good there in terms of how we can utilize large-language models like [ChatGPT] to disrupt critical functions across the department.”


    The original article contains 1,196 words, the summary contains 254 words. Saved 79%. I’m a bot and I’m open source!

  • annehathway12@kbin.social · 5 months ago

    It’s interesting to note OpenAI’s decision regarding the ban on using ChatGPT for “Military and Warfare” applications.