• forrgott@lemmy.sdf.org · 2 days ago

    Well, in practice, no.

    Do you think any corporation is going to bother making a separate model for government contracts versus any other use? I mean, why would they? So unless you can pony up enough cash to compete with a lucrative government contract (and the fact that none of us can is, in fact, the whole point), the end result is that these requirements get adopted by the overwhelming majority of generative AI on the market.

    So in reality, no, this absolutely will not be limited to models purchased by the feds. Frankly, I think believing otherwise is dangerously naive.

    • MrMcGasion@lemmy.world · 1 day ago

      Based on the attempts at censoring AI output we’ve seen so far, I don’t see a way to actually do this without building a new model on pre-censored training data.

      Sure, they can tune models, but even “MechaHitler” Grok was still giving some “woke” answers on occasion. I don’t see how this doesn’t either destroy AI’s “usefulness” (not that there’s any usefulness there to begin with) or cost so much to implement that investors pull out: none of the AI companies are profitable, and throwing billions more at sifting through and filtering the training data pushes profitability even further away (if censoring all the training data is even possible).

    • itsame@lemmy.world · 1 day ago (edited)

      No. You would use a base model (e.g. GPT-4o) as the reliable language model and add a set of rules on top that the chatbot follows. Every company has its own rules, and the same mechanism is already widely used to add data like company-specific manuals and support documents. Not rocket science at all.
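
      A rough sketch of that rules-on-top setup (assuming the OpenAI Python client; the company name, rules text, and manual excerpt are made-up placeholders, not anything from a real deployment):

      ```python
      # Minimal sketch: a base model plus company-specific rules and reference
      # material supplied at request time. Nothing here retrains the model.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      # Hypothetical company rules the chatbot must follow.
      COMPANY_RULES = (
          "You are the support assistant for ExampleCorp. "
          "Only answer questions about ExampleCorp products. "
          "Never discuss pricing; direct those questions to sales@example.com."
      )

      # Company-specific reference text (in bigger setups this is usually
      # retrieved from a document store per query rather than pasted in wholesale).
      SUPPORT_MANUAL_EXCERPT = (
          "Manual excerpt: to reset the device, hold the power button for 10 seconds."
      )

      response = client.chat.completions.create(
          model="gpt-4o",
          messages=[
              # The base model itself is untouched; the rules and company data
              # ride along with every request as the system message.
              {"role": "system", "content": COMPANY_RULES + "\n\n" + SUPPORT_MANUAL_EXCERPT},
              {"role": "user", "content": "How do I reset my device?"},
          ],
      )
      print(response.choices[0].message.content)
      ```

      Whether those layered-on rules actually hold is exactly what the reply below disputes.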

      • forrgott@lemmy.sdf.org · 1 day ago

        There are so many examples of this method failing that I don’t even know where to start. The most visible, of course, was how that approach failed to stop Grok from “being woke” for, like, a year or more.

        Frankly, you sound like you’re talking straight out of your ass.

        • itsame@lemmy.world · 1 day ago

          Sure, it can go wrong; it is not foolproof. Just like building a new model can cause unwanted surprises.

          BTW, there are many theories about Grok’s unethical behavior, but this one is new to me. The explanations I was familiar with are unfiltered training data, no ethical output restrictions, programming errors or incorrect system maintenance, strategic errors (Elon!), and publishing before proper testing.