• FaceDeer@fedia.io · 22 hours ago

    Again, you should read the ruling. The judge explicitly addresses this. The Authors claim that this is how LLMs work, and the judge says “okay, let’s assume that their claim is true.”

    Fourth, each fully trained LLM itself retained “compressed” copies of the works it had trained upon, or so Authors contend and this order takes for granted.

    Even on that basis, he still finds that training an LLM does not violate copyright.

    And I don’t think the Authors’ claim would hold up if challenged, for that matter. Anthropic chose not to challenge it because it made no difference to their case, but in actuality an LLM doesn’t store its training data verbatim within its weights. The weights are orders of magnitude smaller than the training corpus, so it’s physically impossible to compress the text that much.
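
    You can check this with rough back-of-envelope numbers. Here's a quick Python sketch; the corpus size, parameter count, and bytes-per-token figures are illustrative round assumptions, not any particular model's actual specs:

    ```python
    # Back-of-envelope check: could an LLM store its training text verbatim?
    # All figures below are assumed round numbers for illustration.

    corpus_tokens = 2e12     # assumed training corpus: ~2 trillion tokens
    bytes_per_token = 4      # a token averages roughly 4 bytes of text
    params = 70e9            # assumed model size: 70 billion parameters
    bytes_per_param = 2      # 16-bit weights -> 2 bytes per parameter

    corpus_bytes = corpus_tokens * bytes_per_token   # ~8 TB of raw text
    model_bytes = params * bytes_per_param           # ~140 GB of weights

    ratio = corpus_bytes / model_bytes
    print(f"corpus: {corpus_bytes/1e12:.1f} TB, weights: {model_bytes/1e9:.0f} GB")
    print(f"required lossless compression ratio: {ratio:.0f}:1")
    # Good general-purpose text compressors manage roughly 4:1 on English
    # prose; ~57:1 is far beyond that, so the weights can't hold the corpus.
    ```

    Under those assumptions the weights would need to losslessly compress the text at something like 57:1, which no compression scheme comes close to achieving on natural-language text.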