• tal@lemmy.today
    8 days ago

    I don’t know: it’s not just the outputs posing a risk, but also the tools themselves

    Yeah, that’s true. Poisoning a model’s training corpus is at least a potential risk. There’s a whole field of security work out there now aimed specifically at LLMs.