Definition of can dish it but can’t take it

  • P03 Locke@lemmy.dbzer0.com · 4 hours ago

    open-weights aren’t open-source.

    This has always been a dumb argument, and it lacks any modicum of practicality. It amounts to rejecting 95% of the need because it isn’t 100% to your liking.

    As we’ve seen in the text-to-image/video world, you can train on top of base models just fine, create LoRAs for specialization, or convert them into quantized GGUFs at various precision levels.
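
    For a sense of scale, here’s roughly what “create a LoRA for specialization” looks like with the Hugging Face transformers + peft libraries; the base model name and hyperparameters are illustrative placeholders, not a recipe:

    ```python
    # Sketch only: attaching a LoRA adapter to an open-weights base model.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "meta-llama/Llama-3.1-8B"  # any open-weights base model (placeholder)
    model = AutoModelForCausalLM.from_pretrained(base)
    tokenizer = AutoTokenizer.from_pretrained(base)

    # LoRA freezes the base weights and trains small low-rank adapter
    # matrices instead, which is why it fits on a single consumer GPU.
    config = LoraConfig(
        r=16,                                 # adapter rank (assumed)
        lora_alpha=32,                        # scaling factor (assumed)
        target_modules=["q_proj", "v_proj"],  # adapters on attention projections
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically well under 1% of the base model
    ```

    From there it’s a normal training loop (or transformers’ Trainer) over your specialization data.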

    Also, you don’t need a Brazilian LLM, because today’s LLMs are all very multilingual.

    Spending $3000 on training is still really cheap, and depending on the size of the model, you can get away with training on a 24GB or 32GB card, which costs you only the price of the card and the electricity. LoRAs take almost nothing to train. Any university worth anything is going to have the resources to train a model like that. None of these arguments hold water.
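
    To put the 24GB claim in perspective, here’s a QLoRA-style back-of-envelope; every number below is a rough assumption, not a measurement:

    ```python
    # Rough VRAM estimate for LoRA-tuning an 8B-parameter model whose base
    # weights are quantized to 4-bit (QLoRA-style). Rules of thumb only.
    GIB = 2**30
    base_4bit   = 8e9 * 0.5 / GIB   # 4-bit weights: ~0.5 bytes per parameter
    adapter     = 40e6 * 2 / GIB    # ~40M LoRA params held in fp16 (assumed)
    optimizer   = 40e6 * 8 / GIB    # Adam state only for the trained adapter params
    activations = 4.0               # depends heavily on batch size / sequence length

    total = base_4bit + adapter + optimizer + activations
    print(f"~{total:.1f} GiB")      # ~8 GiB: comfortable headroom on a 24GB card
    ```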