Definition of can dish it but can’t take it

  • lacaio@mander.xyz · 14 hours ago

    Training LLMs is very costly, and open-weights aren’t open-source. For example, there are some LLMs in Brazil, but a notable case is a Brazilian student at the University of Düsseldorf who banded together with two other students of non-Brazilian origin to build a Brazilian Portuguese LLM, a 4B model. They used Google’s infrastructure to train it, I think, because training on low-VRAM hardware won’t work. It took many days and over $3,000. The model is called Tucano.

    I know that looks cheap given how many models are out there, but many national initiatives are eager to get into AI technology, and it’s costly.

    • P03 Locke@lemmy.dbzer0.com · 4 hours ago

      “open-weights aren’t open-source.”

      This has always been a dumb argument, and it really lacks any modicum of practicality. It rejects 95% of the need because it isn’t 100% to your liking.

      As we’ve seen in the text-to-image/video world, you can train on top of base models just fine, create LoRAs for specialization, or convert them into various quantized GGUF formats.
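      For what it’s worth, attaching LoRA adapters to an open-weights base model takes only a few lines. A minimal sketch, assuming the Hugging Face transformers and peft libraries; the model name and hyperparameters are illustrative placeholders, not a recommendation:

      ```python
      # Minimal LoRA fine-tuning sketch (Hugging Face transformers + peft).
      # Model name and hyperparameters are illustrative placeholders.
      from transformers import AutoModelForCausalLM, AutoTokenizer
      from peft import LoraConfig, get_peft_model

      base = "meta-llama/Llama-3.2-3B"  # any open-weights base model
      tokenizer = AutoTokenizer.from_pretrained(base)
      model = AutoModelForCausalLM.from_pretrained(base)

      # Train small low-rank adapters instead of the full weight matrices.
      config = LoraConfig(
          r=16,                                 # adapter rank
          lora_alpha=32,                        # scaling factor
          target_modules=["q_proj", "v_proj"],  # attention projections
          lora_dropout=0.05,
          task_type="CAUSAL_LM",
      )
      model = get_peft_model(model, config)
      model.print_trainable_parameters()  # typically well under 1% of the weights
      # ...then train with transformers.Trainer on your own corpus as usual.
      ```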

      Also, you don’t need a specifically Brazilian LLM, because all of the big LLMs are already very multilingual.

      Spending $3,000 on training is still really cheap, and depending on the size of the model, you can get away with training on a 24GB or 32GB card, which costs you only the price of the card and the electricity. LoRAs take almost nothing to train. Any university worth anything has the resources to train a model like that. None of these arguments hold water.
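      Fitting that on a 24GB card usually means loading the frozen base model in 4-bit and training only the LoRA adapters (the QLoRA approach). A hedged sketch, assuming transformers with bitsandbytes installed; again the model name is a placeholder:

      ```python
      # Sketch: load a base model in 4-bit so a LoRA fine-tune fits on a
      # 24GB consumer card (QLoRA-style). Model name is a placeholder.
      import torch
      from transformers import AutoModelForCausalLM, BitsAndBytesConfig

      bnb = BitsAndBytesConfig(
          load_in_4bit=True,                      # quantize weights to 4-bit
          bnb_4bit_quant_type="nf4",              # NF4 quantization
          bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
      )
      model = AutoModelForCausalLM.from_pretrained(
          "meta-llama/Llama-3.2-3B",  # placeholder open-weights model
          quantization_config=bnb,
          device_map="auto",          # place layers on available GPUs
      )
      # Attach LoRA adapters as in the sketch above and train only those;
      # a frozen 4-bit base plus adapters fits in 24GB for models in the
      # ~3B–8B range.
      ```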