This is another big win for the red team, at least in my view. They developed a “fully open” 3B-parameter model family trained from scratch on AMD Instinct™ MI300X GPUs.
AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) […]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B […].
As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png), this model outperforms the other current “fully open” models and comes close to open-weight-only models.
One step further. Thank you, AMD.
PS: Not doing AMD propaganda, just thanking them for helping and contributing to the open-source world.
That’s one more than 2B, so she must be really hot!
/nierjokes
AMD knew what they were doing.
That’s a real stretch. 3B is basically stating the size of the model, not the name of the model.
Are you calling her fat?
Scott Steiner is
Can’t judge you for wanting to **** her or whatever, just don’t ask her for freebies. She won’t care if you are a human at that point.