Not true. 500 MB models suck ass and are just there for fun, but a lot of local models in the 2.5 GB range can run on a phone and produce very coherent output, on par with free-to-use LLMs, without actually destroying the planet (while using them, I mean; training is still a nightmare).
“Fun” fact: political bias is baked into local models too; don’t ask Qwen3 what happened in Tiananmen Square in 1989…
Technically correct, but any suggestions on how to tell the different qualities of models apart?
Local LLMs have barely reached the point where the output is comprehensible; in the meantime, ChatGPT can effectively scam elderly people.