Yeah I agree. Small models are the way. You can also use LoRA/QLoRA adapters to "fine tune" the same big model for specific tasks and swap the use case at runtime. This is what Apple does with Apple Intelligence. You can outperform a big general LLM with an SLM if you have a nice specific use case and some data (which you can synthesise in some cases).
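To make the adapter-swapping idea concrete, here's a toy pure-Python sketch (not Apple's or any real framework's API — the task names, shapes, and numbers are all illustrative). The base weight W stays frozen; each task gets a low-rank pair (A, B), and "swapping the use case" just means picking a different pair when computing W + scale * (B @ A):

```python
# Toy LoRA-style adapter swapping: one frozen base weight, per-task
# low-rank updates. Everything here is illustrative, not a real API.

def matmul(X, Y):
    """Naive matrix multiply for small toy matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

def add(X, Y, scale=1.0):
    """Elementwise X + scale * Y."""
    return [[x + scale * y for x, y in zip(rx, ry)]
            for rx, ry in zip(X, Y)]

# Frozen 2x2 base weight shared by every task.
W = [[1.0, 0.0],
     [0.0, 1.0]]

# Two hypothetical task adapters, each rank 1: B is 2x1, A is 1x2,
# so each adapter stores 4 numbers instead of a full 2x2 matrix.
adapters = {
    "summarise": ([[1.0], [0.0]], [[0.5, 0.5]]),
    "classify":  ([[0.0], [1.0]], [[0.25, -0.25]]),
}

def effective_weight(task, scale=1.0):
    """Swap in a task adapter: W_eff = W + scale * (B @ A)."""
    B, A = adapters[task]
    return add(W, matmul(B, A), scale)

W_sum = effective_weight("summarise")  # -> [[1.5, 0.5], [0.0, 1.0]]
W_cls = effective_weight("classify")   # -> [[1.0, 0.0], [0.25, 0.75]]
```

The point of the low-rank factorisation is that each adapter is tiny relative to the base model, so keeping many of them resident and switching per request is cheap.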