- cross-posted to:
- technology@lemmy.ml
- technology@beehaw.org
A new paper suggests diminishing returns from larger and larger generative AI models. Dr Mike Pound discusses.
The Paper (No “Zero-Shot” Without Exponential Data): https://arxiv.org/abs/2404.04125
There’s a lot to optimize in LLMs, and I never said otherwise. That said, photonic computers, if the field were properly researched, could consume as little power as an LED lamp, making them even more efficient than our brain. Given the total number of computers in the world, even the slightest power-consumption optimization would save a colossal amount of energy, and in the case of photonics the raw numbers could be unimaginable.
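To illustrate the scaling argument, here’s a minimal back-of-envelope sketch; the device count and per-device saving are purely hypothetical numbers I’m making up for the arithmetic, not measurements from anywhere:

```python
# Illustrative back-of-envelope only: every figure below is an assumption
# chosen for the sake of the arithmetic, not a measurement.

devices = 2e9          # hypothetical number of always-on computing devices
watts_saved = 1.0      # hypothetical saving per device, in watts
hours_per_year = 24 * 365

# Energy saved per year in terawatt-hours: W * devices * hours / 1e12
twh_saved = watts_saved * devices * hours_per_year / 1e12
print(f"~{twh_saved:.0f} TWh saved per year")  # roughly 18 TWh with these assumptions
```

Even a single watt per device adds up to the annual output of several power plants, which is the point about scale.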
I bet they’ll simply find a way to greatly simplify the mathematical apparatus of neuron interaction. Matrix multiplication is kinda slow, and there’s a lot of it. A rough sketch of just how much is below.
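To put some rough numbers on that, here’s a minimal sketch in Python; the layer dimensions are my own illustrative choices (roughly GPT-3-scale), not figures from the paper or the video, and it only counts the dense weight matmuls per token, ignoring the sequence-length-dependent attention-score products:

```python
# Back-of-envelope estimate of how matrix multiplications dominate the
# per-token cost of one transformer layer. Dimensions are hypothetical,
# chosen only for illustration.

d_model = 12288       # hidden size (assumed)
d_ff = 4 * d_model    # feed-forward width (common 4x convention, assumed)

# A (1 x n) vector times an (n x m) weight matrix costs about 2*n*m
# floating-point operations (one multiply + one add per output entry).
def matmul_flops(n, m):
    return 2 * n * m

# Attention projections: Q, K, V and the output projection, each d_model x d_model.
attn_flops = 4 * matmul_flops(d_model, d_model)

# Feed-forward block: two matmuls, d_model -> d_ff -> d_model.
ff_flops = matmul_flops(d_model, d_ff) + matmul_flops(d_ff, d_model)

total = attn_flops + ff_flops
print(f"attention projections: {attn_flops / 1e9:.1f} GFLOPs per token")
print(f"feed-forward block   : {ff_flops / 1e9:.1f} GFLOPs per token")
print(f"total (all matmuls)  : {total / 1e9:.1f} GFLOPs per token")
```

With these assumed dimensions, essentially the entire per-token cost of the layer is matrix multiplication, which is why any simplification of that step (quantization, low-rank factorization, sparsity, or new hardware) pays off so directly.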