A new paper from Apple's artificial intelligence scientists has found that engines based on large language models, such as those from Meta and OpenAI, still lack basic reasoning skills.
I still fail to see how people expect LLMs to reason. It’s like expecting a slice of pizza to reason. That’s just not what it does.
Then again, Porsche managed to make a car with the engine in the most idiotic place win literally everything on Earth, so I guess I'll leave open a small possibility that the slice of pizza will outreason GPT-4.
The article draws a rather good analogy between people who think LLMs can reason and people who believe in mentalists.
That’s a great article.
LLMs keep getting better at imitating humans, so to those who don't know how the technology works, it can seem as if they think for themselves.