Screenshot of this question was making the rounds last week. But this article covers testing against all the well-known models out there.
Also includes outtakes on the ‘reasoning’ models.
You shouldn't have to. If you ask a person that question they'll respond "what good is walking to the car wash, dumbass." If AI can't figure that out, it's trash.
A person would look at you like you are an idiot if you asked this question.
The AI tool I asked said walking saves money, gets exercise, etc.
Asked about the car and it said the car is at the car wash, otherwise why would you ask how to get there?
Missing the point. Any person would know walking to the car wash isn't reasonable. You shouldn't have to craft a perfectly tailored prompt for AI to realize that. If you think this is a gotcha, then whoa boy, I've got a bridge to sell ya!
You are missing the point. Any reasonable person would wonder why you're asking a stupid question.
Which is why, when asked, the AI said of course the car is there; you must be asking either a trick question or for another reason.
It could be that. Or it could be that the AI gives the illusion of reasoning, and this is an example of the illusion breaking. But no, it was probably that it knew it was a trick question and decided to answer wrongly because it is very, very smart. Yeah.