Screenshot of this question was making the rounds last week. But this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • aloofPenguin@piefed.world · ↑29 ↓3 · edited · 5 hours ago

    I tried this with a local model on my phone (qwen 2.5 was the only thing that would run), and it gave me this confusing output (not really a definite answer…):
    JqCAI6rs6AQYacC.jpg

    it just flip flopped a lot.

    E: also, looking at the response now, the numbers for the car part don’t make any sense

    • AbidanYre@lemmy.world · ↑4 · edited · 3 hours ago

      I like that it’s twice as far to drive for some reason. Maybe it’s getting added to the distance you already walked?

    • MangoCats@feddit.it · ↑1 · 3 hours ago

      I notice that the “internal thinking” of Opus 4.6 does more flip-flopping than earlier models like Sonnet 4.5, yet it arrives at correct answers more often.