A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • eronth@lemmy.world · 2 months ago

    Or at least they can’t reason the way we do about our physical world.

    • zalgotext@sh.itjust.works · 2 months ago

      No, they cannot reason, by any definition of the word. LLMs are statistics-based autocomplete tools. They don’t understand what they generate, they’re just really good at guessing how words should be strung together based on complicated statistics.
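      To make “statistical autocomplete” concrete, here’s a toy sketch in Python - a tiny bigram model, nowhere near what a real transformer does, but the same basic idea of frequency-weighted next-word guessing:

      ```python
      # Toy "statistical autocomplete": pick the next word purely from how often
      # words followed each other in the training text. No meaning, just counts.
      import random
      from collections import defaultdict, Counter

      corpus = "the cat sat on the mat the cat ate the fish".split()

      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def next_word(word):
          """Sample the next word in proportion to how often it followed `word`."""
          counts = following[word]
          if not counts:          # dead end: nothing ever followed this word
              return None
          words, weights = zip(*counts.items())
          return random.choices(words, weights=weights)[0]

      out = ["the"]
      for _ in range(5):
          nxt = next_word(out[-1])
          if nxt is None:
              break
          out.append(nxt)
      print(" ".join(out))        # e.g. "the cat sat on the mat"
      ```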

        • zalgotext@sh.itjust.works · 2 months ago

          I can be convinced by contrary evidence if provided. There is no evidence of reasoning in the example you linked. All that proved was that if you prime an LLM with sufficient context, it’s better at generating output, which is honestly just more support for calling them statistical autocomplete tools. Try asking it those same questions without feeding it your rules first, and I bet it doesn’t generate the right answers. Try asking it those questions 100 times after feeding it the rules, and I bet it’ll generate the wrong answers a few times.
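          If anyone actually wants to run that test, here’s roughly what I mean - Python, with `ask_model` as a stand-in for whatever chat API you’d call, and obviously made-up placeholder strings and success rates just so the sketch runs:

          ```python
          import random

          # Sketch of the experiment above: same question, with and without the
          # priming rules, repeated 100 times, counting correct answers.
          RULES = "<the 16 container rules go here>"
          QUESTION = "<one of the carefully worded questions>"
          EXPECTED = "<the known correct answer>"

          def ask_model(prompt: str) -> str:
              # Placeholder only: swap in a real chat-API call. This fake version
              # just "answers correctly" more often when the rules are in the prompt.
              primed = RULES in prompt
              return EXPECTED if random.random() < (0.9 if primed else 0.3) else "wrong"

          def run_trials(prime_with_rules: bool, n: int = 100) -> int:
              prompt = f"{RULES}\n\n{QUESTION}" if prime_with_rules else QUESTION
              return sum(EXPECTED in ask_model(prompt) for _ in range(n))

          print("with rules:   ", run_trials(True), "/ 100 correct")
          print("without rules:", run_trials(False), "/ 100 correct")
          ```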

          If LLMs are truly capable of reasoning, they shouldn’t need your 16 very specific rules on “arithmetic with extra steps” to get your very carefully worded questions correct. Your questions shouldn’t need to be carefully worded. They shouldn’t get tripped up by trivial “trick questions” like the original one in the post, or any of the dozens of other questions like it that LLMs have proven incapable of answering on their own. The fact that all of those things do happen supports my claim that they do not reason, or think, or understand - they simply generate output based on their input and internal statistical calculations.

          LLMs are like the Wizard of Oz. From afar, they look like these powerful, all-knowing things. They speak confidently and convincingly, and are sometimes even correct! But once you get up close and peek behind the curtain, you realize that it’s just some complicated math, clever programming, and a bunch of pirated books back there.

            • zalgotext@sh.itjust.works · 2 months ago

              It needed the rules, and it needed carefully worded questions that matched the parameters set by the rules. I bet if the questions’ wording didn’t match your rules so exactly, it would generate worse answers. Heck, I bet if you gave it the rules, then asked several completely unrelated questions, then asked it your carefully worded rules-based questions, it would perform worse, because its context window would be muddied. Because that’s what it’s generating responses based on - the contents of its context window, coupled with stats-based word generation.

              I still maintain that it shouldn’t need the rules if it’s truly reasoning, though. LLMs train on a massive set of data; surely the information required to reason out the answers to your container questions is in there. Surely if it can reason, it should be able to generate answers to simple logical puzzles without someone putting most of the pieces together for it first.

    • Nalivai@lemmy.world · 2 months ago

      You’re falling into the same trap. When the letters on the screen tell you something, it’s not necessarily the truth. When “I’m reasoning” is written in a chatbot window, it doesn’t mean that there is something in there that’s actually reasoning.