Evaluating 35 open-weight models across three context lengths (32K, 128K, 200K), four temperatures, and three hardware platforms—consuming 172 billion tokens across more than 4,000 runs—we find that the answer is “substantially, and unavoidably.” Even under optimal conditions—best model, temperature chosen specifically to minimize fabrication—the floor is non-zero and rises steeply with context length. At 32K, the best model (GLM 4.5) fabricates 1.19% of answers, top-tier models fabricate 5–7%, and the median model fabricates roughly 25%.

  • jacksilver@lemmy.world · 2 months ago

    Just for context, this is the error rate when the right answer is provided to the LLM in a document. This means that even when the answer is handed directly to the LLM, it still fails at the rates reported in the article/paper.

    Most people interacting with LLMs aren’t asking questions against documents, or the answer can’t be directly inferred from the documents (e.g. asking the LLM to reason about the material in the documents).

    That means in most situations the error rate for the average user will be significantly higher.

    • rekabis@lemmy.ca · 2 months ago

      As I pointed out in another root comment, the average - depending on the model being tested - tends to sit between 60% and 80%. But this is with no restriction on source materials… the LLMs are essentially pulling from world+dog in that case.

      So this opens up an interesting option for users, in that hallucinations/inaccuracies can be controlled for and potentially reduced by as much as ⅔ (roughly from the 60–80% seen in the wild down to the ~25% median here) simply by restricting the model to documents/resources that the user is absolutely certain contain the correct answer.

      I mean, 25% is still stupidly high. In any prior era, even 2.5% would have been an unacceptably high error rate for a business to stomach. But source-restriction seems to be a somewhat promising guardrail to use for the average user doing personal work.

      • jacksilver@lemmy.world · 2 months ago

        Thanks for providing the actual numbers.

        I think one of the more concerning things is: what if you think the answer is in the documents you provided, but it actually isn’t? What you think is a low error rate could actually be a much higher one.

      • [deleted]@piefed.world · 2 months ago

        Aka being wrong, but with a fancy name!

        When Cletus is wrong because he mixed up a dog and a cat when describing their behavior, do we call it hallucinating? No.

        • Scipitie@lemmy.dbzer0.com · 2 months ago

          Accepting concepts like “right” and “wrong” gives those tools way too much credit, basically following the AI narrative of the corporations behind them. Those terms can only be applied to the output, not to the tool itself.

          To be precise:

          LLMs can’t be right or wrong because the way they work has no link to any reality - it’s stochastics, not evaluation. I also don’t like the term hallucination for the same reason. It’s simply a too-high temperature setting making the model jump to a nearby but unrelated set of vectors.

          Why this is an important distinction: arguing that an LLM is wrong is arguing on the terms of ChatGPT and the like. It then becomes “oh, but we’ll make them better!”, and their marketing departments rejoice.

          To take your calculator analogy: just as floating-point errors are inherent to those tools, wrong outputs are a core part of LLMs.

          We can minimize that, but then they automatically lose part of their function. This limitation hits LLMs far harder than limiting a calculator to 16 digits after the decimal point, though…

            • Scipitie@lemmy.dbzer0.com · 2 months ago

              That’s my problem: any single word humanizes the tool, in my opinion. Perhaps something like “stochastic debris” comes close, but there’s no chance of countering the combined force of pop culture, corp speak, and humanity’s talent for seeing humanoid behavior everywhere but in each other. :(

                • deranger@sh.itjust.works · 2 months ago

                  Pareidolia just means seeing patterns that aren’t there; it’s not implicitly human. If you see a dog in the clouds, that’s pareidolia.

            • eceforge@discuss.tchncs.de · 2 months ago

              No comment on the rest of the thread, but I always thought “confabulation” was a more accurate word than “hallucination” for what LLMs tend to do.

              The “signs and symptoms” part of the article really seems oddly familiar when compared to interacting with an LLM sometimes haha.

              • dogzilla@masto.deluma.biz · 2 months ago

                @eceforge @technology I never understood why we don’t just call it “lying”. I mean, I understand why AI companies don’t call it that, but that’s what it is, and I don’t think we’re helping ourselves by using a euphemism.

            • leftzero@lemmy.dbzer0.com · 2 months ago

              Scam. We’re being sold an autocomplete tool as a search engine.

              Or fraud, since some of the same companies destroyed the functionality of their search engines in order to make the autocomplete look better in comparison.

        • bad1080@piefed.social · 2 months ago

          If you have a lobby you get special names; look at the pharma industry, which coined the term “discontinuation syndrome” for simple “withdrawal”.

    • Zink@programming.dev · 2 months ago

      I’m no expert and don’t care to become one, but I understand they generally trained these models on the entire public internet plus all the literature and research they could pirate.

      So I would expect the outputs of those models to not be some kind of magical correct description of the world, but instead to be roughly “this passes for something a person on the internet might write.”

      It does the thing it was designed to do pretty well. But then the sociopathic grifters tried to sell it to the world as a magic super-intelligence that actually knows things. And of course many small-time wannabe grifters ate it up.

      What LLMs do is get you a passable elaborate forum post replying to your question, written by an extremely confident internet rando. But it’s done at computer speed and global scale!

  • CubitOom@infosec.pub · 2 months ago

    I’m not good at math, so someone please help me.

    If a model hallucinates 1% of the time for every question in a chat window that has 100 prompts in it, what is the chance of receiving a hallucination at some point in the chat?

    • hersh@literature.cafe · 2 months ago

      If I understand you correctly: 63.4% odds of having at least one hallucination.

      The simple way to calculate the odds of getting at least one error is to calculate the odds of having ZERO, and then inverting that.

      If the odds of a single instance being an error is 1%, that means you have a 99% chance of having no errors. If you repeat that 100 times, then it’s 99% of 99% of 99%…etc. In other words, 0.99^100 = 0.366. That’s the odds of getting zero errors 100 times in a row. The inverse of that is 0.634, or 63.4%.

      This is the same way to calculate the odds of N coin flips all coming up heads. It’s going to be 0.5^N. So the odds of getting 10 heads in a row is 0.5^10 = ~0.0977%, or 1:1024.

      Edit: This is assuming independence of all 100 prompts, which is not generally true in a single chat window, where each prompt follows the last and retains both the previous prompts and answers in its context. As the paper explains, error rate tends to increase with context length. You should generally start a new chat rather than continue in an existing one if the previous context is not highly relevant.
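
      To make the arithmetic above concrete, here is a minimal Python sketch of the same calculation (the 1% per-prompt rate is the hypothetical from the question, and independence across prompts is assumed):

```python
# Chance of at least one hallucination across a chat, assuming each
# prompt independently has a fixed per-prompt error rate.
p_error = 0.01    # hypothetical per-prompt hallucination rate (1%)
n_prompts = 100   # prompts in the chat window

# Probability that every prompt comes back clean: 0.99^100 ≈ 0.366
p_all_clean = (1 - p_error) ** n_prompts

# Complement: at least one hallucination ≈ 0.634, i.e. ~63.4%
p_at_least_one = 1 - p_all_clean

print(f"P(no hallucinations)          = {p_all_clean:.3f}")
print(f"P(at least one hallucination) = {p_at_least_one:.3f}")
```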

    • Telorand@reddthat.com · 2 months ago

      One in 100. However, that is simply a measure of probability, so do not expect that to always be true for every 100 prompts.

      For example, if you rolled a 100-sided die 100 times, it’s possible to get a one every time. In practice, it would likely be a mix. You might have a session where you get no wrong answers and times when you get several.

      The problem is that ignorant people trust these models implicitly, because they sound convincing and authoritative, and many people are not equipped to be able to vet the information being generated (also notice I didn’t say “retrieved”).

  • rekabis@lemmy.ca · 2 months ago

    How much do large language models actually hallucinate when answering questions grounded in provided documents?

    Okay, this is looking promising, at least in terms of the most important qualifications being plainly stated in the opening line.

    Because the amount of hallucinations/inaccuracies “in the wild” - depending on the model being tested - runs about 60-80%. But then again, this would be average use on generalized data sets, not questions focusing on specific documentation. So of course the “in the wild” questions will see a higher rate.

    This also helps users, as it shows that hallucinations/inaccuracies can be reduced by as much as ⅔ by simply limiting LLMs to specific documentation that the user is certain contains the desired information, rather than letting them trawl world+dog.

    Very interesting!

    • HubertManne@piefed.social · 2 months ago

      I have been saying this for a while. I am sorta hoping we see open-source LLMs that are trained on a curated list of literature. It’s funny that these came out and it seemed like the makers did not take the long-known garbage-in, garbage-out principle into account.

  • FauxLiving@lemmy.world · 2 months ago

    At 32K, the best model (GLM 4.5) fabricates 1.19% of answers

    Not bad, I don’t know many people who are 98.81% accurate in their statements.

        • [deleted]@piefed.world · 2 months ago

          That’s right! We should be comparing computers to computers. Well, hardware computers, not people computers.

          • FauxLiving@lemmy.world · 2 months ago

            Calculators are not computers; computers contain calculator-like elements, but a calculator is no more a computer than a passenger jet is a coffee shop by virtue of having a coffee pot onboard.

            Calculators cannot fabricate answers, but nor are they 100% correct due to things like bitflips and square root approximations. They also cannot write text, so the comparison would make even less sense.

            LLMs and humans can fabricate answers in written text, so comparing the fabrication rate in written text of an LLM to that of a human (both entities that generate their answers with neural networks) makes more sense than comparing either to a calculator, which neither uses a neural network nor produces text.

            So ‘we’ should compare like things and not choose items based on superficial similarities.

          • ji59@hilariouschaos.com · 2 months ago

            What do you even mean? Calculators and LLMs solve different problems, and there are a lot of calculators and a lot of LLMs. Also, calculator accuracy could be said to approach 0%, because they all have limited precision and there are infinitely many numbers. Some calculators can’t even correctly answer 0.1 + 0.2, while most LLMs can.
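
            For what it’s worth, the 0.1 + 0.2 point is about binary floating point rather than calculators specifically; here is a quick Python sketch of the rounding error being referred to (decimal.Decimal is just one way to show the exact result):

```python
from decimal import Decimal

# IEEE 754 doubles cannot represent 0.1 or 0.2 exactly, so their sum
# carries a tiny rounding error instead of being exactly 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Exact decimal arithmetic sidesteps the binary rounding error.
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```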

    • Iconoclast@feddit.uk · 2 months ago

      It’s a pleasure to meet you! The only thing exceeding my level of wisdom is my modesty.

  • FrankLaskey@lemmy.ml · 2 months ago

    My biggest takeaway here is that choosing the context length and (to a lesser extent) the temperature carefully is important for reducing hallucinations. I expected model families to vary widely between themselves but not for context length to have such a massive impact tbh.

    It seems from this like reducing context length in applications where it isn’t essential for the model to hold very large amounts of context simultaneously would be best practice no?
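
    As a rough illustration of what that could look like in practice, here is a hedged Python sketch of trimming older chat history before each request; the character budget and the `history` structure are illustrative stand-ins, not anything from the paper or a specific LLM API:

```python
# Hypothetical sketch: cap how much prior chat history is sent with each
# request so the model's effective context stays short. `history` is just
# a list of (role, text) tuples; no particular LLM API is assumed.

MAX_CONTEXT_CHARS = 8_000  # rough budget; tune to your model's token limit


def trim_history(history):
    """Keep only the most recent messages that fit within the budget."""
    kept, used = [], 0
    for role, text in reversed(history):  # walk from newest to oldest
        if used + len(text) > MAX_CONTEXT_CHARS:
            break
        kept.append((role, text))
        used += len(text)
    return list(reversed(kept))  # restore chronological order


# Example: only the tail of a long conversation would be forwarded.
history = [("user", f"message {i}") for i in range(10_000)]
print(len(trim_history(history)))  # far fewer than 10,000 messages
```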

    • cmhe@lemmy.world · 2 months ago

      Hallucinations of LLMs are just one class of errors, and the most dangerous one.

      Other problems, like garbled or repeating output, are different classes of error.

  • unpossum@sh.itjust.works · 2 months ago

    GLM 4.5 is from August. Isn’t the real tl;dr that a seven-month-old open model, which was behind proprietary models at the time, did better than most humans would?

    • MHard@lemmy.world · 2 months ago

      The task described in this article is asking questions about a document that was provided to the LLM in its context.

      I would hope that if you give a human a text and ask them to cite facts from it they would do better than 99% correct.

      Also, when the context exceeded 200K tokens, the LLM error rate was higher than 10%.

      • unpossum@sh.itjust.works · 2 months ago

        I would hope that if you give a human a text and ask them to cite facts from it they would do better than 99% correct.

        That’s literally what school exams are about, isn’t it?

        The token window is a problem for all LLMs, though; that’s not easily solved, but it can be worked around to a certain extent.

  • HubertManne@piefed.social · 2 months ago

    This is why I would encourage people to use LLMs for something not important, like video games or personal interests. You will likely have enough knowledge around those things to catch the “hallucinations”, and hopefully that will give you perspective on their use for more important things.

      • HubertManne@piefed.social · 2 months ago

        See, if they don’t use them at all, they can fall victim to thinking the models are better than they are. Using one a bit on something unimportant that you are knowledgeable enough about lets you see the flaws, and it does not take that much time to see them.