• CileTheSane@lemmy.ca

    A better mathematical system of storing words does not mean the LLM understands any of them. It just has a model that represents the relationships between words, and that model is what it uses.
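
    To make that concrete, here’s a minimal sketch of that kind of relational model, using made-up toy vectors (real models learn embeddings with thousands of dimensions, but the idea is the same: words are points, and “relations” are just geometry):

    ```python
    from math import sqrt

    # Toy "embeddings" -- made-up numbers purely for illustration
    embeddings = {
        "king":  [0.9, 0.8, 0.1],
        "queen": [0.9, 0.7, 0.2],
        "apple": [0.1, 0.2, 0.9],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norms = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
        return dot / norms

    # "king" is numerically closer to "queen" than to "apple" -- a stored
    # relation between symbols, not an understanding of royalty or fruit.
    print(cosine(embeddings["king"], embeddings["queen"]))  # ≈ 0.99 (close)
    print(cosine(embeddings["king"], embeddings["apple"]))  # ≈ 0.30 (far)
    ```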

    If I put 10 minus 8 into my calculator I get 2. The calculator doesn’t actually understand what 2 means, or what subtraction represents; it just runs the commands that give the appropriate output.
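
    Roughly what the calculator is doing, as a toy sketch (not any real calculator’s firmware): it maps the “-” symbol to a procedure and runs it, with no notion of what 2 or subtraction mean.

    ```python
    import operator

    # Map symbols to procedures -- a blind lookup table
    ops = {"+": operator.add, "-": operator.sub}

    def calculate(a, op, b):
        return ops[op](a, b)  # look up the command, run it, return the output

    print(calculate(10, "-", 8))  # 2
    ```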

    • CeeBee_Eh@lemmy.world

      That’s a bad analogy, because the calculator wasn’t trained using an artificial neural network literally designed by studying biological brains (aka biological neural networks).

      And “understand” doesn’t equate to consciousness or sapience. For example, it is entirely and factually correct to state that an LLM is capable of reasoning. That’s not even up for debate. The accuracy of an LLM’s reasoning capability is one of the fundamental benchmarks used for evaluating its quality.

      But that doesn’t mean it’s “thinking” in the way most people consider.

      • CileTheSane@lemmy.ca

        “it is entirely and factually correct to state that an LLM is capable of reasoning”

        Citation needed.

        If you’re going to tell me LLMs are modeled after biological brains and capable of reasoning then I call bullshit on your claims that you actually work in AI.

        Imagine you put a man in an enclosed room. There is a slot in the wall where messages written in Chinese get passed through. The man does not speak Chinese or even recognize the written language; to him they’re just weird symbols.
        First the man is shown examples of sequences of symbols to train him. Then he is shown incomplete sequences and asked which symbol comes next. If he’s incorrect he is corrected; if he’s correct he gets a cookie. Eventually, through continued practice, this man is able to carry on “conversations” with people in Chinese.
        This man still does not speak Chinese, and he is not having reasoned, rational arguments with the people he is conversing with. If you told him it was a language he’d look at you like you’re crazy. “There’s no language here. If I get these symbols and I next put down the one that looks like a man wearing a hat, they give me a cookie.”
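
        As a rough sketch of that training loop (a toy frequency-based predictor with arbitrary example symbols; real LLMs do a far more elaborate statistical version of this, but they’re still predicting the next symbol):

        ```python
        from collections import defaultdict, Counter

        # For each symbol, count which symbols have followed it during "training"
        follow_counts = defaultdict(Counter)

        def train(sequences):
            for seq in sequences:
                for prev, nxt in zip(seq, seq[1:]):
                    follow_counts[prev][nxt] += 1  # getting it right = a cookie

        def predict_next(symbol):
            # Answer with whatever most often followed this symbol in training
            return follow_counts[symbol].most_common(1)[0][0]

        train(["人好", "人好", "人坏"])
        print(predict_next("人"))  # prints "好" -- chosen by frequency, not meaning
        ```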

        Thinking LLMs are capable of reasoning is the digital equivalent of putting eyes on a pencil then feeling bad when it gets broken in half.