Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.

  • Spzi

    I guess you’re right, but I find this a very interesting point nevertheless.

    How can we tell? How can we tell that we use and understand language? How would that be different from an arbitrarily sophisticated text generator?

    For the sake of the comparison, we should talk about the presumed intelligence of other people, not our (“my”) own.

    • Utsob Roy@lemmy.world

      In the case of current LLMs, we can tell. These LLMs are not black boxes to us. It is hard to follow the threads of their decisions because those decisions are just a hodgepodge of statistics and randomness, not because they are intricate thoughts.
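      To make the “hodgepodge of statistics and randomness” concrete, here is a toy sketch (made-up scores and vocabulary, not any real model’s code): the network produces scores for possible next words, softmax turns them into probabilities, and the output is sampled at random from that distribution.

      ```python
      # Toy illustration: an LLM's "decision" at each step is sampling the next
      # token from a probability distribution -- statistics plus randomness.
      import math
      import random

      def sample_next_token(logits, temperature=0.8):
          """Turn raw scores into probabilities (softmax) and sample one token id."""
          scaled = [x / temperature for x in logits]
          m = max(scaled)                          # subtract max for numerical stability
          exps = [math.exp(x - m) for x in scaled]
          total = sum(exps)
          probs = [e / total for e in exps]
          return random.choices(range(len(probs)), weights=probs, k=1)[0]

      # Made-up vocabulary and scores for the prompt "The cat sat on the ..."
      vocab = ["mat", "moon", "keyboard"]
      logits = [2.1, 0.3, 1.2]
      print(vocab[sample_next_token(logits)])      # usually "mat", sometimes not
      ```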

      We probably can’t compare the outputs, but we can compare the learning. Imagine a human who had consumed all the literature, ethics, history, and every other kind of text these LLMs are trained on; no amount of trick questions would lead them to believe in racial cleansing or any such disconcerting ideas. LLMs read so much, and learned so little.

      • Spzi

        “LLMs generate texts. They don’t use language.”

        “How can we tell? How can we tell that we use and understand language? How would that be different from an arbitrarily sophisticated text generator?”

        “In the case of current LLMs, we can tell.”

        At this point in the conversation, I was not asking for more statements about AIs. Instead, I was interested in statements about human usage of language, or a comparison between the two.

        “LLMs read so much, and learned so little.”

        ChatGPT did understand my question as intended, using the context provided.

        See, I don’t argue that LLMs are super intelligent, deeply understand the meaning of words, or can use language like a master poet. Instead, I’m questioning whether our own, human ability to do so is actually as superior as we might like to believe.

        I don’t even mean that we often err, which we obviously do, myself included. The question is: Is our understanding and usage of language anything other than lots and lots of algorithms stacked on each other? Is there a fundamental, qualitative difference between us and LLMs? If there is, “how can we tell”?