Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.

  • Spzi · 1 year ago

    > LLMs generate texts. They don’t use language.

    How can we tell? How can we tell that we use and understand language? How would that be different from an arbitrarily sophisticated text generator?

    > In the case of current LLMs, we can tell.

    At this point in the conversation, I was not asking for more statements about AIs. Instead, I was interested in statements about human usage of language, or a comparison between the two.

    > LLMs read so much, and learned so little.

    ChatGPT did understand my question as intended, using the context provided.

    See, I don’t argue that LLMs are superintelligent, deeply understand the meaning of words, or can use language like a master poet. Instead, I’m questioning whether our own, human ability to do so is actually as superior as we might like to believe.

    I don’t even mean that we often err, which we obviously do, myself included. The question is: Is our understanding and usage of language anything other than lots and lots of algorithms stacked on top of each other? Is there a qualitative difference in principle between us and LLMs? If there is, “how can we tell”?