• barsoap

    What does it mean to “know what a word means”?

    For one, ChatGPT has no idea what a cat or dog looks like. It has no understanding of how their movements differ in character. Lacking that kind of non-verbal understanding, even when analysing art that actually is in its domain – that is, poetry – it couldn’t even begin to make sense of the question “does this poem have feline or canine qualities”. The best it can do is recognise that there are neither cats nor dogs in it and, being stumped, make up some utter nonsense. Maybe it has heard of “catty”, and that dogs are loyal, and will go looking for those themes – but feline and canine as in elegance? Forget it, unless it has read a large corpus of poetry analysis that uses those terms: It can parrot that pattern matching, but it can’t do the pattern matching itself; it cannot transfer knowledge from one domain to another when it has no access to one of those domains.

    And that’s the tip of the iceberg. As humans we’re not really capable of purely symbolic thought, so it’s practically impossible for us to appreciate just how limited those systems are, given that they’re not embodied.

    (And, yes, Stable Diffusion has some understanding of feline vs. canine as in elegance – but it’s an utter moron in other areas. It can’t even count to one.)


    Then, all that said, and even more fundamentally: ChatGPT (like all other current AI algorithms we have) is a T2 system, not a T3 system. It comes with rules for how to learn; it doesn’t come with rules enabling it to learn how to learn. As such it never thinks – it cannot think, as in “mull over”. It reacts with what passes for a gut in AI land, and never with “oh, I’m not sure about this, so let me mull it over”. It is in principle capable of not being sure, but that doesn’t mean it can do anything to rectify the situation.
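    To make that distinction concrete, here’s a minimal Python sketch of how I read T2 vs. T3 – the function names and the toy adaptation rule are purely illustrative assumptions, not anyone’s actual system:

    ```python
    # T2: the learning rule itself is fixed; only the parameters change.
    def t2_step(params, grads, lr=0.01):
        # Plain gradient descent. The update rule never changes, no matter what happens.
        return [p - lr * g for p, g in zip(params, grads)]

    # T3: the system can also revise its own learning rule ("learn how to learn").
    def t3_step(params, grads, lr, prev_grads=None, meta_lr=0.001):
        # Toy meta-learning: if successive gradients agree, grow the learning rate;
        # if they disagree, shrink it. The rule rewrites its own parameter.
        if prev_grads is not None:
            agreement = sum(g * pg for g, pg in zip(grads, prev_grads))
            lr = max(1e-6, lr + meta_lr * agreement)
        return [p - lr * g for p, g in zip(params, grads)], lr
    ```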

    • lloram239@feddit.de

      it couldn’t even begin to make sense of the question “does this poem have feline or canine qualities”

      Which is obviously false, as a quick try will show. Poems are just language, and LLMs understand language very well. That LLMs have no idea what cats actually look like or how they move, beyond what they can gather from textbooks, is irrelevant here; they aren’t tasked with painting a picture (which the upcoming multi-modal models can do anyway).

      Now, there can of course be problems that can be expressed in language but not solved within the realm of language. I find those to be incredibly rare, though – rare enough that I’ve never really seen a good example. ChatGPT captures an enormous amount of knowledge about the world, and humans have written about a lot of stuff. Coming up with questions that would be trivial for any human to answer, but impossible for ChatGPT, is quite tricky.

      And that’s the tip of the iceberg.

      Have you ever actually seen an iceberg, or have you just read about them?

      It comes with rules for how to learn; it doesn’t come with rules enabling it to learn how to learn

      ChatGPT doesn’t learn. It’s a completely static model that doesn’t change. All the learning happened in a separate step back when it was created; it doesn’t happen when you interact with it. The illusion of learning comes from the text prompt – which includes both your text and its own output – getting fed back into the model as input. Outside that text prompt, it’s just static.
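      A minimal sketch of that feedback loop, assuming some frozen model behind a `generate` call – the function here is a stand-in, not a real API:

      ```python
      # The model is frozen: same weights on every call. Nothing is ever updated.
      def generate(prompt: str) -> str:
          return "…model output for: " + prompt[-40:]  # placeholder for an LLM call

      transcript = ""
      for user_msg in ["Hello!", "What did I just say?"]:
          transcript += f"\nUser: {user_msg}\nAssistant:"
          reply = generate(transcript)  # the growing transcript is the only "memory"
          transcript += " " + reply
          print(reply)
      ```

      Wipe the transcript and the “learning” is gone – the weights never moved.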

      “oh I’m not sure about this so let me mull it over”.

      That’s because it fundamentally can’t mull it over. It’s a feed-forward neural network, meaning everything that goes in on one side comes out on the other in a fixed amount of time. It can’t do loops by itself, and it has no hidden internal monologue. The only dynamic part is the prompt, which is also why its ability to solve problems improves quite a bit when you require it to spell out the steps individually instead of just presenting the answer: that allows the prompt to be its “internal monologue”.
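      Here’s a sketch of that, too – `forward` is again an assumed stand-in: one fixed-cost pass, context in, one token out, with the only loop living outside the model:

      ```python
      def forward(context: list[str]) -> str:
          # One feed-forward pass: a fixed amount of computation, no internal loops.
          return "step" if len(context) < 8 else "<eos>"

      context = ["solve", "this", ":"]
      while (token := forward(context)) != "<eos>":
          context.append(token)  # appended tokens are the model's only "monologue"
      print(" ".join(context))
      ```

      Spelling the steps out in the prompt literally buys the network more forward passes to compute with.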

      • barsoap

        Coming up with questions that would be trivial for any human to answer, but impossible for ChatGPT, is quite tricky.

        Which is why I came up with the “feline poetry” example. It’s quite a simple concept for a human, even one not particularly poetry-inclined, yet if no one has ever written about the concept, it’s going to be an uphill battle for ChatGPT. And, no, I haven’t tried. I also didn’t mean it as some kind of dick-measuring contest; I simply wanted to explain what kind of thing ChatGPT really has trouble with.

        Have you ever actually seen an iceberg, or have you just read about them?

        As a matter of fact, yes, I have. North Cape, Norway.

        ChatGPT doesn’t learn. It’s a completely static model that doesn’t change.

        ChatGPT is also its training procedure, if you ask me – the same way humanity is also its evolution.