• EatATaco · 1 month ago

    They have to be either trained with new data or their internal structure has to be improved. It’s an offline process, meaning they don’t learn through chat sessions we have with them (if you open a new session it will have forgotten what you told it in a previous session), and they can’t learn through any kind of self-directed research process like a human can.

    Most human training is done through the guidance of another. Additionally, most LLM training is an automated process where a computer is just churning through data. And while you are correct that the context does not carry over from one session to the next, you can in fact teach it something and it will maintain it during the session. Moving to a new session is just like talking to a completely different person, so you're basically arguing, "Well, I explained this one thing to one human, and this other human doesn't know it. . . so how can you claim it's thinking?" And just imagine the disaster if you let it be trained by anyone on the web. It would be spitting out memes, racism, and right-wing propaganda within days. lol

    They don’t think or understand in any way, full stop.

    I just gave you an example where this appears to be untrue. There is something that looks like understanding going on. Maybe it's not, and I'm not claiming to know, but I have not seen a convincing argument for why it isn't. Saying "full stop" instead of offering an actual argument just indicates to me that you are really saying "stop thinking." And I apologize, but that's not how I roll.

    • insaan@leftopia.org · 1 month ago

      Most human training is done through the guidance of another

      Let’s take a step back and not talk about training at all, but about spontaneous learning. A baby learns about the world around it by experiencing things with its senses. Babies learn a language, for example, simply by hearing it and making connections: they get corrected when they're wrong, yes, but they are not trained in language until they've already learned to speak it. And once they are taught how to read, they can explore the world through signs, books, the internet, etc., in a way that is often self-directed. More than that, humans learn at every moment as they interact with the world around them and with the written word.

      An LLM is a static model created through exposure to lots and lots of text. It is trained and then used. To add to the model requires an offline training process, which produces a new version of the model that can then be interacted with.
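
      To make this concrete, here is a minimal sketch of that train-then-freeze lifecycle (illustrative PyTorch, not any actual LLM's code; the tiny linear layer stands in for a huge transformer):

          import torch
          import torch.nn as nn

          model = nn.Linear(8, 8)  # stand-in for a billion-parameter model

          # Offline training phase: the weights change.
          optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
          for _ in range(100):
              batch = torch.randn(4, 8)           # stand-in for tokenized text
              loss = model(batch).pow(2).mean()   # stand-in for next-word loss
              optimizer.zero_grad()
              loss.backward()
              optimizer.step()

          # Deployment: the weights are frozen and only ever read.
          model.eval()
          for p in model.parameters():
              p.requires_grad_(False)

          # Every chat session queries the same frozen weights; nothing a
          # user types during a session writes back into the model.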

      you can in fact teach it something and it will maintain it during the session

      It’s still not learning anything. LLMs have what’s known as a context window: text that supplements the model's input for a given session. It’s still just text that is used as part of the response process.
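
      A rough sketch of what a session actually is (names like generate and MAX_CHARS are made up for illustration, and real systems count tokens rather than characters):

          history = []  # the entire "memory" of a session is this text
          MAX_CHARS = 4096  # fixed context size; old text falls off the front

          def generate(prompt):
              # Stand-in for the frozen model producing a reply from text.
              return "(reply conditioned on %d chars of context)" % len(prompt)

          def chat_turn(user_message):
              history.append("User: " + user_message)
              prompt = "\n".join(history)[-MAX_CHARS:]  # truncate to the window
              reply = generate(prompt)
              history.append("Assistant: " + reply)
              return reply

          # A new session starts with history = []; everything "taught"
          # in the old session is simply gone.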

      They don’t think or understand in any way, full stop.

      I just gave you an example where this appears to be untrue. There is something that looks like understanding going on.

      You seem to have ignored the preceding sentence: “LLMs are sophisticated word generators.” This is the crux of the matter. They simply do not think, much less understand. They are simply taking the text of your prompts (and the text from the context window) and generating more text that is likely to be relevant. Sentences are generated word-by-word using complex math (heavy on linear algebra and probability) where the generation of each new word takes into account everything that came before it, including the previous words in the sentence it’s a part of. There is no thinking or understanding whatsoever.
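
      That word-by-word process looks roughly like this (a toy sketch with made-up probabilities; a real model computes the distribution with a neural network over the entire context):

          import random

          def next_word_distribution(context):
              # A real LLM derives these probabilities from its weights
              # and everything in the context; here they're hard-coded.
              if context and context[-1] == "the":
                  return {"cat": 0.5, "dog": 0.3, "answer": 0.2}
              return {"the": 0.6, "a": 0.4}

          words = ["the"]
          for _ in range(5):
              dist = next_word_distribution(words)
              choices, weights = zip(*dist.items())
              words.append(random.choices(choices, weights=weights)[0])

          print(" ".join(words))  # plausible text, no understanding required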

      This is why Voroxpete@sh.itjust.works said in the original post to this thread, “They hallucinate all answers. Some of those answers will happen to be right.” LLMs have no way of knowing if any of the text they generate is accurate for the simple fact that they don’t know anything at all. They have no capacity for knowledge, understanding, thought, or reasoning. Their models are simply complex networks of words that are able to generate more words, usually in a way that is useful to us. But often, as the hallucination problem shows, in ways that are completely useless and even harmful.

      • EatATaco · 1 month ago

        An LLM is a static model created through exposure to lots and lots of text. It is trained and then used. To add to the model requires an offline training process, which produces a new version of the model that can then be interacted with.

        But this is a deliberate decision, not an inherent limitation. The model could get feedback from the outside world; in fact, that's how it's trained (well, data is fed back into the model to update it). Of course we are limiting it to words, rather than the whole slew of inputs a human gets. But keep in mind we have music- and image-generation AI as well, so it's not as if it couldn't also be trained on those things. Again, a deliberate decision rather than an inherent limitation.

        We both even agree that it can learn from interacting with the world; you just insist that because the learning isn't persisting, it doesn't actually count. But it does persist, just not the new inputs from users, and this is done deliberately to protect the models from what would inevitably happen. That being said, it's also been fed arguably more input than a human would get in a whole lifetime, just condensed into a much smaller period of time. So if the measure is "total input," then the AI is going to win, hands down.
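
        Something like this hypothetical pipeline is all I mean (is_safe and fine_tune are made-up names; the point is that feeding user chats back in is a moderated choice, not an impossibility):

            logged_chats = [
                "helpful exchange about stacking objects stably",
                "meme spam and right-wing propaganda",  # the disaster scenario
            ]

            def is_safe(chat):
                # Stand-in for human review / automated content filtering.
                return "propaganda" not in chat

            curated = [c for c in logged_chats if is_safe(c)]
            # fine_tune(model, curated) would then run offline, producing a
            # new model version: the learning persists, just not live and
            # not unfiltered.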

        You seem to have ignored the preceding sentence: “LLMs are sophisticated word generators.”

        I’m not ignoring this. I understand that it's the whole argument; it gets repeated around here enough. Just saying it doesn't make it true, however. It may be true (again, I'm not sure), but simply stating it and adding "full stop" doesn't amount to a convincing argument.

        They simply do not think, much less understand.

        It’s not as open-and-shut as you wish it to be. If anyone is ignoring anything here, it's you, ignoring the fact that it went from, as you said, basically just randomly stacking objects it was told to stack stably, to actually doing so in a way that could work and describing why you would do it that way. Additionally, there is another case where they asked GPT-4 to draw a unicorn using an obscure programming language. And you know what? It did it. It was rudimentary, but it was clearly a unicorn, from a model that wasn't trained on images at all. They even messed with the code, turning the unicorn around and removing the horn, fed it back in, and then asked it to replace the horn, and it put it back on correctly. It seemed to understand not only what a unicorn looked like, but what the horn was and where it should go when it was removed.

        So to say it can just "generate more words" is an accusation you could level at us as well, or it's possibly just an overly reductive description of what it's capable of even now.

        But often, as the hallucination problem shows, in ways that are completely useless and even harmful.

        There are all kinds of problems with human memory; we imagine things all the time. Have you ever taken acid? If so, you would see how unreliable our brains are at interpreting reality. And you want to really trip? Eyewitness testimony is basically garbage. I exaggerate a bit, but there are so many flaws with it, with people remembering things that didn't happen and false memories being so easy to create, that it's not as reliable as it should be. Hell, it can even be harmful, convicting an innocent person.

        Every shortcoming you've used to claim AI isn't really thinking is something it shares with us. It might just be inherent to intelligence to be wrong sometimes.

        • feedum_sneedson@lemmy.world · 1 month ago

          It’s exciting either way. Maybe it’s equivalent to a certain lobe of the brain, and we’re judging it for not being integrated with all the other parts.