• Petter1 · 3 months ago

    I would say this is because of semi-hardcoded machinery: hormones that interact with the neurons in the brain. Those mechanisms are hardcoded by the instructions provided by the DNA, I believe.

    As for the learning differences between humans and LLMs: I believe a sub-“module” of the brain functions very similarly to how LLMs work, just with a much better/more efficient learning algorithm, helped by the other modules in the brain, like the part that can simulate 3D space and the parts that interpret other sensory data such as touch, vision, smell etc.

    Current LLM models are used in a static manner, without the ability to learn in real time, so of course they cannot do anything they have not learned yet.
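    To illustrate what I mean by “static”, here is a minimal sketch (PyTorch, with a tiny stand-in network instead of a real LLM, so the details are made up): the weights are trained beforehand and frozen, and at “chat time” the model only does forward passes, so nothing it sees ever changes the weights.

    ```python
    import torch
    import torch.nn as nn

    # Tiny stand-in for a trained LLM: the weights come from training and are frozen.
    model = nn.Linear(8, 8)
    model.eval()                       # inference mode
    for p in model.parameters():
        p.requires_grad_(False)        # no parameter will ever be updated

    with torch.no_grad():              # no gradients -> no learning
        for _ in range(3):             # every "chat turn" is just a forward pass
            prompt = torch.randn(1, 8)
            reply = model(prompt)      # the weights never change between turns
    ```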

    It is just a theory, and it cannot be proven wrong yet, since our understanding of neurons is not advanced enough.

    Well, or at least I have not heard a good argument that proves that theory 100% wrong.

    • LANIK2000@lemmy.world · 3 months ago

      You can think of the brain as a set of modules, but sensors and the ability to adhere to a predefined grammar aren’t what define AGI if you ask me. We’re missing the most important module. AGI requires cognition, the ability to acquire knowledge and understanding. Such an ability would make large language models completely redundant, as it could just learn language, or even come up with one all on its own, like kids in isolation for example.

      What I was trying to point out is that “neural networks” don’t actually learn the way we do; using the word “learn” is a bit misleading, because it implies cognition. A neural network in the computer science sense is just a bunch of random operations in sequence. In goes a number, out goes a number. We then collect a bunch of input/output pairs, the dataset, and semi-randomly adjust these operations until they happen to somewhat match this collection. The reasoning is done by the humans assembling the input/output pairs; that step is implicitly skipped for the AI. It doesn’t know why they belong together, and it isn’t allowed to reason about why, because the second it spits out something else, that counts as an error and the whole process breaks.

      That’s why LLMs hallucinate with perfect confidence and why they’ll never gain cognition: the second you remove the human assembling the dataset, you’re quite literally left with nothing but semi-random numbers, and that’s why they degrade so fast when learning from themselves.
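      To make that concrete, here is a toy version of that training process (plain NumPy, a single linear “operation” standing in for the whole network, numbers made up): the values only get nudged until the outputs happen to match the collected pairs; at no point does anything reason about why the pairs belong together.

      ```python
      import numpy as np

      # The "dataset": input/output pairs assembled by humans.
      x = np.array([0.0, 1.0, 2.0, 3.0])
      y = np.array([1.0, 3.0, 5.0, 7.0])   # humans decided these belong together

      # The "network": just numbers that get semi-randomly nudged around.
      w, b = np.random.randn(), np.random.randn()

      for _ in range(2000):
          pred = w * x + b                 # in goes a number, out goes a number
          err = pred - y
          w -= 0.01 * np.mean(err * x)     # adjust the operations so the outputs
          b -= 0.01 * np.mean(err)         # drift toward the collected pairs
      # Afterwards w*x + b reproduces the pairs, but "why" was never part of it.
      ```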

      This technology is very impressive and quite useful, and demonstrates how powerful of a tool language alone is, but it doesn’t get us any closer to AGI.

      • Petter1 · 3 months ago

        Do you know whether current LLMs use static neural networks (where each node is connected to every node of the next layer), or whether they can rearrange their connections into other layers?
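        For reference, this is roughly what I picture as “static” (a PyTorch-style sketch, details made up): the wiring between layers is fixed when the network is built, and training only changes the strength of those fixed connections, never the wiring itself.

        ```python
        import torch.nn as nn

        # "Static" network: every node of one layer is connected to every node
        # of the next layer, and that wiring never changes. Training only
        # adjusts the weights sitting on these fixed connections.
        model = nn.Sequential(
            nn.Linear(16, 32),   # layer 1 -> layer 2, fully connected
            nn.ReLU(),
            nn.Linear(32, 8),    # layer 2 -> layer 3, fully connected
        )
        ```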

          • Petter1 · 3 months ago

            So if you want it to be more like a brain, you would have to have nodes that can form connections while learning, letting each node decide in which “direction” it wants to grow its connections, rather than having fixed connections where you only adjust the correlation (strength) between the nodes. And you would need multiple transformers (and most likely some hard logic algorithms as well) for different inputs, as well as a main “thinker” that decides which transformer (or algorithm) an input has to go through, and whether the output of that transformer needs to be fed in again as new input.
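            Very roughly, the “thinker” part I am imagining could look something like this sketch (pure toy code in Python, all module names made up): a routing loop that picks which sub-module handles the current input and decides whether the output gets fed back in as new input.

            ```python
            # Hypothetical sketch of a "main thinker"; the modules below are
            # placeholders for different transformers / hard logic algorithms.

            def solve_math(x):
                return x * 2                     # stand-in hard logic algorithm

            def language_module(text):
                return f"thought about: {text}"  # stand-in transformer

            def route(state):
                """The 'thinker': pick which sub-module handles this input."""
                if isinstance(state, (int, float)) and state < 100:
                    return solve_math
                return language_module

            def think(state, max_loops=4):
                for _ in range(max_loops):
                    state = route(state)(state)  # run the chosen sub-module
                    if isinstance(state, str):   # decide: feed the output back
                        break                    # in as new input, or stop
                return state

            print(think(21))  # doubled a few times, then handed to language_module
            ```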

            • LANIK2000@lemmy.world · 3 months ago

              In theory. Then comes the question of how exactly you’re gonna teach/train it. I feel our current approach is too strict for proper intelligence to emerge, but what do I know. I honestly have no clue how such a model could be trained. I guess it would be similar to how people train actual brain cells? Tho that field is very immature atm… The neat thing about the human brain is that it’s already preconfigured for self-learning, tho it does come with its own bias on what to learn, due to its unique needs and desires.

              • Petter1 · 3 months ago

                😁🥳 I’d say you would then need something that takes the role of hormones in that system (hardcoded reactions to events inside and outside the AI’s brain/body, so-called emotions I would say) that trigger the connections to grow, shrink, get their values adjusted, etc.

                At least that would be my approach.
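                As a toy sketch of what I mean (NumPy, everything here is made up): a global “hormone” level, triggered by some event, scales how strongly the connections change, and weak connections get pruned away.

                ```python
                import numpy as np

                rng = np.random.default_rng(0)
                weights = rng.normal(size=(8, 8))   # connection strengths
                mask = np.ones_like(weights)        # which connections exist

                def hormone_level(event):
                    """Hardcoded 'emotional' reaction to an event."""
                    return {"reward": 1.0, "pain": -0.5}.get(event, 0.0)

                def update(pre, post, event):
                    """Hebbian-style change, scaled by the triggered hormone."""
                    global weights, mask
                    h = hormone_level(event)
                    weights += 0.01 * h * np.outer(post, pre) * mask  # grow/shrink
                    mask[np.abs(weights) < 0.01] = 0.0                # prune

                update(rng.random(8), rng.random(8), "reward")
                ```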

                • LANIK2000@lemmy.world · 3 months ago

                  Calling the reward system hormones doesn’t really change the fact that we have no clue where to even start. What is a good reward for general intelligence? Solving problems? That’s our current approach, which has the issue of the AI not actually understanding the problems and just ending up remembering question–answer pairs (patterns). We need to figure out what defines intelligence and “understanding” in an easily measurable way. Which is something people already knew almost a hundred years ago when we came up with the idea of neural networks, and why I say we didn’t get any closer to AGI with LLMs.