• Rhaedas@fedia.io · 3 months ago (edited)

    LLMs are just very complex and intricate mirrors of ourselves, because they draw on our past ramblings to produce the best response to a prompt. They only feel intelligent because we can’t see the inner workings, like the IF/THEN statements of ELIZA, and yet many people were still convinced ELIZA was really talking with them. Humans are wired to anthropomorphize, often to a fault.
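    For anyone who never saw how little was under ELIZA’s hood, here is a toy sketch in Python. The rules and phrasings are invented for illustration; this is not Weizenbaum’s actual DOCTOR script, just the same pattern-match-and-reflect idea:

```python
import random
import re

# ELIZA-style responder: pure pattern matching plus canned reflections,
# no understanding of any kind. Rules here are illustrative inventions.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r".*mother.*", re.I),
     ["Tell me more about your family."]),
]
DEFAULT = ["Please go on.", "I see."]

def respond(text: str) -> str:
    for pattern, replies in RULES:
        m = pattern.match(text.strip())
        if m:
            # Reflect the captured fragment back at the user.
            return random.choice(replies).format(*m.groups())
    return random.choice(DEFAULT)
```

    A handful of rules like these was enough to convince many users in the 1960s that someone was listening.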

    I say that while also believing we may yet develop actual AGI of some sort, which will probably use LLMs as a database to pull from. What’s concerning is that even though LLMs are not “thinking” themselves, we’ve dived in head first, ignoring their many flaws and the dangers of misuse. That says a lot about how we’ll handle problems in AI development, such as the misalignment problem, which has basically been shelved by AI companies in favor of profits and being first.

    HAL from 2001/2010 was a great lesson - it’s not the AI…the humans were the monsters all along.

    • FaceDeer@fedia.io · 3 months ago

      I wouldn’t be surprised if someday when we’ve fully figured out how our own brains work we go “oh, is that all? I guess we just seem a lot more complicated than we actually are.”

      • Rhaedas@fedia.io · 3 months ago

        If anything I think the development of actual AGI will come first and give us insight on why some organic mass can do what it does. I’ve seen many AI experts say that one reason they got into the field was to try and figure out the human brain indirectly. I’ve also seen one person (I can’t recall the name) say we already have a form of rudimentary AGI existing now - corporations.

        • antonim@lemmy.dbzer0.com · 3 months ago

          Something of the sort has already been claimed for language/linguistics, i.e. that LLMs can be used to understand human language production. One linguist wrote a pretty good reply to such claims, which can be summed up as “this is like inventing an airplane and using it to figure out how birds fly”. I mean, who knows, maybe that even could work, but it should be admitted that the approach appears extremely roundabout and very well might be utterly fruitless.

      • skyspydude1@lemmy.world · 3 months ago

        This had an interesting part in Westworld, where at one point they go to a big database of minds that have been “backed up” in a sense, and they’re fairly simple “code books” that define basically all of a person’s behaviors. The first couple seasons have some really cool ideas on how consciousness is formed, even if the later seasons kind of fell apart, IMO.

      • BigMikeInAustin@lemmy.world · 3 months ago

        True.

        That’s why consciousness is “magical,” still. If neurons ultra-basically do IF logic, how does that become consciousness?

        And the same with memory. It can seem to boil down to one memory cell reacting to a specific input. So the idea is called “the grandmother cell.” Is there just 1 cell that holds the memory of your grandmother? If that one cell gets damaged/dies, do you lose memory of your grandmother?

        And ultimately, if thinking is just IF logic, does that mean every decision and thought is predetermined and can be computed, given a big enough computer and all the exact starting values?

        • huginn@feddit.it · 3 months ago

          You’re implying that physical characteristics are inherently deterministic while we know they’re not.

          Your neurons are analog and noisy and sensitive to the tiny fluctuations of random atomic noise.

          Beyond that: they don’t do “if” logic. It’s more like complex combinatorial arithmetic that simultaneously modifies future outputs with every input.
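          To make that contrast concrete, here is a toy sketch. All the constants (threshold, noise scale, learning rate) are arbitrary illustrative choices, not physiological values; the point is only the difference between a fixed gate and a noisy unit whose weights shift with every input:

```python
import random

def if_gate(x: float) -> int:
    # IF-style logic: the same input produces the same output, forever.
    return 1 if x > 0.5 else 0

class PlasticNeuron:
    """Toy noisy unit: every input also rewrites the weights."""

    def __init__(self, weights):
        self.weights = list(weights)

    def fire(self, inputs, noise=0.1, threshold=0.5, lr=0.05):
        # Weighted sum of inputs, plus random noise on the threshold.
        drive = sum(w * x for w, x in zip(self.weights, inputs))
        out = 1 if drive + random.gauss(0, noise) > threshold else 0
        # Hebbian-flavored plasticity: the act of responding nudges the
        # weights, so the same stimulus can answer differently later.
        for i, x in enumerate(inputs):
            self.weights[i] += lr * x * (out - 0.5)
        return out

n = PlasticNeuron([0.2, 0.4])
before = list(n.weights)
n.fire([1.0, 1.0])
changed = n.weights != before  # True: the input itself left a trace
```

          Even this cartoon version is stateful and noisy in a way a lookup table of IF statements is not.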

            • huginn@feddit.it · 3 months ago

              Absolutely! It’s a common misconception about neurons that I see in programming circles all the time. Before my pivot into programming I was pre-med and a physiology TA - I’ve always been interested in neurochemistry and how the brain works.

              So I try and keep up with the latest about the brain and our understanding of it. It’s fascinating.

          • FaceDeer@fedia.io · 3 months ago

            Though I should point out that the virtual neurons in LLMs are also noisy and sensitive, and the noise they use ultimately comes from tiny fluctuations of random atomic noise too.
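            For what it’s worth, the main place randomness visibly enters a deployed LLM is at sampling time. A minimal sketch of temperature sampling over raw logits (pure-Python softmax, illustrative numbers only):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    # Scale logits by 1/temperature, softmax them, then sample.
    # Higher temperature flattens the distribution, so identical
    # inputs can yield different tokens on different runs.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

            At very low temperature this collapses to always picking the highest logit; at high temperature the choice becomes nearly uniform.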

          • DrRatso@lemmy.ml · 3 months ago

            Physics, and more to the point QM, appears probabilistic, but whether or not it is deterministic is still up for debate. Until such a time that we develop a full understanding of QM, we cannot say for sure. Personally I am inclined to think we will find deterministic explanations in QM; it feels like nonsense to say that things could have happened differently. Things happen the way they happen, and if you would rewind time to before an event, it should resolve the same way.

            • huginn@feddit.it · 3 months ago

              Fair - it’s not that we know it’s not: it’s that we don’t know that it is.

              A probabilistic universe is just as likely as a deterministic one - we’ve found absolutely nothing disproving probabilistic models. We’ve only found reinforcement for them.

              It’s unintuitive to humans so of course we don’t want to believe it. It remains to be seen if it’s true.

              • DrRatso@lemmy.ml · 3 months ago

                It’s worth mentioning that certain mainstream interpretations are concretely deterministic. For example, many-worlds is actually a deterministic interpretation: the multiverse as a whole evolves deterministically, and only your particular branch appears probabilistic. Bohmian mechanics is even more explicitly deterministic. The Copenhagen interpretation, however, maintains genuine randomness.

                • huginn@feddit.it · 3 months ago

                  Sure, but interpretations like pilot-wave have more evidence against them than for them, and while many-worlds is deterministic, it’s only technically so. It’s effectively probabilistic, in that everything happens, and therefore nothing is determined strictly by the current state.

        • Richard@lemmy.world · 3 months ago

          Individual cells do not encode any memory. Thinking and memory stem from the great variety and combinational complexity of synaptic interlinks between neurons. Certain “circuit” paths are reinforced over time as they are used. The computation itself (thinking, recalling) then is “just” incredibly complex statistics over millions of synapses. And the most awesome thing is that all this happens through chemical reaction chains catalysed by an enormous variety of enzymes and other proteins, and through electrostatic interactions that primarily involve sodium ions!

        • DrRatso@lemmy.ml · 3 months ago

          Anil Seth has interesting lectures on consciousness, specifically on the predictive processing theory. Under this view the brain essentially simulates reality as a sort of prediction, and this simulated model is what we, subjectively, then perceive as consciousness.

          “Every good regulator of a system must be a model of that system” (the good regulator theorem). In other words, consciousness might exist because, to regulate our bodies and execute different actions, we must have an internal model of ourselves, as well as of ourselves in the world.

          As for determinism - the idea of libertarian free will is not really seriously entertained by philosophy these days. The main question is whether there is any inkling of free will to cling to (compatibilism), but, generally, it is more likely than not that our consciousness is deterministic.

            • DrRatso@lemmy.ml · 3 months ago (edited)

              It’s not that odd if you think about it. Everything else in this universe is deterministic. Well, quantum mechanics, as we observe it, is probabilistic, but still governed by rules and calculable, thus predictable (I also believe it is, in some sense, deterministic). For there to be free will, we would need some form of “special sauce”, yet to be uncovered, that grants us the freedom and agency to act outside of these laws.

    • Hazzard · 3 months ago

      I don’t necessarily disagree that we may figure out AGI, and even that LLM research may help us get there, but frankly, I don’t think an LLM will actually be any part of an AGI system.

      Because fundamentally it doesn’t understand the words it’s writing. The more I play with and learn about it, the more it feels like a glorified autocomplete/autocorrect. I suspect issues like hallucination and “Waluigis” or “jailbreaks” are fundamental issues for a language model trying to complete a story, compared to an actual intelligence with a purpose.

    • frezik@midwest.social · 3 months ago

      I find that a lot of the reasons people put up for saying “LLMs are not intelligent” are wishy-washy, vague, untestable nonsense. It’s rarely something where we can put a human and ChatGPT together in a double-blind test and have the results clearly show that one meets the definition and the other does not. Now, I don’t think we’ve actually achieved AGI, but more for general Occam’s Razor reasons than something more concrete; it seems unlikely that we’ve achieved something so remarkable while understanding it so little.

      I recently saw this video lecture by a neuroscientist, Professor Anil Seth:

      https://royalsociety.org/science-events-and-lectures/2024/03/faraday-prize-lecture/

      He argues that our language is leading us astray. Intelligence and consciousness are not the same thing, but the way we talk about them with AI tends to conflate the two. He gives examples of where our consciousness leads us astray, such as seeing faces in clouds. Our consciousness seems to really like pulling faces out of false patterns. Hallucinations would be the times when the error correcting mechanisms of our consciousness go completely wrong. You don’t only see faces in random objects, but also start seeing unicorns and rainbows on everything.

      So when you say that people were convinced that ELIZA was an actual psychologist who understood their problems, that might be another example of our own consciousness giving the wrong impression.

      • vcmj@programming.dev · 3 months ago

        Personally, my threshold for intelligence versus consciousness is determinism (not in the physics sense… that’s a whole other kettle of fish). I’d consider all “thinking things” machines, but if a machine responds to input in always the same way, then it is non-sentient, whereas if it incurs an irreversible change on receiving any input that can affect its future responses, then it has potential for sentience. LLMs can do continuous learning for sure, which may give the impression of sentience (whispers which we are longing to find and want to believe, as you say), but the actual machine you interact with is frozen, hence it is purely an artifact of sentience. I consider books and other works to be in the same category.

        I’m still working on this definition, again just a personal viewpoint.
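        A toy sketch of that distinction (class names invented, nothing LLM-specific): a frozen mapping always answers the same way, while a stateful one carries an irreversible trace of every input into its future responses.

```python
class FrozenModel:
    """Deployed-LLM analogy: parameters fixed after training, so the
    mapping from prompt to response never changes."""

    def __init__(self, table):
        self.table = dict(table)

    def respond(self, prompt):
        return self.table.get(prompt, "…")

class AdaptiveModel:
    """Under the criterion above, sentience needs something like this:
    each input leaves a trace that shapes every later response."""

    def __init__(self):
        self.history = []

    def respond(self, prompt):
        self.history.append(prompt)  # irreversible change of state
        return f"{prompt} (seen {self.history.count(prompt)} times)"
```

        The frozen model can be queried forever without ever being changed by the conversation; the adaptive one cannot.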

          • vcmj@programming.dev · 3 months ago

            I read this question a couple times, initially assuming bad faith, even considered ignoring it. The ability to change, would be my answer. I don’t know what you actually mean.

              • vcmj@programming.dev · 3 months ago

                I do think we’re machines, I said so previously, I don’t think there is much more to it than physical attributes, but those attributes let us have this discussion. Remarkable in its own right, I don’t see why it needs to be more, but again, all personal opinion.

    • GregorGizeh@lemmy.zip · 3 months ago

      It isn’t so much “we” as in humanity; it is a select few very ambitious and very reckless corpos who are pushing for this, to the detriment of the rest (surprise).

      If “we” were able to rein in our capitalists, we could develop the technology much more ethically and in compliance with the public good. But no, we leave the field to corpos with delusions of grandeur (does anyone remember the short spat within the OpenAI leadership? Altman got thrown out for recklessness, investors and some employees complained, he came back, and the whole more considerate and careful wing of the project got ousted).

    • MonkderDritte@feddit.de · 3 months ago

      LLMs are just very complex and intricate mirrors of ourselves because they use our past ramblings to pull from for the best responses to a prompt. They only feel like they are intelligent because we can’t see the inner workings

      Almost like children.