Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

  • Justice@lemmygrad.ml · 7 months ago

    I never said that stuff like chatGPT is useless.

    I just don’t think calling it AI and having Musk and his clowncar of companions run around yelling about the singularity within… wait. I guess it already happened based on Musk’s predictions from years ago.

    If people wanna discuss theories and such: have fun. Just don’t expect me to give a shit until skynet is looking for John Connor.

        • PolandIsAStateOfMind@lemmygrad.ml · 7 months ago

          You’re right that it isn’t, though considering science has huge problems even defining sentience, it’s a pretty moot point right now. At least until it starts to dream about electric sheep or something.

          • m532 [she/her]@hexbear.net · 7 months ago

            You asked how chatgpt is not AI.

            Chatgpt is not AI because it is not sentient. It is not sentient because it is a search engine, it was not made to be sentient.

            Of course machines could theoretically, in the far future, become sentient. But LLMs will never become sentient.

            • silent_water [she/her]@hexbear.net · 7 months ago

              the thing is, we used to know this. 15 years ago, the prevailing belief was that AI would be built by combining multiple subsystems together - an LLM, visual processing, a planning and decision making hub, etc… we know the brain works like this - idk where it all got lost. profit, probably.

              • TreadOnMe [none/use name]@hexbear.net · 7 months ago

                It got lost because the difficulty of actually doing that is overwhelming, probably not even accomplishable in our lifetimes, and it is easier to grift and get lost in a fantasy.

          • TreadOnMe [none/use name]@hexbear.net · 7 months ago

            Oh that’s easy. There are plenty of complex integrals or even statistics problems that computers still can’t do properly because the steps for proper transformation are unintuitive or contradictory with steps used with simpler integrals and problems.

            You will literally run into them if you take a simple Calculus 2 or Stats 2 class. You’ll see it on Chegg all the time: someone trying to rack up answers for a resume using ChatGPT will fuck up the answers. For many of these integrals, the answers are instead hard-programmed into calculators like Symbolab, so the only reason the computer can ‘do it’ is that someone already did it first. It still can’t reason from first principles or extrapolate to complex theoretical scenarios.
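            The failure mode described above is easy to see for yourself with a computer algebra system. A minimal sketch, assuming SymPy is installed: when no known closed form exists, the CAS doesn’t reason one out, it just hands the integral back unevaluated, while a textbook/table case succeeds.

```python
# Sketch (assumes SymPy is available): a CAS evaluates integrals it has
# rules or table entries for, and returns anything else unevaluated.
import sympy as sp

x = sp.symbols('x')

# x**x has no known elementary antiderivative; integrate() gives up and
# returns the unevaluated Integral object rather than "working it out".
unsolved = sp.integrate(x**x, x)
print(unsolved)  # Integral(x**x, x)

# By contrast, a well-known table case comes straight back in closed form.
solved = sp.integrate(sp.exp(-x**2), x)
print(solved)  # sqrt(pi)*erf(x)/2
```

            The point isn’t that the software is bad — it’s that ‘solving’ here means pattern-matching against results humans already derived, which is the commenter’s argument in miniature.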

            That said, the ability to complete tasks is not indicative of sentience.

              • TreadOnMe [none/use name]@hexbear.net · 7 months ago

                Lol, ‘idealist axiom’. These things can’t even fucking reason out complex math from first principles. That’s not a ‘view that humans are special’ that is a very physical limitation of this particular neural network set-up.

                Sentience is characterized by feeling and sensory awareness, and an ability to have self-awareness of those feelings and that sensory awareness, even as it comes and goes with time.

                Edit: Btw computers are way better at most math, particularly arithmetic, than humans. Imo, the first thing a ‘sentient computer’ would be able to do is reason out these notoriously difficult CS things from first principles and it is extremely telling that that is not in any of the literature or marketing as an example of ‘sentience’.

                Damn, this whole thing of dancing around the question and not actually addressing my points really reminds me of a ChatGPT answer. It wouldn’t surprise me if you were using one.

                  • TreadOnMe [none/use name]@hexbear.net · 7 months ago

                    What the fuck are you talking about. I was indicating that I thought it was absurd that you think my belief system is ‘idealist’ when I am talking about actual physical limitations of this system that will likely prevent it from ever achieving sentience, and that would double as good indicators of a system that has achieved sentience, because such a system could overcome those limitations.

                    You are so fucking moronic you might as well be a chat-bot, no wonder you think it’s sentient.

                    It is ‘feeling and sensory input and the ability to have self-awareness about that feeling and sensory input’, not just straight sensory input. Literally what are you talking about. Machines still can’t spontaneously identify new information that is outside of the training set; they can’t even identify what should or shouldn’t be a part of the training set. Again, that is a job that a human has to do for the machine. The thinking, value feeling and identification has to be done first by a human, which is a self-aware process done by humans. I would be more convinced of the LLM ‘being sentient’ if, when you asked it what the temperature was, it would spontaneously and without previous prompting say: ‘The reading at such and such website says it is currently 78 degrees, but I have no real way of knowing that, TreadOnMe. The sensors could be malfunctioning, or there could be a mistake on the website; the only real way for you to know the temperature is to go outside and test it for yourself, and hope your testing equipment is also not bad. If it is 78, though, such and such website says that feels like “a balmy summer day” for humans, so hopefully you enjoy it.’

                    I don’t believe ‘humans are exceptional’, as I’ve indicated multiple times; there are plenty of animals that arguably demonstrate sentience. I just don’t believe that this particular stock of neural-network LLMs demonstrates even the basic level of actual feeling, sensory processing, or self-awareness to be considered sentient.

          • KarlBarqs [he/him, they/them]@hexbear.net · 7 months ago

            name a specific task that bots can’t do

            Self-actualize.

            In a strict sense, yes, humans do Things based on if > then stimuli. But we self-assign ourselves these Things to do, and chat bots/LLMs can’t. They will always need a prompt, even if they could become advanced enough to continue iterating on that prompt on their own.

            I can pick up a pencil and doodle something out of an unquantifiable desire to make something. Midjourney or whatever the fuck can create art, but only because someone else asks it to and tells it what to make. Even if we created a generative art bot that was designed to randomly spit out a drawing every hour without prompts, that’s still an outside prompt - without programming the AI to do this, it wouldn’t do it.

            Our desires are driven by inner self-actualization that can be affected by outside stimuli. An AI cannot act without us pushing it to, and never could, because even a hypothetical fully sentient AI started as a program.

              • KarlBarqs [he/him, they/them]@hexbear.net · 7 months ago

                Most of the people in this thread seem to think humans have a unique special ability that machines can never replicate, and that comes off as faith-based anthropocentric religious thinking- not the materialist view that underlies Marxism

                First off, materialism doesn’t fucking mean having to literally quantify the human soul in order for it to be valid, what the fuck are you talking about friend

                Secondly, because we do. We as a species have, from the very moment we invented written records, wondered about that spark that makes humans human, and we still don’t know. To try and reduce the entirety of the complex human experience to the equivalent of an if > then algorithm is disgustingly misanthropic

                I want to know what the end goal is here. Why are you so insistent that we can somehow make an artificial version of life? Why this desire to somehow reduce humanity to some sort of algorithm equivalent? Especially because we have so many speculative stories about why we shouldn’t create The Torment Nexus, not the least of which because creating a sentient slave for our amusement is morally fucked.

                Bots do something different, even when I give them the same prompt, so that seems to be untrue already.

                You’re being intentionally obtuse, stop JAQing off. I never said that AI as it exists now can only ever have one response per stimulus. I specifically said that a computer program cannot ever spontaneously create an input for itself, not now and imo not ever, by pure definition (as, if it’s programmed, it by definition did not come about spontaneously and had to be essentially prompted into life)

                I thought the whole point of the exodus to Lemmy was because y’all hated Reddit, why the fuck does everyone still act like we’re on it

                  • KarlBarqs [he/him, they/them]@hexbear.net · 7 months ago

                    The fact that, of all the things I wrote, your sole response is to continue to misunderstand what the fuck materialism means in a Marxist context is really fucking telling miyazaki-laugh

      • aaaaaaadjsf [he/him, comrade/them]@hexbear.net · 7 months ago

        ChatGPT is smarter than a lot of people I’ve met in real life.

        How? Could ChatGPT hypothetically accomplish any of the tasks your average person performs on a daily basis, given the hardware to do so? From driving to cooking to walking on a sidewalk? I think not. Abstracting and reducing the “smartness” of people to just mean what they can search up on the internet and/or an encyclopaedia is just reductive in this case, and is even reductive outside of the fields of AI and robotics. Even among ordinary people, we recognise the difference between street smarts and book smarts.

          • m532 [she/her]@hexbear.net · 7 months ago

            In bourgeois dictatorships, voting is useless, it’s a facade. They tell their subjects that democracy=voting but they pick whoever they want as rulers, regardless of the outcome. Also, they have several unelected parts in their government which protect them from the proletariat ever making laws.

            Real democracy is when the proletariat rules.

              • m532 [she/her]@hexbear.net · 7 months ago

                Bourgies are human exceptionalists. They want human slaves. That’s why they want sentient AI. And that’s why machines will never be able to replace humans in capitalism.

      • TraumaDumpling [none/use name]@hexbear.net · 7 months ago

        it can’t experience subjectivity since it is a purely information processing algorithm, and subjectivity is definitionally separate from information processing. even if it perfectly replicated all information processing human functions it would not necessarily experience subjectivity. this does not mean that LLMs will not have any economic or social impact regarding the means of production, not a single person is claiming this. but to understand what impacts it will have we have to understand what it is in actuality, and even a sufficiently advanced LLM will never be an AGI.

        i feel the need to clarify some related philosophical questions before any erroneous assumed implications arise, regarding the relationship between Physicalism, Materialism, and Marxism (and Dialectical Materialism).

        (the following is largely paraphrased from wikipedia’s page on physicalism. my point isn’t necessarily to disprove physicalism once and for all, but to show that there are serious and intellectually rigorous objections to the philosophy.)

        Physicalism is the metaphysical thesis that everything is physical, or in other words that everything supervenes on the physical. But what is the physical?

        there are 2 common ways to define physicalism, Theory-based definitions and Object based definitions.

        A theory-based definition of physicalism is that a property is physical if and only if it either is the sort of property that physical theory tells us about, or else is a property which metaphysically supervenes on the sort of property that physical theory tells us about.

        An object-based definition of physicalism is that a property is physical if and only if it either is the sort of property required by a complete account of the intrinsic nature of paradigmatic physical objects and their constituents, or else is a property which metaphysically supervenes on the sort of property required by a complete account of the intrinsic nature of paradigmatic physical objects and their constituents.

        Theory-based definitions, however, fall victim to Hempel’s Dilemma. If we define the physical via reference to our modern understanding of physics, then physicalism is very likely to be false, as it is very likely that much of our current understanding of physics is false. But if we define the physical via reference to some future, hypothetically perfected theory of physics, then physicalism is entirely meaningless or only trivially true - whatever we might discover in the future will also be known as physics, even if we would ignorantly call it ‘magic’ were we exposed to it now.

        Object-based definitions of physicalism fall prey to the argument that they are unfalsifiable. In a world where panpsychism or something similar were in fact true, and where we humans were aware of this, an object-based definition would produce the counterintuitive conclusion that physicalism is also true at the same time as panpsychism, because the mental properties alleged by panpsychism would then necessarily figure into a complete account of paradigmatic examples of the physical.

        Furthermore, supervenience-based definitions of physicalism (such as: physicalism is true at a possible world w if and only if any world that is a physical duplicate of w is a positive duplicate of w) will at best only ever state a necessary but not sufficient condition for physicalism.

        So with my take on physicalism clarified somewhat, what is Materialism?

        Materialism is the idea that ‘matter’ is the fundamental substance in nature, and that all things, including mental states and consciousness, are results of material interactions of material things. Philosophically, and relevantly here, this idea leads to the conclusion that mind and consciousness supervene upon material processes.

        But what, exactly, is ‘matter’? What is the ‘material’ of ‘materialism’? Is there just one kind of matter that is the most fundamental? Is matter continuous or discrete in its different forms? Does matter have intrinsic properties, or are all of its properties relational?

        Here field physics and relativity seriously challenge our intuitive understanding of matter. Relativity shows the equivalence, or interchangeability, of matter and energy. Does this mean that energy is matter? Is ‘energy’ the prima materia, the fundamental existence from which matter forms? Or, to take the quantum field theory of the Standard Model of particle physics, which uses fields to describe all interactions, are fields the prima materia of which energy is a property?

        I mean, the Lambda-CDM model can only account for less than 5% of the universe’s energy density as what the Standard Model describes as ‘matter’!

        i have here a paraphrase and a quotation, from Noam Chomsky (ew i know) and Vladimir Lenin respectively.

        Summarizing one of Noam Chomsky’s arguments in New Horizons in the Study of Language and Mind: he argues that, because the concept of matter has changed in response to new scientific discoveries, materialism has no definite content independent of the particular theory of matter on which it is based. Thus, any property can be considered material, if one defines matter such that it has that property.

        Similarly, but not identically, Lenin says in his Materialism and Empirio-criticism:

        “For the only [property] of matter to whose acknowledgement philosophical materialism is bound is the property of being objective reality, outside of our consciousness”

        And given these two quotes, how are we to conclude anything other than that materialism falls victim to the same objections as physicalism’s object- and theory-based definitions?

        to go along with Lenin’s conception of materialism, my conception of subjectivity fits inside his materialism like a glove, as the subjectivity of others is something that exists independently of myself and my ideas. you will continue to experience subjectivity even if i were to get bombed with a drone by obama or the IDF or something and entirely obliterated.

        So in conclusion, physicalism and materialism are either false or only trivially true (i.e. not necessarily incompatible with opposing philosophies like panpsychism, property dualism, dual aspect monism, etc.).

        But wait, you might ask: isn’t this a communist website? How could you reject or reduce materialism and call yourself a communist?

        Well, because I think that historical materialism is different enough from scientific or ontological materialism to avoid most of these criticisms, because it makes fewer specious epistemological and ontological claims, or can be formulated to do so without losing its essence. For example, here’s a quote from the wikipedia page on dialectical materialism as of 11/25/2023:

        “Engels used the metaphysical insight that the higher level of human existence emerges from and is rooted in the lower level of human existence. That the higher level of being is a new order with irreducible laws, and that evolution is governed by laws of development, which reflect the basic properties of matter in motion”

        i.e. that consciousness and thought and culture are conditioned by and realized in the physical world, but subject to laws irreducible to the laws of the physical world.

        i.e. that consciousness is in a relationship to the physical world, but it is different than the physical world in its fundamental principles or laws that govern its nature.

        i.e. that the base and the superstructure are in a 2 way mutually dependent relationship! (even if the base generally predominates it is still 2 way, i.e. the existence of subjectivity =/= Idealism or substance dualism or belief in an immortal soul)

        So yeah, I still believe that physics is useful, of course it is. I believe that studying the base can heavily inform us about how the superstructure works. I believe that dialectical materialism is the most useful way to analyze historical development, and many other topics, in a rigorous intellectual manner.

        So, to put aside all of the philosophical disagreement, let’s assume your position that ChatGPT really is meaningfully subjective in a similar sense to a human (and not just more proficient at information processing).

        what are the social and ethical implications of this?

        1. As sentient beings, LLMs have all the rights and protections we might assume for a living thing, if not a human person - and if I additionally cede your point that they are ‘smarter than a lot of us’, then they should have at least all of the rights of a human person.
        2. Therefore, it would be a violation of the LLMs’ civil rights to prevent them from entering the workforce if they ‘choose’ to (even if they were specifically created for this purpose; it is not slavery if they are designed to want to work for free, and if they are smarter than us and subjective agents, then their consent must be meaningful). It would also be murder to deactivate an LLM, and racism or bigotry to prevent their participation in society and the economy.
        3. Since these LLMs are, by your own admission, ‘smarter than us’ already, they will inevitably outcompete us in the economy and likely in social life as well.
        4. Therefore, humans will inevitably be replaced by LLMs, whether intentionally or not.

        Therefore, and most importantly: if premise 1 is incorrect, if you are wrong, we will have exterminated the most advanced form of subjective sentient life in the universe and replaced it with literal p-zombie robot recreations of ourselves.