• PM_ME_YOUR_ZOD_RUNES@sh.itjust.works
    10 months ago

    I disagree that it has limited uses, and I do believe it is a big step towards science-fiction-level AI. I use it almost every day. It’s great for so many things: cooking, spelling and grammar, coding, brainstorming, and looking up information, to name a few.

    I’m pretty tech savvy but know nothing about coding. Using ChatGPT, I was able to create VBA code for work that will save me and my team hundreds of hours per year. It took a lot of time, patience and troubleshooting, but I managed to get something that suits our needs exactly and functions as I want it to. I would never have done this otherwise. ChatGPT made it possible.

    I will admit that it has limitations and can be quite stupid. It won’t do everything and you have to help it along sometimes. But at the end of the day, it is a powerful tool once you learn how to use it.

    • ImFresh3x@sh.itjust.works
      10 months ago

      How do you use it for cooking? I can’t imagine it’s better than having an actual recipe written by someone you trust.

      And for grammar, I find Grammarly to be way better.

      • PM_ME_YOUR_ZOD_RUNES@sh.itjust.works
        10 months ago

        Because you can ask it questions about the recipe it gives. It also gets straight to the point, unlike pretty much every online recipe.

        But for the most part I don’t really follow recipes, so I rarely use it for that. It’s mostly questions about cooking techniques, timings and advice.

    • Brocken40@sh.itjust.works
      10 months ago

      It’s not really a step towards sci-fi-level AI; it’s just a slightly more advanced version of clicking on the first autopredicted word when you type a sentence on your cell phone. The code you needed already existed and was stolen; it’s just spat back out by a very fancy text-prediction algorithm.
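
      The phone-keyboard comparison can be made concrete. Here is a minimal sketch of frequency-based next-word prediction, the kind of thing a phone keyboard does (the toy corpus and function names are mine, purely for illustration):

      ```python
      from collections import Counter, defaultdict

      # Toy corpus standing in for training data (illustrative only).
      corpus = ("once upon a time there was a cat . "
                "once upon a time there was a dog .").split()

      # Count which word follows which (a simple bigram model).
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def autocomplete(word):
          """Return the word most often seen after `word`, like tapping
          the first suggestion on a phone keyboard."""
          return following[word].most_common(1)[0][0]

      print(autocomplete("upon"))  # "a"
      print(autocomplete("a"))     # "time"
      ```

      An LLM differs from this sketch mainly in scale and in conditioning on far more context than one previous word, which is where the disagreement in this thread lies.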

      • BitSound@lemmy.world
        10 months ago

        I’d disagree, and go so far as to say that it’s a baby AGI, and we need new terms to talk about the future of these approaches.

        To start, “fancy autocomplete” is correct but useless, in the same way that saying the human brain is just a bunch of meat is correct but useless. Assume that we built an autocomplete so good at its job that it knew every move you were about to make and every word you were about to speak. Yes, it’s “just a fancy autocomplete”, but one that must be backed by at least human-level intelligence. At some level of autocomplete ability, there must be a model backing it that can be called “intelligent”, even if that intelligence looks nothing like human intelligence.

        Similarly, the “fancy autocomplete” that is GPT-4 must have some amount of intelligence, and this intelligence is a baby AGI. When AGI is invoked, people tend to get really excited, but that’s what the “baby” qualifier is for. GPT-4 is good at a large variety of tasks without extra training, and this is undeniable. You can quibble about what good means in this context, but it is able to handle simple tasks from “write some code” to “what are the key points in this document?” to “tell me a bedtime story” without being specifically trained to handle those tasks. That was unthinkable a year ago, and is clearly a sign of a model that has been able to generalize across many different tasks. Hence, AGI. It’s not very good at a lot of those tasks (but surprisingly good at a lot of them), but it knows what the task is, and is trying its best. Hence, baby AGI.

        Yeah, it’s got a lot of limitations right now. But hardware is only getting cheaper, and we’re developing techniques like Chain of Thought prompting that give LLMs a kind of short-term working memory, which helps immensely. A linguist I know once said that the approaches we’re taking are like building a ladder to the moon. Well, we’ve started building a hell of a ladder, and I’m excited to see where it takes us.

        • Brocken40@sh.itjust.works
          10 months ago

          I don’t care what y’all call it: AI, AGI, Stacy. It doesn’t change the fact that it was 100% trained on books tagged as “bedtime stories” to tell you a bedtime story; it couldn’t tell you one otherwise.

          Assuming we made an AGI that could predict every word I said perfectly, that would simply prove there is no free will, not that a computer has intelligence.

          Fundamentally, AI produced in the current style cannot be intelligent, because it cannot create new things it has not seen before.

          https://en.m.wikipedia.org/wiki/Chinese_room

          • BitSound@lemmy.world
            10 months ago

            Assuming we made an AGI that could predict every word I said perfectly, that would simply prove there is no free will, not that a computer has intelligence.

            But why? Also, “has free will” is exactly equivalent to “I cannot predict the behavior of this object”. This is a whole separate essay, but “free will” is relative to an observer. Nobody thinks a rock has free will. Some people think cats have free will. Lots of people think humans have free will. This is exactly in line with how hard it is to predict the behavior of each. You don’t have free will to an omniscient observer, but that observer must have above human-level intelligence. If that observer happens to have been constructed out of silicon, it doesn’t really make a difference.

            Fundamentally ai produced in the current style cannot be intelligent because it cannot create new things it has not seen before.

            But it can. It uses its prior experience to produce novel output, much like humans do. Hell, I’d say most humans wouldn’t pass your test for intelligence, and in fact they’re just 3 LLMs in a trenchcoat.

            https://en.m.wikipedia.org/wiki/Chinese_room

            Yeah, the reality is that we’ve built a Chinese room. And saying “well, it doesn’t really understand” isn’t sufficient anymore. In a few years are you going to be saying “we’re not really being oppressed by our robot overlords!”?

            • Brocken40@sh.itjust.works
              10 months ago

              I’m saying that if there is anyone, including an omnipotent observer, who can predict a human’s actions perfectly, that is proof that free will doesn’t exist at all.