• simple

    Definitely AI-generated art. If you asked me 3-4 years ago, I'd have told you there was no way neural networks would understand how to generate a nice-looking image. The fact that high-quality AI art is way less expensive computationally than generating text still confuses me. Like, I can generate beautiful 1024x1024 images within seconds, but this text generation model needs 20GB of VRAM? Huh?!

  • Lukecis@lemmy.fmhy.ml

    AI near-perfectly replicating voices and creating fantastic-looking art, going from a joke to very impressive in the span of like 1-2 years.

    Also, AI chat models being able to hold a decent conversation without being gibberish or obviously inhuman is very impressive.

    • Duamerthrax@lemmy.fmhy.ml

      Most humans only know how to repeat short, smart-sounding talking points. It makes sense that AI has been able to replicate that. AI chatbots sound like they're saying a lot, but don't really say anything. Political speech writers will be out of a job first.

      • intensely_human

        Someone once posted that they got GPT to produce much higher quality output by instructing it to make an argument, critique that argument, make a new argument with the critique in mind, and so on.

        Maybe stable, precise reasoning is just a result of multiple language models arguing with each other.
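        A minimal sketch of that loop, assuming the OpenAI Python client; the model name, prompts, and `refine` helper below are illustrative assumptions, not what the original poster used:

        ```python
        # Sketch of the "argue, critique, revise" loop described above.
        # Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
        # the model name is an assumption, any chat model would do.
        from openai import OpenAI

        client = OpenAI()

        def ask(prompt: str) -> str:
            """Send one prompt and return the model's reply text."""
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content

        def refine(question: str, rounds: int = 3) -> str:
            """Draft an argument, critique it, rewrite it with the critique in mind, repeat."""
            argument = ask(f"Make an argument answering: {question}")
            for _ in range(rounds):
                critique = ask(f"Critique this argument and point out its flaws:\n\n{argument}")
                argument = ask(
                    "Rewrite the argument below, addressing the critique.\n\n"
                    f"Argument:\n{argument}\n\nCritique:\n{critique}"
                )
            return argument

        print(refine("Is text prediction enough for stable, precise reasoning?"))
        ```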

        • G234323@lemmy.fmhy.ml

          Yeah, asking for more detail probably helps. GPT is limited in output length, so arguing and going into the details will produce better output.

    • Martineski@lemmy.fmhy.ml

      AI is intelligent, though. And wtf is "real" intelligence? Are you saying that the technology we have is not "real"?

      Edit: are you confusing intelligence with sentience? Because there's a huge difference between the two.

      • Dodecahedron December@sh.itjust.works

        I am not sure if you want to claim you love Jesus or not these practices are constitutional under the First Amendment.

        Was that intelligence, what I just wrote there? Because I just tapped the left-most option on my spell checker. This is how ChatGPT and other LLMs work. They know a lot of words and know what word should come next. They do it a lot better than the Android spell checker, but they do it just the same.
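        A toy sketch of that "pick the likeliest next word" step, using made-up bigram counts; a real LLM uses a neural network over tokens, so this only illustrates the spell-checker flavour of the idea, not how ChatGPT actually works:

        ```python
        # Toy next-word predictor: count which word follows which in a tiny corpus,
        # then always take the most common follower, like tapping the left-most
        # suggestion on a phone keyboard. The corpus is made up for illustration.
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat and the cat ate the fish".split()

        following = defaultdict(Counter)
        for current, nxt in zip(corpus, corpus[1:]):
            following[current][nxt] += 1

        def next_word(word: str) -> str:
            """Return the most frequent follower of `word`, or a stop marker."""
            candidates = following.get(word)
            return candidates.most_common(1)[0][0] if candidates else "<end>"

        # Greedily extend a sentence starting from "the".
        word, sentence = "the", ["the"]
        for _ in range(5):
            word = next_word(word)
            sentence.append(word)
        print(" ".join(sentence))  # e.g. "the cat sat on the cat"
        ```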

        The hallucinations AI experiences are more a case of its users hallucinating that AI is Actually Intelligent. It is only Artificially Intelligent.

        • Martineski@lemmy.fmhy.ml

          They know a lot of words and know what word should come next.

          And to accurately predict intelligent conversations, you need to develop intelligence yourself. (I mean the AI does.)

            • Martineski@lemmy.fmhy.ml

              Because this is the actual definition:

              The ability to acquire, understand, and use knowledge.

              • Dodecahedron December@sh.itjust.works

                The key here is that LLMs do not understand anything other than language. They are great at sounding like they know, but that's different from knowing.

              • tendiemaster69@lemmy.fmhy.ml

                Trying to define intelligence is like trying to explain the color blue to someone who is blind. I'm not trying to define intelligence; I'm answering the question "And wtf is 'real' intelligence?"

                Also, your given definition doesn't describe what intelligence is beyond the simplest explanation.

                • Martineski@lemmy.fmhy.ml

                  You don't even know what you're talking about. I think that you're talking about the AI's world model, which GPT-4 was already proven to have.

  • Martineski@lemmy.fmhy.ml

    The sudden explosion in AI popularity when ChatGPT came out. I didn't expect to see this topic going mainstream for many more years before that happened.

  • Spzi

    Two things, related to LLMs:

    1. How reasonable and usable it is or seems, when all it does is predict the next word.
    2. How good it is at coding and explaining programming concepts.
    • intensely_human

      It makes me wonder if the aspect of human intelligence we consider the general part is anything other than text prediction.

    • burgundymyr@lemmy.world

      AI's coding/scripting is incredible, but that's likely just because there is so much shared scripting available to munch on. AI is just faster at doing the Google search I do to steal someone else's work.

      • moozogew@lemmy.fmhy.ml

        They're getting so good at understanding what I want. There are times I've spent hours googling obscure search terms trying to work out which library I need, but give a brief description of what I need to one of the LLMs and it'll just tell me.

  • HonkyChicken@lemmy.fmhy.ml

    The fact that it's already being commercialised in such a massive way. That its everyday use sits firmly in the public sphere, and so it's being used by millions of people around the world, not just in a specialised way, but to visualise all sorts of goofy things. I find that amazing. Regular people know what ChatGPT is. Perhaps not Midjourney and the hundred other products, but the fact that they know about one of them already is evidence, to me at least, of just how fast this thing moves.

  • Lenguador@kbin.social

    DALL-E was the first development which shocked me. AlphaGo was very impressive on a technical level, and much earlier than anticipated, but it didn’t feel different.
    GANs existed, but they never seemed to have the creativity, nor the understanding of prompts, that DALL-E demonstrated. Of all things, the image of an avocado-themed chair is still baked into my mind. I remember being gobsmacked by the imagery, and when I'd recovered from that, by just how "simple" the step from what we had before to DALL-E was.
    The other thing which surprised me was the step from image diffusion models to 3D and video. We certainly haven't gotten anywhere near the same quality in those domains yet, but they felt so far from the image domain that it seemed we'd need some major revolution in the way we approached the problem. The thing which surprised me the most was just how fast the transition from images to video happened.

  • intensely_human

    There’s a subreddit called AIgreentext.

    The night I discovered it, I spent about half an hour reading them. Each one was the best one I'd ever read. They had me laughing so hard that I had to put my phone down for fear of a heart attack. Like, I was seriously worried about sudden death, I was laughing so hard.

    I've never laughed like that in my life. I didn't know such laughter was possible. I'm not exaggerating: I stopped because I was scared of literally dying of laughter.

  • SkyNTP@lemmy.ml

    Exponential curve on a logarithmic axis? Now I’ve seen it all.

  • burgundymyr@lemmy.world

    The simultaneous nuance and beauty of the art, coupled with the inability to get hands right. It's both humorous and chilling.

    • n00b001@lemmy.fmhy.ml

      It is.

      An AI is such a "different" mind. It may seem like us, it might tick some boxes that we do too, but (like the hands example) there might just be one small thing it does very differently. In the hands example, that doesn't have much consequence, but as AI gets more control over daily life (insurance, route planning, healthcare, political foreign policy, lawmaking) some of these "it's not human" hand-problems might have far-reaching implications for society.