… and neither does the author (or so I believe - I made them both up).

On the other hand, AI is definitely good at creative writing.

  • wizardbeard@lemmy.dbzer0.com · 2 months ago

    The knife doesn’t insist it won’t hurt you, and you can’t get cut holding the handle. Comparatively, AI insists it is correct, and you can get false information using it as intended.

    • slacktoid@lemmy.ml · 2 months ago

      I would argue it’s not the AI but the companies (that make the AI) making unattainable promises and misleading people.

        • slacktoid@lemmy.ml · 2 months ago

          Guns are literally for killing; it’s all they do. Even for hunting, the sole purpose is to kill. That’s not the case with LLMs; that’s just how these companies are choosing to use them, since they have all the power to dictate terms in the workplace.

          • LLMs are for murdering the entirety of human culture and experience. They cannot work without doing so; it is their entire purpose: murder human creativity and then feed its rotting, dismembered corpse back to us.

            So I say the parallel stands. Guns kill people. LLMs kill culture.

            (P.S. Target shooters seem to not be killing when using guns.)

            • howrar@lemmy.ca · 2 months ago

              Is it the training process that you take issue with or the usage of the resulting model?

                • howrar@lemmy.ca · 2 months ago

                  The energy usage is mainly on the training side with LLMs. Generating text afterwards is fairly cheap. Maybe what you want is fewer companies training their own models from scratch, and more collaboration instead?

            • slacktoid@lemmy.ml · 2 months ago

              I don’t agree with that. If you use it to destroy human creativity, sure, that will be the outcome. Or you can use it to write the boring work emails you have to write anyway. You could use it to automate tedious tasks. Or a company can come along and automate creativity badly.

              Capitalism is what’s ruining it. Capitalism is ruining culture, creativity, and the human experience far more than LLMs are. LLMs are just a knife; instead of making tasty food, we’re going around stabbing people.

              And yeah, people made guns just to put holes in pieces of paper, sure, nothing else. If you don’t know how LLMs work, just say so. There are plenty of models trained on public data that don’t siphon human creativity.

              They are doing a lot of harm to human culture, but that’s more about how they’re being used, and it calls for real constructive criticism instead of simply being obtuse.

              • I don’t agree with that.

                Of course you don’t. You’re one of the non-creatives who thinks that “prompt engineering” makes you a creative, undoubtedly.

                But the first “L” in “LLM” says it all. The very definition of degenerative AI requires the wholesale dismemberment of human culture to work and, indeed, there’s already a problem: the LLM purveyors have hit a brick wall. They’ve run out of material to siphon away from us and are now stuck with only finding “new” ways to remix what they’ve already butchered in the hopes that we think the stench from the increasingly rotten corpse won’t be noticeable.

                LLMs are not a knife. They are a collection of knives and bone saws purpose-built to dismember culture. You can use those knives and saws to cut your steak at dinner, I guess, but they’d be clumsy and unwieldy and would produce pretty weird slices of meat on your plate. (Meat that has completely fucked-up fingers.) But this is like how you can use guns to just shoot at paper targets: it’s possible, but it’s not the purpose for which the gun was built.

                LLMs and the degenerative AI built from them will never be anything but the death of culture. Enjoy your rotting corpse writing and pictures while it lasts, though!

                  • slacktoid@lemmy.ml · 2 months ago

                  Of course you don’t. You’re one of the non-creatives who thinks that “prompt engineering” makes you a creative, undoubtedly.

                  Sure, that’s exactly what I believe … Wow, I’m so called out. I use it as a tool to do boring, menial tasks so that I can spend my time on more meaningful things: spending time with my family, making dinner, and working on the parts of my job I enjoy while automating the tedious parts, like writing boilerplate code that’s slightly different based on context.

                  But the first “L” in “LLM” says it all. The very definition of degenerative AI requires the wholesale dismemberment of human culture to work and, indeed

                  Can you elaborate on how, and by what mechanisms, you see this happening? Why do you see it that way? Do you not see any circumstances in which it could be useful? Like legitimately useful? Have you never written a stupid, tedious email to someone you didn’t like, one you couldn’t be bothered to spend more than two seconds on, so you prompted someone or something else to deal with it for you?

                  there’s already a problem: the LLM purveyors have hit a brick wall. They’ve run out of material to siphon away from us and are now stuck with only finding “new” ways to remix what they’ve already butchered in the hopes that we think the stench from the increasingly rotten corpse won’t be noticeable.

                  This is true; it’s starting to eat its own tail. That doesn’t mean every new model needs new data, though; they could also use better architectures on the same data. But yes, using AI-generated data to train new AI is bad, and you’ll end up with a nerfed, less useful model that will probably hallucinate more. Doesn’t mean the tech isn’t useful cause you’ve not seen it used for anything good.

                  • Have you never written a stupid, tedious email to someone you didn’t like, one you couldn’t be bothered to spend more than two seconds on, so you prompted someone or something else to deal with it for you?

                    No, I haven’t. I call out bullshit in my job instead of acquiescing to it. I’m not sure when I last wrote an email at work at all, not to mention a stupid, tedious one.

                    If there’s a part of your job that can be done by degenerative AI, change how your job works. If your boss won’t let you change the bullshit, change your job. I’ve been doing this since I was 15. It’s not that hard.

                    Can you elaborate on how and the mechanisms by which this is happening as you see?

                    Here, this may help you grasp it.

                    Why do you see it that way?

                    Because I looked into how it works and spotted the bit where it needs a huge volume of input data. That input data is going to be vacuumed up indiscriminately, because it’s not feasible to check each piece for permission. (Or do you naively believe that putting a disclaimer on, say, a blog saying “this material is specifically not permitted to be used as training material for AI projects” means it won’t be Hoovered in with everything else?)

                    And here’s a fun little factoid if you don’t believe it’s being vacuumed up indiscriminately: Meta announced a new AI siphonbot and gave out the information needed to block it, two weeks after they started using it. And that counts as comparatively good behaviour: most AI bot-crawlers have been found out by sleuthing, not by an announcement. Even AI research teams at universities aren’t doing the basics of ethical conduct: getting consent.

                    Do you not see any circumstances in which it could be useful?

                    Yes. It’s very useful for non-creatives to pretend they’re actually creative when they send a machine to stitch together the corpse of human culture in entertaining new shapes rendered from rotting flesh. Personally, though, I can live without masterpieces like “Sonic the hedgehog gives birth to Borat” or whatever idiotic shit these keyboard monkeys think is art.

                    Doesn’t mean the tech isn’t useful cause you’ve not seen it used for anything good.

                    There is no use sufficiently good to justify the dismemberment and destruction of human culture. Sorry.