• Ogmios@sh.itjust.works · 3 months ago

    so this is actually the best the AI researchers can do

    Highly unlikely. This is what corporations’ public-facing products can do.

    • self@awful.systems · 3 months ago

      are there mechanisms known to researchers that Microsoft’s not using that can prevent this type of failure case in an LLM without resorting to whack-a-mole with a regex?
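      The “whack-a-mole with a regex” hack the question refers to amounts to a hand-maintained blocklist filter over model output. A minimal sketch (the patterns and refusal message here are illustrative, not anyone’s actual mechanism):

      ```python
      import re

      # Hand-maintained blocklist, extended every time a new failure
      # case is discovered -- hence "whack-a-mole". Illustrative only.
      BLOCKED_PATTERNS = [
          re.compile(r"\bpedophile\b", re.IGNORECASE),
      ]

      def filter_output(text: str) -> str:
          """Return a canned refusal if the output matches any blocked pattern."""
          for pattern in BLOCKED_PATTERNS:
              if pattern.search(text):
                  return "I can't answer that."
          return text
      ```

      The obvious problem is that each pattern only blocks one known bad output; paraphrases, misspellings, and new failure modes sail straight through until someone adds another pattern.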

      • Ogmios@sh.itjust.works · 3 months ago

        To be blunt, LLMs are one of the stupider ways to try and use AI. There is incredible potential in many other applications which don’t attempt to interface with something as irrational and unpredictable as people.

        • self@awful.systems · 3 months ago

          I agree; LLMs and generative AI are indelibly a product of capitalism, and they can’t exist without widespread theft, exploitation of labor, massive concentrations of capital, and a willingness to destroy the environment. they are the stupidest use of technology I’ve ever seen, and after cryptocurrencies the bar for stupid was pretty fucking high. that the products themselves obscure the theft and exploitation that went into training them is a feature for the corporations developing this horseshit, not a bug.

          and that’s why it’s notable that the self-described AI researchers behind these garbage products can’t even do basic shit like have the LLM not call a journalist a pedophile without resorting to an absolute hack that won’t scale. there’s no fixing LLMs; systemically, they are what they are. and now this absolute horseshit is a component of what’s unfortunately still the dominant desktop operating system.

          • sc_griffith@awful.systems · 3 months ago (edited)

            I’m ngl I think crypto is even stupider. it’s a real competition though

            EDIT: idea. a tech bullshit bracket

          • Ogmios@sh.itjust.works · 3 months ago (edited)

            The really fucking dumb part of it, you can believe me or not, is that this appears to all circle back to ancient misunderstandings about the nature of man, and attempts to create automatons which behave like men but are perfectly obedient. There is a subset of the population which tries this exact same bullshit with every new technology we create.

            • self@awful.systems · 3 months ago

              I can see that as being one of the influences that fed into the formation of the TESCREAL belief package — “I have an automaton that behaves like a person but with supernatural qualities” really is an ancient grift, and the TESCREAL belief in omnipotent AGI being just around the corner is that same grift taken to an extreme

          • schizo@forum.uncomfortable.business · 3 months ago

            indelibly a product of capitalism

            They’re being funded by the capitalists that want to replace all those annoying human workers with the cheapest possible alternative.

            Of course, the problem is that while an LLM is the cheapest possible option, it’s turning out that it’s the most useless and garbage one too.

            (Also, I’m shockingly infuriated that the tech workers that would end up being the ones replaced the soonest are so busy licking boots rather than throwing their shoes into the machinery.)

            • self@awful.systems · 3 months ago

              Also, I’m shockingly infuriated that the tech workers that would end up being the ones replaced the soonest are so busy licking boots rather than throwing their shoes into the machinery.

              so much of our industry is dedicated to ensuring that tech workers, most of whom consider themselves experts on complex systems, never analyze or try to influence the social systems surrounding and influencing their labor. these are the same loud voices that insist tech isn’t political, while turning important parts of our public and open source tech infrastructure into a Nazi bar.

                • froztbyte@awful.systems · 3 months ago

                  it’s almost always a shockedpikachu situation, where they just can’t believe that it happened to them. every so often one of these cases pops up on HN or does the rounds on IMs and community spaces

              • schizo@forum.uncomfortable.business · 3 months ago

                I don’t know if it’s the system keeping them from analyzing it so much as it’s simply that a good number of tech bros fall into the Actually a Nazi or the Paid Enough They Don’t Care categories and for the most part happily keep on doing what they’ve been doing. If any of them had actual ethics or morals they’d take action, but they just plain don’t.

                Perhaps I’m too cynical, but after 20+ years working in tech, with most of the last 10 in an abuse role at a PaaS company, I’ve seen how management is willing to play endless whataboutism (my favorite was our Jewish lead counsel going on about how there’s nothing wrong with Nazis having a voice and a platform, and then some crazy story about bullying when he was a kid), while the majority of non-management is happy to just shrug and play along, and so you end up with a Nazi bar, as you mentioned.

                The problem is that, right now, ALL the bars in this horribly tortured analogy are Nazi bars and everyone gets subjected to the Nazi propaganda.

            • Ogmios@sh.itjust.works · 3 months ago

              Also, I’m shockingly infuriated that the tech workers that would end up being the ones replaced the soonest are so busy licking boots rather than throwing their shoes into the machinery.

              Just because you aren’t hearing about us, doesn’t mean we don’t exist. ;)

      • linearchaos@lemmy.world · 3 months ago

        Yeah there’s already a lot of this in play.

        You run the same query multiple times through multiple models and do a web search looking for conflicting data.
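        The multi-model cross-check described above could be sketched as a simple majority vote (the `models` callables here are hypothetical stand-ins for real model APIs, not an actual product’s pipeline):

        ```python
        from collections import Counter

        def consensus_answer(query, models, threshold=0.5):
            """Ask several models the same query; return the answer only
            if more than `threshold` of them agree, else None.

            `models` is a list of callables (hypothetical stand-ins for
            real model API calls) mapping a prompt string to an answer.
            """
            answers = [model(query) for model in models]
            answer, votes = Counter(answers).most_common(1)[0]
            if votes / len(answers) > threshold:
                return answer
            return None  # conflicting outputs: withhold the answer
        ```

        Note this only catches disagreement between models; if they all share the same training-data failure, they can confidently agree on the same wrong answer.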

        I’ve had copilot answer a query, then erase the output and tell me it couldn’t answer it after about 5 seconds.

        I’ve also seen responses contradict themselves, with later paragraphs saying there are other points of view.

        It would be a simple matter to have it summarize the output it’s about to give you and dump the output if it paints the subject in a negative light.
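        The summarize-then-gate idea could be sketched as follows (all three callables are hypothetical stand-ins for model/API calls; this is the commenter’s proposal, not a shipped mechanism):

        ```python
        def gated_response(query, generate, summarize, sentiment_score):
            """Generate an answer, summarize it, and withhold it if the
            summary reads as negative about its subject.

            `generate`, `summarize`, and `sentiment_score` are hypothetical
            callables; `sentiment_score` is assumed to return a float in
            [-1, 1], with negative values meaning an unfavorable tone.
            """
            draft = generate(query)
            summary = summarize(draft)
            if sentiment_score(summary) < 0:
                return None  # dump output that paints the subject negatively
            return draft
        ```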

        • self@awful.systems · 3 months ago

          It would be a simple matter to have it summarize the output it’s about to give you and dump the output if it paints the subject in a negative light.

          “it can’t be that stupid, you must be prompting it wrong”

        • froztbyte@awful.systems · 3 months ago

          It would be a simple matter to have it summarize the output it’s about to give you and dump the output if it paints the subject in a negative light.

          lol. like that’s a fix

          (Hindenburg, hitler, great depression, ronald reagan, stalin, modi, putin, decades of north korea life, …)

          • blakestacey@awful.systems · 3 months ago

            Hindenburg, hitler, great depression, ronald reagan, stalin, modi, putin, decades of north korea life, …

            🎶 we didn’t start the fire 🎶

        • bitofhope@awful.systems · 3 months ago (edited)

          Exactly, and all of this is a simple matter of having multiple models trained on different instances of the entire public internet and determining whether their outputs contradict each other or a web search.

          I wonder how they prevented search engine results from contradicting data found through web search before LLMs became a thing?

          • linearchaos@lemmy.world · 3 months ago

            They didn’t really have to before LLMs. Search engine results, in their heyday, were backlink-driven. You could absolutely search for disinformation and find it. But if you searched for a credible article on someone, chances are more people would have links to the good article than to the disinformation. However, conspiracy theories often leaked through into search results, and in that case they just gave you the web pages and you had to decide for yourself.

            • bitofhope@awful.systems · 3 months ago

              They didn’t really have to before LLM.

              No shit. Maybe they should just get rid of the extra bullshit generator and serve the sources instead of piling more LLM on the problem that only exists because of it.

            • froztbyte@awful.systems · 3 months ago (edited)

              this naive revisionist shit still standing in ignorance of easily 15y+ of SEO-fuckery (first for influence, and then for spam) is hilarious