• Flying Squid@lemmy.world · 1 month ago

    Is it? Crypto could generate money for you. This can tell you lies based on its hallucinations.

    At least when you cashed out of crypto, you knew you were getting the value you were quoted. If there were some weird merger between crypto and AI, you’d sell your AIcoin thinking it was valued at $100, but it would turn out it was actually valued at $2 and the AIcoin had just told you it was $100.

    I think crypto is stupid and annoying and a waste of energy. This is stupider and more annoying and a bigger waste of energy and, worst of all, officially embraced by every major tech company.

    • howrar@lemmy.ca · 1 month ago

      > Crypto could generate money for you.

      Crypto moves money from one hand to another. It doesn’t create value in and of itself.

        • Flying Squid@lemmy.world · 1 month ago

        That’s not what I mean. I mean it can personally increase someone’s net worth, especially if they cash out at the right time.

          • howrar@lemmy.ca · 1 month ago

          Yeah, I understood what you meant. I’m saying that’s not better, because there’s no value being created. At least AI is capable of doing some useful work for us.

          You can even argue that it can make you money. Invest in a tech company involved in AI, cash out at the right time, boom, “free” money.

    • Kedly · 1 month ago

      Coders and artists are already making heavy use of AI. It doesn’t magically do everything for you, and you have to check and curate its output, but that doesn’t make it entirely worthless. It’s FAR more useful than crypto.

    • QuaternionsRock@lemmy.world · 1 month ago

      Out of curiosity: do you think your opinion will change once on-device (i.e., power-efficient) AI becomes the norm?

      The capabilities and utility of contemporary LLMs are wildly overstated by many, but the claim that they are completely useless is dubious imo. Nothing they generate can be treated as fact (and shame on those who suggest you do), but I can say with certainty that it has made my life as an indie programmer much easier, and I know I’m not alone in that.

      • Flying Squid@lemmy.world · 1 month ago

        Okay, sorry, here is my real response; I thought you were talking about something else because I was juggling two conversations at once in the same thread:

        My opinion will change when AIs stop being untrustworthy. Until I can have any sort of certainty that it isn’t just making shit up, it won’t change.

        Not too long ago, I asked ChatGPT to tell me who I am. I have a unique name. I also have a long-established internet media presence under that name. I’m not famous, but I’ve got enough prominence for it to know exactly who I am.

        It had no idea whatsoever. It got it entirely wrong. It said I was a business entrepreneur who gave motivational lectures.

        • zazo@lemmy.world · 1 month ago

          idk bro that sounds like saying search engines aren’t useful cuz you couldn’t google yourself…

          • Flying Squid@lemmy.world · 1 month ago (edited)

            Except I can Google myself. Links and photos come up. None of them say I’m a business entrepreneur who gives motivational lectures.

            • zazo@lemmy.world · 1 month ago

              I’m sorry to break this to you, but you probably weren’t in the training dataset enough for the model to learn of your online presence. Yes, LLMs will currently hallucinate when they don’t have enough data points (until they learn their own limitations), but that’s not a fundamentally unsolvable problem (not even top 10, I’d say).

              There are already models that weigh what they actually know and just apologize if they can’t answer instead of making shit up (e.g. Claude).
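
              As a rough sketch of what I mean (my own illustration, not anything from this thread): with the Anthropic Python SDK you can nudge a model toward admitting uncertainty through the system prompt. The model name and the ANTHROPIC_API_KEY setup here are assumptions made for the example.

              ```python
              # Illustrative sketch only: ask the model to admit uncertainty instead of guessing.
              # Assumes the anthropic package is installed and ANTHROPIC_API_KEY is set in the environment.
              import anthropic

              client = anthropic.Anthropic()

              response = client.messages.create(
                  model="claude-3-5-sonnet-20241022",  # assumed model name, for illustration
                  max_tokens=200,
                  system="If you are not confident in the answer, say 'I don't know' instead of guessing.",
                  messages=[{"role": "user", "content": "Who is Flying Squid from lemmy.world?"}],
              )

              print(response.content[0].text)
              ```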

              • Flying Squid@lemmy.world · 1 month ago

                Considering these LLMs are being integrated with search engines in a way that might eventually replace them, don’t you think their training should include knowing who someone is when a bunch of hits come up for them on Google?