• frozenfoxx@pawb.social · 22 points · 10 months ago

    As a member of the games industry for quite a few years now…

    …good. Couldn’t happen to a nicer bunch of studios.

  • huginn@feddit.it · 19 points · 10 months ago

    As a programmer: most people vastly overestimate the efficacy of large language models.

    CEOs seem to overestimate them even more than everyone else.

    A lot of AI researchers think LLMs are a dead end (See: Timnit Gebru) because by their structure they cannot understand truth.

    The “hallucinations” are intrinsic to the structure and the best minds are saying there’s no way around that.

    We might be able to kludge together filters on top, but at some point that’s just hard-coding the world anyway, which is exactly what LLMs were supposed to avoid.
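
    A minimal sketch of what that kind of kludged filter amounts to (everything here is hypothetical, including the fact table and claim format): each claim the model emits gets checked against facts someone wrote down by hand, which is exactly the hard-coding LLMs were supposed to avoid.

    ```python
    # Hypothetical fact-filter over LLM output. The fact table and claim
    # format are made up; the point is the filter itself hard-codes the world.
    KNOWN_FACTS = {
        "boiling_point_water_c": 100,
        "moon_landing_year": 1969,
    }

    def filter_claims(claims: dict) -> dict:
        """Keep only claims that match the hand-written fact table."""
        verified = {}
        for key, value in claims.items():
            if KNOWN_FACTS.get(key) == value:
                verified[key] = value
        return verified

    # The model can emit anything; only claims we already wrote down survive.
    print(filter_claims({"moon_landing_year": 1969, "mars_landing_year": 2015}))
    # -> {'moon_landing_year': 1969}
    ```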

    • secrethat@kbin.social · 10 points · 10 months ago

      As a data scientist: people seem to attribute anything computerized that they don’t understand to AI, or worse, to ChatGPT. Shudder

      • SuperDuper@lemmy.world · 3 points · 10 months ago

        I prefer people misattributing everything to AI over people using the word “auto-magic-ally” to describe anything happening on the back end.

    • sj_zero@lotide.fbxl.net · 4 points · 10 months ago

      I’ve been using ChatGPT a lot, and it’s clear to me it has many uses, but it’s more like asking a buddy who knows a lot but is also full of shit: sometimes he tells you exactly what you need, sometimes he sends you on a wild goose chase with all kinds of false leads.

      In the end you still need your own competence, because a human has to be able to make the final decision about whether to listen or not.

      • huginn@feddit.it · 4 points · 10 months ago

        My EM suggested an integration using an SDK that doesn’t exist.

        He was very insistent that we just hadn’t read the docs.

        Then it came out that ChatGPT had suggested it.

    • candybrie@lemmy.world · 1 point · 10 months ago

      Have you seen the work where they use another instance to fact-check the first? The MS Research podcast made it seem like a really viable way to find hallucinations without needing to write much more code. I’m curious whether other people find that it works, or if the MS researchers are just too invested in GPT.
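
      For reference, a rough sketch of the two-instance idea as I understand it. `ask_llm` is a stand-in for whatever chat-completion client you use, and the prompt wording is invented, not MS Research’s actual method:

      ```python
      # Hypothetical two-instance check: one call answers, a second call to
      # another instance grades the answer. `ask_llm` stands in for whatever
      # chat-completion client you use; the prompt wording is invented.
      def ask_llm(prompt: str) -> str:
          raise NotImplementedError("plug in your model client here")

      def answer_with_check(question: str) -> tuple[str, str]:
          answer = ask_llm(question)          # instance 1: answer the question
          verdict = ask_llm(                  # instance 2: grade the answer
              "Does the answer below make claims unsupported by well-known "
              "facts? Reply SUPPORTED or HALLUCINATED.\n\n"
              f"Q: {question}\nA: {answer}"
          )
          return answer, verdict
      ```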

      • huginn@feddit.it · 4 points · 10 months ago

        I’ll check out that podcast, but I’m deeply skeptical that one LLM can correct another, since neither of them truly understands anything: it’s all statistics. Very detailed stats, but still stats.

        And stats will be wrong.

        Before ChatGPT was released, most Google AI engineers were looking into alternatives to LLMs, as the limitations of LLMs were becoming increasingly clear.

        They’re convincing facsimiles of intelligence and a good tool for maybe 80% of basic uses.

        But I agree with the consensus: they’re a dead end in our search for intelligence, and their output is vastly overestimated.
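
        A toy illustration of the “stats will be wrong” point (the distribution and numbers are invented): the model only has probability mass over next tokens, no notion of truth, so sampling will sometimes emit a confident wrong answer.

        ```python
        import random

        # Toy next-token distribution for "The capital of Australia is".
        # The numbers are invented; the point is that the model only has
        # probability mass, not truth, so sampling is sometimes confidently wrong.
        next_token_probs = {"Canberra": 0.55, "Sydney": 0.40, "Melbourne": 0.05}

        tokens, weights = zip(*next_token_probs.items())
        samples = random.choices(tokens, weights=weights, k=1000)
        print(samples.count("Sydney") / 1000)  # ~0.4: wrong about 40% of the time
        ```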

          • huginn@feddit.it · 2 points · 10 months ago

            Follow-up: I found the episode very unconvincing.

            A few points:

            • this was recorded in early 2023, at the peak of the generative AI hype
            • the guest immediately started making outlandish statements like “cancer will be solved in 10 years”, a claim entirely outside his field of expertise: a bad start, but I kept listening all the way through
            • statements like “we have no idea how it answered a ‘give me a reason’ AP bio question” demonstrate how out of touch both he and the head of OpenAI are with the work, if that story is even true. There are clear and easy explanations for it, the model’s extensive training on formal question-and-answer formats being the first
            • the guest is the head of AI at Microsoft and has been in the field for 20 years, which is less of a flex than you might think. It means he has a literal vested interest in this being the next big thing. He spends a quarter of the episode selling Microsoft as the big integration of AI into everyone’s lives
            • the suggested solution to hallucination hasn’t borne fruit as far as I’m aware: hallucinations cannot be consistently detected by other instances
            • he immediately makes claims about superhuman AI appearing in the next 5-10 years when there is zero indication that’s close
            • he immediately anthropomorphizes the AI, talking about it “reasoning”. It’s literally weighted functions (see the sketch below). It doesn’t reason: it pushes input through a predetermined path and outputs a response. There’s no consideration, no extra steps: it just transforms input into output as trained. Stochastic parrot.

            He seems like a salesman who has fallen for his own pitch.
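
            The “weighted functions” point, as a minimal sketch (toy shapes and weights, not a real transformer): a forward pass is fixed weighted sums plus nonlinearities, so the same input always takes the same predetermined path to the same output.

            ```python
            import numpy as np

            # Toy forward pass: fixed weighted sums plus nonlinearities.
            # Shapes and weights are invented, not a real transformer, but the
            # structure is the same: input follows a predetermined path,
            # with no deliberation loop anywhere.
            rng = np.random.default_rng(0)
            W1 = rng.normal(size=(8, 16))
            W2 = rng.normal(size=(16, 5))

            def forward(x: np.ndarray) -> np.ndarray:
                h = np.maximum(x @ W1, 0)        # weighted sum, then ReLU
                logits = h @ W2                  # weighted sum again
                e = np.exp(logits - logits.max())
                return e / e.sum()               # distribution over 5 "tokens"

            print(forward(rng.normal(size=8)))   # same input -> same output
            ```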

            • candybrie@lemmy.world · 2 points · 10 months ago

              Thanks for listening and echoing some of my own doubts. I was getting the feeling that the MS researchers were too invested in GPT and weren’t being realistic about its limitations. But I hadn’t really seen others try the two-instance method and discard it as not useful.

              • huginn@feddit.it · 2 points · 10 months ago

                Here’s a recent story about hallucinations: https://www.cnn.com/2023/08/29/tech/ai-chatbot-hallucinations/index.html

                The tl;dr is that nobody has solved it, and it might not be solvable.

                Which, when you consider the structure of LLMs, makes sense. They’re statistical models. They don’t have a grounding in any sort of truth. If the input hits the right channels, they will output something undefined.

                The Microsoft guy tries to spin this as “creativity!”, but creativity requires intent. This is more like a random number generator dealing out your tarot reading and you really buying into it.

        • sj_zero@lotide.fbxl.net · 1 point · 10 months ago

          They’re treated like something more than they are because we anthropomorphise everything, and our brains assume anything that can string a sentence together is intelligent. “Oh, it can form a sentence! That must mean it’s pretty much already general intelligence, since we gauge the intelligence of humans by the sentences they say!”

  • SuperDuper@lemmy.world · 5 points · 10 months ago

    Good to hear that someone’s standing up for them.

    The video game industry is probably one of the most exploitative tech industries around. So many people have dreamed of making games, and many (myself included) got into software development in the first place because of such dreams. These studios take advantage of those dreams to lure young, starry-eyed devs into their “fun” companies, then grind that enthusiasm into the dust with excessive “crunch time” hours, relatively low pay, and poor working conditions.