• simple · 1 year ago

    This is cool and all, but these AI hype demos always show something extremely simple and massively overrepresented in the training data. It reminds me of those ChatGPT videos where someone writes “make me Pong in JavaScript” and is surprised when it works, despite the fact that the model probably trained on 100,000+ Pong-in-JavaScript scripts.

    Yes, it made Breakout, very cool. Now ask it to do something new, or to make a game more complicated than one that’s been remade a million times. LLMs are very far away from making real games.

    • Rednax@lemmy.world · 1 year ago

      In order to get good results out of an LLM, you need to be very precise about what you want. Even if it can spit out an entire game, you have to describe that game so precisely that you are basically creating it yourself. But instead of using a standard programming language, you are using something the LLM understands.
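
      A rough, purely hypothetical sketch of that point (the function and object names below are made up for illustration, not taken from the demo): once the natural-language spec gets precise enough, each sentence of it maps almost one-to-one onto a line of code, so “describing the game well enough” and “programming it” end up being the same work.

      ```javascript
      // Each comment line is a sentence of a hypothetical, sufficiently precise spec;
      // the code underneath is what that sentence effectively dictates.
      function stepBall(ball, paddle, bounds) {
        // "Every frame, the ball moves by its velocity."
        ball.x += ball.vx;
        ball.y += ball.vy;
        // "If the ball touches the left or right wall, its horizontal velocity flips."
        if (ball.x <= 0 || ball.x >= bounds.width) ball.vx = -ball.vx;
        // "If the ball touches the top wall, its vertical velocity flips."
        if (ball.y <= 0) ball.vy = -ball.vy;
        // "If the ball reaches the paddle's row, it bounces when it is over the paddle,
        //  and is lost otherwise."
        if (ball.y >= paddle.y) {
          const onPaddle = ball.x >= paddle.x && ball.x <= paddle.x + paddle.width;
          if (onPaddle) ball.vy = -ball.vy;
          else ball.lost = true;
        }
        return ball;
      }

      // One simulated frame, just to show the function running.
      const ball = { x: 40, y: 10, vx: 2, vy: 3, lost: false };
      const paddle = { x: 30, y: 58, width: 20 };
      console.log(stepBall(ball, paddle, { width: 80, height: 60 }));
      ```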

      • didnt_readit@sh.itjust.works · 1 year ago

        Exactly, haha. We were joking about this exact thing at work when all the hype started (I’m a software engineer). We were like, “to make a whole app you’ll have to tell an AI, super specifically, every little behavior you want the app to have. Do you know what telling a computer very specifically what you want it to do is called? Programming.” lol

        I’m excited about the potential of LLMs as coding tools (and I already use them to help with various programming-related things), but I’m not worried about my job being replaced any time soon.

    • mindbleach@sh.itjust.works · 1 year ago

      This is a testable hypothesis.

      Make up some new game - doesn’t have to be good or terribly unique, just novel enough to have negligible prior art - and see if the robot does the thing.

        • mindbleach@sh.itjust.works · 1 year ago

          I’m not the one playing ‘AI is whatever hasn’t been done.’

          I’m thoroughly familiar with how these technologies work and with their present shortcomings. This guy is joking when he says programming is over. But it doesn’t take a diehard fanboy to acknowledge that this was impossible a year ago, that it’s being democratized as it gets more capable, and that it keeps getting more capable.

          • nanoUFO@sh.itjust.works · 1 year ago

            “democratized as it gets”

            Eh, OpenAI and co. would really rather this be regulated very, very hard, and would rather all training data not be accessible anymore the way it was in the past.

            • mindbleach@sh.itjust.works · 1 year ago

              They can try.

              Not like a corpus of published text is big data or privately held.

              Not like anyone training porn robots on booru tags gives a shit about copyright.

              Not like any model that’s been released can be put back in the bottle.

              OpenAI is struggling right now because they’ve realized they can’t afford to be centralized and they can’t compete at being decentralized. If big-iron approaches are truly more capable - they lose to Google. If advancements keep coming from rando engineers dropping whitepapers with stupid acronyms - they lose to everybody.

              Personally, I’m betting on a grab-bag of Asian surnames and LLaLLeLLuLLemon.