• umbrella@lemmy.ml · ↑26 ↓10 · 2 months ago

    You guys joke, but AI NPCs have the potential to be awesome.

    • ulterno@lemmy.kde.social · ↑26 ↓4 · 2 months ago

      A really good place would be background banter, greatly reducing the amount of extra dialogue the devs have to think up.

      1. Give the AI a proper scenario, with some game-lore-based context applicable to each background character.
      2. Have them talk to each other for around 5-10 rounds of conversation.
      3. Read the results, just to make sure nothing seems out of place.
      4. Bundle them with TTS output for each character's voice type.

      Sure, you'll have to make a TTS package for each voice, but that can be licensed directly by the VA to the game studio on a per-title basis, and they too can then get more $$$ for less work. (Rough sketch of the pipeline below.)
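
      Something like this minimal sketch, assuming a hypothetical local LLM and TTS backend (generate_reply and synthesize are placeholder names, not any real API): the banter is generated offline at build time, a human reads it in step 3, and only the approved text plus pre-rendered audio ships with the game.

      ```python
      # Sketch of steps 1-4 above. generate_reply() and synthesize() are stand-ins
      # for whatever LLM / VA-licensed TTS a studio actually uses; replace the bodies.
      import json

      def generate_reply(scenario: str, history: list[str], speaker: str) -> str:
          """Steps 1-2: next line of banter, given lore context and the conversation so far."""
          return f"[{speaker} reacts to: {scenario}]"  # placeholder; call your text model here

      def synthesize(text: str, voice_id: str) -> bytes:
          """Step 4: render one approved line with the character's licensed TTS voice."""
          return text.encode("utf-8")  # placeholder "audio"; call the TTS package here

      def make_banter(scenario: str, speakers: list[str], rounds: int = 6) -> list[dict]:
          history: list[str] = []
          lines = []
          for i in range(rounds):
              speaker = speakers[i % len(speakers)]
              text = generate_reply(scenario, history, speaker)
              history.append(text)
              lines.append({"speaker": speaker, "text": text})
          return lines

      if __name__ == "__main__":
          banter = make_banter("two dock workers grumbling about cargo delays",
                               ["dockhand_a", "dockhand_b"], rounds=6)
          # Step 3: dump for human review before any audio is baked or shipped.
          with open("banter_review.json", "w") as f:
              json.dump(banter, f, indent=2)
          # Step 4 (after review): per-character audio from the approved lines.
          clips = [synthesize(line["text"], voice_id=line["speaker"]) for line in banter]
      ```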

      • sugar_in_your_tea@sh.itjust.works · ↑17 · 2 months ago

        they too can then get more $$$ for less work.

        I’m pretty sure it’ll be less money for less work, at least after the first few titles. Companies really don’t like paying more than they have to.

      • Rekorse@sh.itjust.works · ↑8 ↓2 · 2 months ago

        They won't, because of hallucinations. They could work in mature games, though, where it's expected that whatever the AI says isn't going to break your brain.

        But yeah, if a kid walks up to Toad in the next Mario game and Toad tells Mario to go slap Peach's ass, that game would get pulled really quick.

        • ulterno@lemmy.kde.social · ↑12 ↓1 · 2 months ago

          I just re-read my comment and realised I was not clear enough.
          You bundle the generated text and the AI TTS, not the AI text generator itself.

            • ulterno@lemmy.kde.social · ↑1 · edited · 2 months ago

              The content is… AI-assisted (maybe that's a better way to put it).
              And yes, now you don't need to call the VA back in every time you add a line, as long as the licence for the TTS data holds.

              You'd still want proper VAs for the lead roles, though, or you might end up with empty-feeling dialogue. Even though AI tends to put in inflections and all, from what I have seen it's not good enough to reproduce proper acting.
              Of course, that would mean those who can't do the higher-quality acting [1] will be stuck only making TTS files, instead of getting lead roles.

              But it also means that games which previously couldn't afford voice acting now can add it. Especially useful when someone is doing a one-dev project.

              Even better if there were an open standard format for AI-training-compatible TTS data. That way, a VA could pay a one-time fee to a technician to create that file, then own said file and licence it whichever way they like.


              1. e.g. most English anime dubs. I have seen a few exceptions, but they are few enough to call them exceptions.

              • Rekorse@sh.itjust.works · ↑2 · 2 months ago

                You know, the way these programmers talk about AI, I think they just don't want to have to work with anyone else.

                How is this not taking from voice actors and giving to yourself in that regard? The system you described would mean only the biggest names get paid, all so a developer can avoid learning social skills.

                • ulterno@lemmy.kde.social · ↑1 · 2 months ago

                  You are right. I don’t want to have to socialise just to add a bit of voice to my game characters.
                  If I have to, I’d rather ship without voicing any of them.

                • ulterno@lemmy.kde.social · ↑0 · 2 months ago

                  The system you described would mean only the biggest names get paid

                  Rather, it's more that we as users get a greater variety of background NPC banter for the same game price.

                  Take X4, for instance. The only banter we get is different types of "hello".
                  Only in quests is there any dialogue variety, and what little banter exists outside of quests is mostly incoherent (or was that another game? I need to check again).
                  It doesn't really make sense that two or more people meet in a docking area, say "Hi", "Hello", "Good day to you", and then just keep standing there staring at each other's faces as if they were communicating by telepathy.
                  It would be fun to have conversations that, while clearly never going to yield a quest, still have enough variety to be entertaining when the player stops by to eavesdrop.
                  High-budget studios do put this sort of thing in their games, but at the cost of pretty large file sizes.
                  This way, we could reduce both production and distribution costs.

                  And the VAs don't have to record every new line of banter the story writers come up with, yet the studio still gets their voice on those lines. That increases the value of the licensed TTS package, meaning the VA gets more output delivered than work put in, and gets paid more (well, that last part depends more on market conditions).

                  • Rekorse@sh.itjust.works · ↑1 · 2 months ago

                    As a consumer, I'd rather a real person voice-acted it, or have no voice acting at all. It's petty to put your entertainment above someone's livelihood.

        • cheddar@programming.dev · ↑7 · 2 months ago

          Oh come on, LLMs don't hallucinate 24/7. For that, you have to ask a chatbot about something it wasn't properly trained on. But generating simple text for background chatter? That's safe and easy. The real issue is the amount of resources modern LLMs require. But technologies tend to get better with time.
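
          For a rough sense of scale (a rule-of-thumb estimate, not a measurement): inference memory is roughly the parameter count times bytes per weight, plus some overhead for the KV cache and activations, so a quantized ~7B model is in small-consumer-GPU (or even CPU RAM) territory.

          ```python
          # Back-of-the-envelope memory estimate for running a trained LLM locally.
          # Weights dominate; the 1.2x overhead for KV cache / activations is a rough assumption.
          def est_memory_gb(params_billions: float, bits_per_weight: int, overhead: float = 1.2) -> float:
              weight_bytes = params_billions * 1e9 * bits_per_weight / 8
              return weight_bytes * overhead / 1e9

          for name, params, bits in [("7B @ 4-bit", 7, 4), ("7B @ fp16", 7, 16), ("70B @ 4-bit", 70, 4)]:
              print(f"{name}: ~{est_memory_gb(params, bits):.1f} GB")
          # 7B @ 4-bit: ~4.2 GB   7B @ fp16: ~16.8 GB   70B @ 4-bit: ~42.0 GB
          ```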

          • SynopsisTantilize · ↑1 · 2 months ago

            I still really don't understand what amount of local resources it would take to run a trained LLM.

    • c0ber@lemmy.ml · ↑7 ↓2 · 2 months ago

      AI being used for good may unfortunately have to wait until the destruction of capitalism.

        • c0ber@lemmy.ml · ↑8 · 2 months ago

          It doesn't do well at most of the things it gets shoved into as a buzzword to impress shareholders, but that doesn't mean it's completely useless.

        • borari@lemmy.dbzer0.com · ↑2 · edited · 2 months ago

          I currently use AI, through Nvidia Broadcast, to remove the sound of the literal server rack a few feet away from my XLR mic in my definitely-not-sound-treated room, so people I'm gaming with don't wind up muting me. It also removes the clickety-clack of my blue switches and my mouse clicks, all that shit.

          It's insanely reliable, and honestly a complete godsend. I could muck around with compressors and noise gates to try to cut out the high-pitched server fan whine, but then my voice gets wonky and I've wasted weeks, because I'm not an audio engineer but I am obsessive, and the mic would still be picking up my farts, because why not use an XLR condenser mic to shit-talk in CS?

          Edit: Oh, I also use the virtual speaker that Broadcast creates as the in-game (or Discord or whatever) voice output, and AI removes the same shit from other people's audio. I've heard people complaining about background music from another teammate's open mic while all I hear is their perfectly clear voice. It's like straight-up magic.