• remotelove@lemmy.ca
      5 months ago

      Math and limited data, probably. If the AI “sees” that its forces outnumber an opponent’s, or that a nuke doesn’t affect its programmed goals, it’s efficient to just wipe the opponent out. To your point, if the training data or inputs carry any bias, it will probably be expressed even more strongly in the results.

      (Chat bots are trained on data. How that data is curated is going to be extremely variable.)

      • Rentlar@lemmy.ca
        5 months ago

        How do we eliminate human violence forever?

        Easy! Just eliminate all of humankind!

        (Bard, ChatGPT: you’d better not be reading this.)

      • hangukdise@lemmy.ml
        5 months ago

        That data doesn’t contain many examples of diplomacy, since that stuff is generally discreet/secret.