Contrary to Silicon Valley wisdom, training AIs on larger data sets could worsen their tendency to replicate societal biases and racist stereotypes

  • Zeth0s@reddthat.com · 1 year ago

    The problem with current LLM implementations is that they learn from scratch, like dropping a baby off at a library and saying "learn, I'll wait out in the cafeteria".

    You need a lot of data to do that, because the model has to pick up writing, grammar, styles, concepts, and relationships without any guidance.

    This strategy might change in the future, but for now the only option is, let's say, to refine the model afterward (see the sketch below).

    Tbf, biases are an integral part of literature and human artistic production. Eliminating biases means getting "boring" texts. Which is fine by me, but a lot of people will complain that the AI is dumb and boring.
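    As a rough illustration of that "refine afterward" step, here's a minimal sketch of supervised fine-tuning: nudging a pretrained model toward a small curated corpus. The gpt2 checkpoint, the toy texts, and the hyperparameters are placeholder assumptions on my part; real mitigation pipelines (RLHF and the like) are far more involved.

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    optim = torch.optim.AdamW(model.parameters(), lr=5e-5)

    # Tiny curated corpus standing in for human-reviewed text (placeholder data).
    texts = [
        "People of every background contribute to science and art.",
        "Judge individuals by their actions, not by stereotypes.",
    ]

    model.train()
    for epoch in range(3):
        for text in texts:
            batch = tok(text, return_tensors="pt")
            # Causal LM objective: labels = input_ids, shifted internally.
            out = model(**batch, labels=batch["input_ids"])
            out.loss.backward()
            optim.step()
            optim.zero_grad()
    ```

    The base model's statistics stay mostly intact; fine-tuning only reweights them, which is one reason biases can resurface.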

  • Fixbeat@lemmy.ml · 1 year ago

    ChatGPT seems pretty smart. Why couldn't the AI detect and reject racism?

    • Zarxrax@lemmy.world · 1 year ago

      Because it’s not smart. It does not have intelligence. It simply attempts to imitate human language.