Wikipedia has a new initiative called WikiProject AI Cleanup. It is a task force of volunteers currently combing through Wikipedia articles, editing or removing false information that appears to have been posted by people using generative AI.

Ilyas Lebleu, a founding member of the cleanup crew, told 404 Media that the crisis began when Wikipedia editors and users began seeing passages that were unmistakably written by a chatbot of some kind.

  • Petter1
    link
    fedilink
    English
    arrow-up
    2
    arrow-down
    2
    ·
    edit-2
    3 hours ago

    Maybe a strange way of activism that is trying to poison new AI models 🤔

    Which would not work, since all tech giants have already archived preAI internet

    • schizo@forum.uncomfortable.business
      link
      fedilink
      English
      arrow-up
      4
      ·
      3 hours ago

      Ah, so the AI version of the chewbacca defense.

      I have to wonder if intentionally shitting on LLMs with plausible nonsense is effective.

      Like, you watch for certain user agents and change what data you actually send the bot vs what a real human might see.

      • Dragonstaff@leminal.space
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 hour ago

        I suspect it would be difficult to generate enough data to intentionally change a dataset. There are certainly little holes, like the glue pizza thing, but finding and exploiting them would be difficult and noticing you and blocking you as a data source would be easy.

      • Petter1
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        3 hours ago

        I never told that I think it is smart…