With the rapid advances we’re currently seeing in generative AI, we’re also seeing a lot of concern about large-scale misinformation. Any individual with enough technical knowledge can now spam a forum with lots of organic-looking voices and generate photos to back them up. Has anyone given some thought to how we can combat this? If so, what do you think the solution should or could look like? How do you personally decide whether you’re looking at a trustworthy source of information? Do you think your approach works, or are there still problems with it?

  • Showroom7561@lemmy.ca · 1 year ago

    This is an unfortunate future. Unless something is done fast, the majority of content on the internet will simply be generated content, with bots interacting with other bots.

    Unless we only allow users who verify their identity to participate on certain websites, I can’t see how else this problem could be solved.

    Even then, some bad actors with a verified identity could be generating content using AI and posting it as their own.

    I’m not even sure how anyone will be able to trust or believe any photo, video, or written idea online in the next 5 to 10 years.

    • howrar@lemmy.caOP · 1 year ago

      I think the idea behind a verified identity is that each person only gets one. If that’s the case and you find misinformation from someone, it’s easy to block that one account. It’s not so easy to block thousands of accounts made by the same person.

      I don’t know how you would be able to enforce a one-ID-per-person limit, though. Government identification requires trust in the government and/or in the entity verifying your identity, and it assumes your government issues useful identification in the first place. Phone numbers don’t work because a single person can acquire multiple numbers, many people have none, and numbers get transferred between people.
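
      One common sketch of the one-account-per-ID idea is for the site to store only a keyed hash of the verified identity, so a second signup with the same ID collides and can be rejected, and the raw ID is never kept. The sketch below is a minimal illustration with hypothetical names (`SITE_KEY`, `register`); a real deployment would need far more (e.g. a blind-signature scheme so the verifier can’t link pseudonyms back to people).

      ```python
      import hashlib
      import hmac

      # Hypothetical site-held secret; prevents outsiders from brute-forcing
      # the mapping from known IDs to pseudonyms.
      SITE_KEY = b"example-site-secret"

      def pseudonym(verified_id: str) -> str:
          """Derive a stable pseudonymous key from a verified identity.

          The same person always maps to the same key, so a duplicate
          signup is detectable without storing the identity itself.
          """
          return hmac.new(SITE_KEY, verified_id.encode(), hashlib.sha256).hexdigest()

      registered: set[str] = set()

      def register(verified_id: str) -> bool:
          """Return True only if this identity hasn't registered before."""
          key = pseudonym(verified_id)
          if key in registered:
              return False  # same person attempting a second account
          registered.add(key)
          return True

      print(register("ID-12345"))  # first signup succeeds -> True
      print(register("ID-12345"))  # second attempt collides -> False
      ```

      This only pushes the trust problem upstream, of course: it still assumes some issuer can vouch that each `verified_id` corresponds to exactly one real person, which is the hard part the comment above is pointing at.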