• bitsplease@lemmy.ml · 11 months ago

    I don’t know why people (not saying you, more directed at the top commenter) keep acting like cherry-picking AI images in these studies invalidates the results. Cherry-picking is how you use AI image generation tools; that’s why most will (or can) generate several images at once, so you can pick the best one. If a malicious actor were trying to fool people, of course they’d use the most “real”-looking ones instead of just the first one generated.

    Frankly, the studies would be useless if they didn’t cherry-pick, because the results wouldn’t line up with real-world usage.

    • kase@lemmy.world · 11 months ago

      Tbh I’m more concerned about how they chose the human faces. I can’t explain it, but it feels like they were biased toward choosing ‘fake-looking’ faces, lol

      • bitsplease@lemmy.ml · 11 months ago

        The way it sounds right now is “AI-generated faces don’t have all these artifacts 99% of the time” (I’m paraphrasing a LOT, but you get what I mean).

        The only way it sounds like that is if you don’t read the article at all and draw all your conclusions from the title alone.

        Don’t get me wrong, I’m sure many do just that, but that’s not the fault of the study. The authors clearly state their method for selecting (or “cherry-picking”) images.