• chebra@mstdn.io · 3 months ago

    @cmnybo @marvelous_coyote That’s… not how it works. You wouldn’t see any copyrighted works in the model. We are already pretty sure even the closed models were trained on copyrighted works, based on what they sometimes produce. But even then, the AI companies aren’t denying it. They are just saying it was all “fair use”, relying on a legal loophole, and they might win this. Basically the only way they could be punished on copyright is if the models produce some copyrighted content verbatim.

      • chebra@mstdn.io · 3 months ago

        @ReakDuck Yup, and that’s a much better avenue to fight the AI companies, because verbatim reproduction is fundamentally almost impossible to avoid in ML models. We should stop complaining about how they scraped copyrighted content; that complaint won’t succeed until the legal loophole is removed. But when the models reproduce copyrighted content verbatim, that could be fatal. The same applies to Copilot reproducing GPL code samples, for example.

        • ReakDuck@lemmy.ml · 3 months ago

          Yeah, you just summarized the thoughts I had before ChatGPT came to light.

          Ok, not really. My thought was: could I store an illegally obtained picture in an LLM and later ask it to show it again? Because I never stored it as a file, and LLMs don’t seem to count as storage.

          I could store pictures I would not be allowed to keep otherwise.
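
          To make that idea concrete: this is essentially what memorization in a neural network looks like. Below is a minimal sketch (my own illustration, not anything from the thread) that deliberately overfits a tiny coordinate-based network in PyTorch, so the picture ends up encoded only in the model’s weights rather than as a file, and can be regenerated by querying the model later. All sizes and names here are made up for the example.

          ```python
          # Sketch: "store" an image in a model's weights by overfitting,
          # then regenerate it on demand. Illustrative assumption, not how
          # any particular AI company trains its models.
          import torch
          import torch.nn as nn

          # Stand-in for the picture we want to memorize (random 32x32 grayscale here).
          H, W = 32, 32
          image = torch.rand(H, W)

          # Input: (x, y) pixel coordinates in [0, 1]; output: pixel intensity.
          ys, xs = torch.meshgrid(
              torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij"
          )
          coords = torch.stack([xs.flatten(), ys.flatten()], dim=1)  # (H*W, 2)
          targets = image.flatten().unsqueeze(1)                     # (H*W, 1)

          model = nn.Sequential(
              nn.Linear(2, 256), nn.ReLU(),
              nn.Linear(256, 256), nn.ReLU(),
              nn.Linear(256, 1), nn.Sigmoid(),
          )
          opt = torch.optim.Adam(model.parameters(), lr=1e-3)

          # Deliberately overfit: after training, the weights encode this one image.
          for step in range(2000):
              opt.zero_grad()
              loss = nn.functional.mse_loss(model(coords), targets)
              loss.backward()
              opt.step()

          # Later, "ask it to show the picture again": query every coordinate.
          with torch.no_grad():
              reconstruction = model(coords).reshape(H, W)
          print("mean reconstruction error:", (reconstruction - image).abs().mean().item())
          ```

          The point is not that the weights contain a literal copy of the file; it’s that with enough overfitting, querying the model reproduces the picture closely enough that it was effectively stored, which is exactly the verbatim-reproduction problem discussed above.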