Some of the world’s wealthiest companies, including Apple and Nvidia, are among the many parties that have allegedly trained their AI models on scraped YouTube videos. The transcripts were reportedly collected in ways that violate YouTube’s Terms of Service, and the practice has some creators seeing red. The news was first reported in a joint investigation by Proof News and Wired.

While major AI companies often keep their training data secret, heavyweights like Apple, Nvidia, and Salesforce have revealed their use of “The Pile”, an 800GB training dataset created by EleutherAI, and of the YouTube Subtitles dataset within it. The YouTube Subtitles dataset consists of 173,536 plaintext transcripts scraped from the site, including 12,000+ videos that have been removed from YouTube since the dataset’s creation in 2020.
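
For context on the format involved: The Pile is distributed as JSON Lines records, each with a “text” field and a “meta” tag naming the subset a record came from, so a rough sketch like the one below could tally the YouTube Subtitles transcripts in a local shard. The file path and the “YoutubeSubtitles” label are assumptions about a particular copy, not details confirmed by the investigation.

import json

# Minimal sketch: tally YouTube Subtitles records in one decompressed Pile shard.
# Assumptions: each line is a JSON object with a "text" field and a
# meta["pile_set_name"] tag, and this subset is labeled "YoutubeSubtitles".
SHARD_PATH = "00.jsonl"        # hypothetical path to a decompressed shard
SET_NAME = "YoutubeSubtitles"  # assumed label; verify against your copy

count = 0
chars = 0
with open(SHARD_PATH, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        if record.get("meta", {}).get("pile_set_name") == SET_NAME:
            count += 1
            chars += len(record.get("text", ""))

print(f"{count} YouTube Subtitles transcripts, {chars:,} characters in this shard")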

Affected parties whose work was purportedly scraped for the training data include education channels like Crash Course (1,862 videos) and Philosophy Tube (146 videos), YouTube megastars like MrBeast (two videos) and PewDiePie (337 videos), and TechTubers like Marques Brownlee (seven videos) and Linus Tech Tips (90 videos). Proof News has created a tool you can use to search the full list of YouTube videos allegedly used without consent.

  • ShadowRam@fedia.io

    made out of clips he didn’t have the rights

    See, this is where you’re showing your ignorance of how AI currently functions.

    Yes, it’s possible the AI could go and make shittier videos with its new knowledge. As could the novice plumber in the example I gave.

    But the AI isn’t copying clips of any videos.

    It’s not a repository of the videos, pictures, or words it was exposed to that it just recalls.

    LLMs do not model the world - Sean Carroll

    • subignition@fedia.io

      It generates new content that is based on patterns it has acquired from training data. The fact that you can’t readily trace/attribute output to specific parts of training data does not make it permissible for a human to cause the LLM to train on that data without permission of the rights holder, or in violation of the content provider’s ToS.

      I fear you are getting stuck nitpicking my analogy, which was a bit simplified.

      • ShadowRam@fedia.io

        does not make it permissible for a human to cause the LLM to train on that data without permission of the rights holder

        Says who? These videos are out there for people (or things) to see.

        If someone was playing some videos to train their dog to respond to a noise, what business is that of the rights holder?

        Show me where in the ToS from over a year ago it says you’re not allowed to train an AI on the video.

        Rights holders can’t control what people use the video for. They can control when and how it’s delivered, but not who’s actually watching it.

        • subignition@fedia.io

          Says who? These videos are out there for people (or things) to see.

          What an awful troll you are. You conveniently didn’t quote the remainder of the sentence so you could try to nitpick a part of my response out of context.

          Read the “Permissions and Restrictions” section of the YouTube terms of service.