A new paper suggests diminishing returns from larger and larger generative AI models. Dr Mike Pound discusses.

The Paper (No “Zero-Shot” Without Exponential Data): https://arxiv.org/abs/2404.04125

    • just another dev@lemmy.my-box.dev

      On the other hand, if we move from larger and larger models trained on as much data as they can gather to smaller models trained on less generic, high-quality domain-specific datasets, I have a feeling there’s still a lot to gain. But quality over quantity takes a lot more effort to maintain.

    • magic_lobster_party@kbin.run

      The video is more about the diminishing returns from increasing the size of the training set. Accuracy follows a logarithmic curve: at some point, just “adding more data” won’t do much, because the cost will be too high compared to the gain in accuracy.
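
      A minimal sketch of what that logarithmic scaling implies, assuming a hypothetical curve where accuracy grows with the log of the number of training examples (the constants here are made up for illustration, not taken from the paper):

      ```python
      import math

      # Hypothetical scaling curve (an assumption for illustration):
      # accuracy grows logarithmically with the number of training examples n.
      def accuracy(n, a=0.05, b=0.0):
          return a * math.log10(n) + b

      # Every 10x increase in data buys the same absolute accuracy gain,
      # so the marginal gain per additional example keeps shrinking.
      for n in (10**4, 10**5, 10**6, 10**7):
          gain = accuracy(n * 10) - accuracy(n)
          print(f"n={n:>8}: accuracy={accuracy(n):.3f}, gain from 10x more data={gain:.3f}")
      ```

      Under this curve, going from 10k to 100k examples and going from 1M to 10M examples each yield the same accuracy bump, but the second step costs roughly 100x as much data.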