• Iunnrais · 11 hours ago

    Just let anyone scrape it all for any reason. It’s science. Let it be free.

    • chicken@lemmy.dbzer0.com · 4 hours ago

      The OP tweet seems to be leaning pretty hard on the “AI bad” sentiment. If LLMs make academic knowledge more accessible to people, that’s a good thing, for the same reason that what Aaron Swartz was doing was a good thing.

      • Ashelyn@lemmy.blahaj.zone · 2 hours ago (edited)

        On the whole, maybe LLMs do make these subjects more accessible in a way that’s a net positive, but there are a lot of monied interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased, and trustworthy to people.

        The problem is that LLMs ‘hallucinate’ details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM cannot be held accountable in the same way, as it’s essentially a complex statistical prediction algorithm. Non-savvy users can easily be fed misinfo straight from the tap, and bad actors can easily generate correct-sounding misinformation to deliberately try and sway others.

        ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value and believe it to be correct because it sounds correct. This is bad, and part of the issue is marketing these models as though they’re intelligent. They’re very good at generating plausible responses, but this should never be construed as them being good at generating correct ones.

        • chicken@lemmy.dbzer0.com · 44 minutes ago (edited)

          Ok, but I would say that these concerns are all small potatoes compared to the potential for the general public gaining the ability to query a system with synthesized expert knowledge obtained from scraping all academically relevant documents. If you’re wondering about something and don’t know what you don’t know, or have no idea where to start looking to learn what you want to know, an LLM is an incredible resource even with caveats and limitations.

          Of course, it would be better if it could also directly reference and provide the copyrighted/paywalled sources it draws its information from at runtime, in the interest of verifiably accurate information. Fortunately, local models are becoming increasingly powerful and easier to work with, so the legal barriers to such a thing existing might not be able to stop it for long in practice.

      • Natanox@discuss.tchncs.de · 11 hours ago

        It’s a US “non-profit”. One that demands $19 per article which they merely aggregate; they don’t own shit.

        Utterly absurd.

        • sunzu2@thebrainbin.org · 11 hours ago

          Non-profit here merely means they’re exempt from US income taxes, so they’re grifting even harder on us.

          MIT is grifting in a similar but bigger manner.

        • Flocklesscrow · 9 hours ago

          Which means they’re adding profit margin to the otherwise zero marginal cost of said information good.