Something I’ve found disappointing in the “AI conversation” around me …

… there hasn’t been enough honest introspection about how this whole thing feels and likely will feel.

Like, there’s something disturbing in AI’s first “success” being “art” and “music”.

There’s something disturbing about how we were never going to be able to help ourselves & are compelled to make things like LLMs, but can still be frightened by their implications.

The anger-vs-hype framing leaves all of that out.

@casualconversation

  • tal@lemmy.today · 18 days ago

    I mean, it’s a little arbitrary to say “AI begins here”. These systems are all trained off datasets, but I would say that, for example, OCR is very successful, has been around for a while, and definitely uses machine learning.

    Ditto for speech recognition.

    On the other hand, none of these are capable of generalized problem-solving either, i.e. AGI, which is really the sort of thing I’m usually thinking of as significant.

  • Ephera@lemmy.ml · 17 days ago

    I feel like people don’t discuss it much, because it would mostly amount to “Capitalism, amiright?”. It’s not surprising that companies have no consideration for the consequences of their actions beyond how they affect shareholder value.

    I’m certainly not fond of artistic careers not being viable anymore, especially since LLMs hardly create new jobs to catch these people.

    Also really not a fan of all the spam that killed internet search, nor the climate impact. For a moment, I felt like we had the IT industry back on track, after cryptomining folded. Nope, here’s a way to burn tons of energy for you to generate some text or image that is unlikely to contribute much to anything.

    At the same time, there are of course some opportunities there. Mozilla is generating alt texts for images, so that visually impaired folks have a description to go off of. Like, that feels worth it.
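    For a sense of what that looks like under the hood, here’s a rough sketch using an openly available image-captioning model from Hugging Face (the model name and file path are just illustrative, not Mozilla’s actual setup):

    ```python
    # Hedged sketch: caption an image locally; the caption can serve as alt text.
    # Mozilla's in-browser feature uses its own local model; this only shows the
    # general technique with an openly available captioning model.
    from transformers import pipeline

    captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

    result = captioner("photo.jpg")  # hypothetical local image file
    print(result[0]["generated_text"])  # a short description of the scene
    ```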

    If we switch to 100% renewable energy and solve the unemployment problems (which I’m not holding my breath for), then I’d also be on board with having some fun with it. Then we could build video games with tons of text in them, voiced by some AI.

    Like, that’s definitely where feeling comes in. If the ethics of it are garbage, then I can’t get excited about dicking around with it. I know many people ignore the ethics, but that’s just weird to me.

  • chicken@lemmy.dbzer0.com · 17 days ago

    Personally, I am hopeful about it. For a long time, technology has been a factor in a bigger slow-motion collapse of the economy’s ability to sustain the lives of regular people, and the recent breakthroughs in machine learning do contribute to that collapse. But it’s not totally monopolized: local models are becoming viable and a lot of the work is open source, so the power of this stuff goes to anyone who wants it and has an idea of what they’d want to do with it. Massive change is inevitable; the question is just what direction it goes in.
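    For what it’s worth, here’s a minimal sketch of what running a local model can look like, assuming the Hugging Face transformers library and a small open-weights model (model name and prompt are illustrative only):

    ```python
    # Minimal sketch: run a small open-weights language model entirely on your
    # own machine. Any open model that fits in memory works the same way.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # small enough for consumer hardware
    )

    prompt = "In one sentence, why do open-weights models matter?"
    print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
    ```

    Nothing leaves your machine, and swapping in a different open model is a one-line change.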

  • There’s something disturbing about how we were never going to be able to help ourselves & are compelled to make things like LLMs, but can still be frightened by their implications.

    Don’t be frightened by LLMs. LLMs are the latest bitcoin-like fiasco—the latest in a long, proud tradition of bullshit technologies sold by bullshit human beings. It’s crap technology with no real use case. On the surface, if you just casually look them over, they look impressive, but the more you look into them, the less impressive and useful they seem and the more hollow the promises of the people saying “BUT THE NEXT GENERATION WILL BE AWESOME” sound.

    Bitcoin/blockchain/yaddayaddayadda seemed unstoppable … until it became an obvious pile of scams, grifts, and failure. LLMs are following the same trajectory at a speed-walk.

    • Thorny_Insight · 17 days ago

      While LLMs might not be the path to AGI (though they might be), there’s still quite a difference between GPT-2 and GPT-4. What’s GPT-5 going to be like? Or GPT-8? No one can know how good it can get, even if it’s just faking intelligence. At least one thing is for sure: the current version is the worst it’ll ever be.