• ArchRecord

    And when traditional AI programs can be run on much lower-end hardware at the same speed and quality, those chips will have no use. (Spoiler alert: it’s happening right now.)

    Corporations, for some reason, can’t fathom why people wouldn’t want to pay hundreds of dollars more just for a chip that can run AI models they won’t need most of the time.

    If I want to use an AI model, I will, but if you keep developing shitty features nobody wants just because “AI = new & innovative,” then I have no incentive to use them. LLMs are useful to me sometimes, but an LLM that tries to summarize the activity on my computer isn’t that useful, so I’m not going to pay extra for a chip I won’t even use for that purpose.