• Rikudou_Sage@lemmings.world · 10 months ago

    Someone in the Hacker News discussion:

    It’s getting started. Serious use cases never have the glamour of hype. But I am starting to see generative AI cover more and more ground into business utility.

    I fully agree. For example, the company I work at is just getting started on some serious generative AI features. This was just the first wave, where everyone was fascinated by the seemingly human-like responses from a machine; now we’re past that, and serious use cases are emerging.

  • Lvxferre@lemmy.ml · 10 months ago

    Yeah, pretty much.

    I was going to make an elaborate analogy between LLMs and taxidermy, but I think a bunch of short, direct sentences will do a better job.

    LLMs do not replicate human Language¹. Humans don’t simply chain a bunch of words²; we refer to concepts and use words to convey those concepts. It’s silly to dismiss incorrect output as “just hallucination” and assume it’ll be fixed later, when it’s a symptom of internal issues.
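    To make the “chaining” point concrete, here’s a minimal toy sketch in Python. Everything in it is invented for illustration (the vocabulary, the probabilities, and the `NEXT_TOKEN_PROBS` table come from no real model); a real LLM conditions on the entire preceding context with billions of parameters, but the generation loop has the same shape: pick the next token from a learned distribution, with nothing in the loop that refers to a concept.

    ```python
    import random

    # Toy conditional distribution P(next token | previous token).
    # Invented for illustration - a real LLM learns a far richer
    # distribution conditioned on the whole preceding context.
    NEXT_TOKEN_PROBS = {
        "<start>": {"the": 0.6, "a": 0.4},
        "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
        "a": {"cat": 0.4, "dog": 0.6},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"sat": 0.4, "ran": 0.6},
        "moon": {"sat": 1.0},
        "sat": {"<end>": 1.0},
        "ran": {"<end>": 1.0},
    }

    def generate(max_tokens=10):
        """Chain tokens by sampling - pure statistics, no concepts anywhere."""
        token, output = "<start>", []
        for _ in range(max_tokens):
            probs = NEXT_TOKEN_PROBS[token]
            token = random.choices(list(probs), weights=list(probs.values()))[0]
            if token == "<end>":
                break
            output.append(token)
        return " ".join(output)

    print(generate())  # e.g. "the moon sat" - fluent-looking, but nothing here "knows" what a moon is
    ```

    When a chain like that lands on a false statement, nothing went wrong mechanically; the sampling worked exactly as designed. That’s why I see “hallucination” as a window into how these systems work, not a bug to be patched out.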

    So at the start, people got excited and saw a few potential uses for the underlying tech. Then you got overexcited morons³ hyping the whole thing up. Now we’re in the rebound, where plenty of people roll their eyes and move on. Later on, I expect at least two things to happen:

    • People will be in a better position to judge the usefulness of LLMs.
    • Text generation will move on to better technologies.

    1. When I say “Language” with a capital “L”, I’m referring to the human faculty that is used by languages (minuscule “l”) like Kikongo, English, Mandarin, Javanese, Arabic, etc.
    2. I’m not going into that “what’s a word” discussion here.
    3. My bad, I’m supposed to call them by a euphemism - “early adopters”.
  • AutoTL;DR@lemmings.world [bot] · 10 months ago

    This is the best summary I could come up with:


    Silicon Valley salivated over the prospect of a transformative new technology, one it could make a lot of money from after years of stagnation and the flops of crypto and the metaverse.

    That partnership led to Microsoft’s big February announcement about how it was incorporating a custom chatbot built with OpenAI’s large language model (LLM) — this is also what powers ChatGPT — into Bing, its web search engine.

    Meta, not to be outdone and possibly still smarting from its disastrous metaverse pivot, released not one but two open source(ish) versions of its large language model.

    According to Statcounter, Microsoft’s web browser, Edge, which consumers had to use to access Bing Chat, did get a user bump, but it barely moved the needle and has already started to recede, while Chrome’s market share increased during that time.

    We now have myriad examples of chatbots going off the rails, from getting really personal with a user to spouting off complete inaccuracies as truth to containing the inherent biases that seem to permeate all of tech.

    Last week, eight companies behind LLMs, including OpenAI, Google, and Meta, took their models to DEF CON, a massive hacker convention, to have as many people as possible test them for accuracy and safety in a first-of-its-kind stress test, a process called “red teaming.” The Biden administration, which has been making a lot of noise about the importance of AI technology being developed and deployed safely, supported and promoted the event.


    The original article contains 1,428 words; the summary contains 250 words. Saved 82%. I’m a bot and I’m open source!