You know how Google’s new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to make sure the cheese won’t slide off (pssst… please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of AI large language models (LLMs), the technology that drives AI Overviews, and this feature “is still an unsolved problem.”

  • Microw · 6 months ago

    IMO these issues are mainly with the interface / how the AI summaries are presented.

    The issue with incorrect answers like the glue-on-pizza one isn’t “hallucination”. The LLM is pulling that info from an existing webpage (The Onion). The thing they need to change is how that info is portrayed: not “one tip is to use glue”, but rather “the satirical site The Onion says to use glue”.

    Hallucination should be combated by the fact that the AI can’t show a proper source for facts it made up itself.
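
    A toy sketch of that idea (purely illustrative, not how AI Overviews actually works): keep the source attached to every retrieved claim, label satire as satire, and refuse to surface anything that has no source a user can check.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RetrievedClaim:
        text: str                   # the claim pulled from a web page or comment
        source: Optional[str]       # where it came from, if known
        satire: bool = False        # whether the source is a satirical site

    def present(claims):
        """Return only claims that carry a checkable source, with attribution."""
        lines = []
        for c in claims:
            if c.source is None:
                continue  # no source to show -> treat as a likely hallucination, drop it
            label = f"the satirical site {c.source}" if c.satire else c.source
            lines.append(f"{label} says: {c.text}")
        return lines

    # Example: the glue tip gets attributed, the satire gets flagged,
    # and the sourceless claim is never shown.
    claims = [
        RetrievedClaim("add glue so the cheese won't slide off", "fucksmith on reddit"),
        RetrievedClaim("eat one small rock per day", "The Onion", satire=True),
        RetrievedClaim("cheese melts best at exactly 90 degrees", None),
    ]
    for line in present(claims):
        print(line)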

    • Iheartcheese@lemmy.world · 6 months ago

      Eating rocks came from The Onion. Putting glue on pizza was one random-ass comment from over a decade ago on Reddit by a dude named fucksmith.

      • Microw · 6 months ago

        My bad. Doesn’t change what I mean though: the AI should not say “it’s also great to put glue on the pizza” - it should either not reference that at all or say “fucksmith on Reddit recommends glue on pizza”.

    • Naatan@lemmy.world · 6 months ago

      I think you nailed it. That’s exactly why I want more of this type of conversation. Before we can innovate, we have to acknowledge the limitations of the technology.