Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’

Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

  • Dark Arc@lemmy.world · 1 year ago

    Sure it can, “print hello world in C++”

    #include <iostream>
    
    int main() {
      std::cout << "hello world\n";
      return 0;
    }
    

    “print d ft just rd go t in C++”

    #include <iostream>
    
    int main() {
      std::cout << "d ft just rd go t\n";
      return 0;
    }
    

    The latter is a “novel program” it has never seen before, but it can produce it because it has seen the pattern “print X” and learned where the X goes. That doesn’t mean it understands what it just did; it has just been trained on millions (?) of such patterns.

    • Serdan · 1 year ago

      A human would give you the same solution for the same reason. No dev would deeply ponder the meaning of “cout” if told to print something. It’s so simple it’s almost muscle memory.

      Hell, there are probably non-NN autocomplete systems that could successfully do that.
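
      A minimal sketch of what such a non-NN “autocomplete” could look like: pure string substitution into a fixed template, with no model involved. The function name here is hypothetical, not any real tool’s API.

      ```cpp
      #include <iostream>
      #include <string>

      // Hypothetical template-based code generator: given the X from a
      // "print X in C++" request, it splices X into a canned hello-world
      // skeleton. No neural network, no understanding -- just substitution.
      std::string generate_print_program(const std::string& text) {
          return "#include <iostream>\n"
                 "\n"
                 "int main() {\n"
                 "  std::cout << \"" + text + "\\n\";\n"
                 "  return 0;\n"
                 "}\n";
      }

      int main() {
          // Works equally well for "hello world" and for gibberish.
          std::cout << generate_print_program("d ft just rd go t");
          return 0;
      }
      ```

      The point being: producing a “novel” print program from a pattern is trivial; it’s the conversational, iterative part that sets GPT apart.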

      GPT can do more than that, though. You can have a conversation with it about what you’re trying to achieve and what the requirements should be, then tell it to write code on the basis of that natural-language conversation. You can then discuss the code with it, make suggestions, or ask it for suggestions of its own.

      People who claim that it’s “just” looking up the answer in its training data seriously have no idea what they’re talking about.