• Lvxferre@lemmy.ml · edit-2 · 7 months ago

    Important detail: a language (like English, or Libras, or written Chinese) is a system that conveys general meaning, among other things. Without that, we can’t really claim that something is a language.

    This has the following consequences:

    • It’s at least possible that a hypothetical Q* model did reach an intelligence breakthrough, by handling abstract units of meaning (concepts) instead of “raw” tokens. Frankly, though, this whole story is looking more and more like OpenAI employees undergoing mass hysteria than like anything real.
    • There is some overlap between language and maths when it comes to logic. However, maths is not really a language; sharing one feature doesn’t make them the same thing. It’s like saying that dogs are cats because dogs have fur, you know?
    • LLMs don’t really speak. They’re great at replicating grammatical patterns but, as shown by their hallucinations (which people often cherry-pick out), they don’t handle meaning.

    For reference on the third point, give this a check. I have further examples highlighting that LLMs don’t understand what they’re uttering, if you want.
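
    A toy way to see the third point (my own sketch, nothing to do with any actual LLM’s internals): a tiny context-free grammar can emit flawlessly grammatical English sentences with zero regard for meaning, in the spirit of Chomsky’s “colourless green ideas sleep furiously”. Grammaticality and meaning are separate things; the grammar below only ever handles the former.

    ```python
    import random

    # Hypothetical toy grammar for illustration only. Keys are nonterminals;
    # values are lists of possible expansions. Anything not a key is a word.
    GRAMMAR = {
        "S":   [["NP", "VP"]],
        "NP":  [["Det", "Adj", "N"]],
        "VP":  [["V", "Adv"]],
        "Det": [["colourless"], ["the"]],
        "Adj": [["green"], ["loud"]],
        "N":   [["ideas"], ["stones"]],
        "V":   [["sleep"], ["argue"]],
        "Adv": [["furiously"], ["quietly"]],
    }

    def generate(symbol="S"):
        """Expand a symbol into a list of words, picking rules at random."""
        if symbol not in GRAMMAR:      # terminal: an actual word
            return [symbol]
        words = []
        for sym in random.choice(GRAMMAR[symbol]):
            words.extend(generate(sym))
        return words

    # Every output is grammatical; most are meaningless, and the grammar
    # has no way to know or care which is which.
    print(" ".join(generate()))
    ```

    That’s the gap in a nutshell: producing well-formed strings is a far weaker ability than conveying (or checking) meaning, and hallucinations are what the gap looks like from outside.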