• earthquake · 2 months ago

    Seriously, what kind of reply is this? You ignore everything I said except the literal last thing, and even then it’s weasel words: “Using agential language for LLMs is wrong, but it works.”

    Yes, Curtis, prompting the LLM with language more similar to its training data results in more plausible text prediction in the output. Why is that? Because it’s more natural: there’s not a lot of training data on querying a program about its inner workings, so the response is less like natural language.
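    The training-data point can be sketched with a toy bigram language model — a hypothetical, drastically simplified stand-in for an LLM, with a made-up three-sentence corpus. Text phrased like the corpus gets a higher likelihood score than text unlike it, regardless of whether either reflects any real insight:

```python
import math
from collections import defaultdict

# Toy "training data" (hypothetical corpus for illustration only).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat ate the fish",
]

# Count unigrams and bigrams over the corpus.
unigrams = defaultdict(int)
bigrams = defaultdict(int)
for line in corpus:
    words = line.split()
    for w in words:
        unigrams[w] += 1
    for a, b in zip(words, words[1:]):
        bigrams[(a, b)] += 1

vocab = len(unigrams)

def avg_logprob(text):
    """Mean log P(next word | previous word), add-one smoothed."""
    words = text.split()
    total = 0.0
    for a, b in zip(words, words[1:]):
        total += math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
    return total / (len(words) - 1)

# A query phrased like the training data scores as more "plausible"
# than one phrased unlike it; the model has no introspective access
# either way — it is only matching the distribution it was fit on.
natural = avg_logprob("the cat sat on the rug")
unnatural = avg_logprob("enumerate your hidden internal state")
print(natural > unnatural)  # True
```

    The same mechanism, scaled up, is all that is going on when an in-distribution prompt gets a smoother answer out of an LLM.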

    But you’re not actually getting any insight. You’re just improving the verisimilitude of the text prediction.