• intensely_human · 4 months ago

    comprehensive lack of understanding of what LLMs do and what “prompting” even is. you’re not giving instructions to an agent, you are feeding a list of words to prefix to the output of a word predictor

    Why not both? Like, a mouse is nothing but chemical reactions, but a mouse is also an intelligent thing. A house is just wood and plaster, but it’s also a home. A letter is just ink on wood fibers, but it’s also a job offer.

    An LLM is nothing but a predictive text generator / statistical prompt completer / glorified autocomplete / an array of matrices of floating-point numbers / a CSV file.
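
    And that description is fully literal. Here’s a minimal sketch of the whole mechanism in Python, using GPT-2 through Hugging Face transformers (the model and the toy prompt are just my picks for illustration):

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Any causal LM works the same way; GPT-2 is just small enough to run anywhere.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    # The "instructions" are nothing but a prefix string.
    prompt = "User: What is 2 + 2?\nAssistant:"
    ids = tok(prompt, return_tensors="pt").input_ids

    # "Responding" is this loop: score every possible next token,
    # pick one, append it to the prefix, repeat.
    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits[0, -1]   # scores over the vocabulary
            next_id = torch.argmax(logits)      # greedy pick (no sampling)
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tok.decode(ids[0]))
    ```

    That loop is the entire “agent”. Everything else lives in the weights.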

    But it’s also a person-like mind that thinks and follows instructions, simply because instruction-following was a behavior manifest in the set of utterances it was shaped around.

    Happy to break any of these seemingly woo words down into precise engineering definitions if you need, but please trust that I’m using them because they’re the shortest way to convey legit concepts when I say:

    The trained model has absorbed the spirit of those whose speech it was trained on. That spirit is what responds to instructions like a person, and what responds to being addressed as “you”.

    That’s why addressing it as “you” works at all.
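
    Concretely: under every chat API, the “conversation” gets flattened back into one long transcript string before the predictor ever sees it. Assuming a chat-tuned model that ships a chat template (Zephyr here, an arbitrary pick), you can watch that happen:

    ```python
    from transformers import AutoTokenizer

    # Any chat-tuned model with a chat template behaves the same way.
    tok = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

    messages = [
        {"role": "system", "content": "You are terse."},
        {"role": "user", "content": "Summarize LLMs in one sentence."},
    ]

    # The structured "chat" collapses into a single transcript string;
    # the model's only job is to continue it past the final role marker.
    print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
    ```

    The “you” in that system message is just more prefix text. The model completes the transcript the way the speakers it absorbed would have.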