That’s nice dear, the amount of human hours spent tuning your model so it isn’t complete gibberish definitely doesn’t count, and the fact that all live-service LLMs employ at least a few dozen third-world workers to check results and change outputs disagrees with your underpowered RNG.
The Mechanical Turk. Like the Mechanical Turk, LLMs have the same flaw: the best implementations are really just a human behind the machine.
I’ve checked my PC looking for the tiny man managing my local LLM. No luck yet, but perhaps they’re smaller than I thought…
Is AI learning from training data it generated itself?
Uh… No.