Not with this framing. By adopting the first- and second-person pronouns immediately, the simulation is collapsed into a simple Turing-test scenario, and the computer’s only personality objective (in terms of what was optimized during RLHF) is to excel at that Turing test. The given personalities are all roles performed by a single underlying actor.
As the saying goes, the best evidence for the shape-rotator/wordcel dichotomy is that techbros are terrible at words.
The way to fix this is to embed the entire conversation into the simulation with third-person framing, as if it were a story, log, or transcript. This means that a personality would be simulated not by an actor in a Turing test, but directly by the token-predictor. In terms of narrative, it means strictly defining and enforcing a fourth wall. We can see elements of this in the fine-tuning of many GPTs for RAG or conversation, but such fine-tuning only defines formatted acting rather than personality simulation.
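To make the contrast concrete, here is a rough sketch in Python of the two framings. Everything in it is made up for illustration: the persona name, the dialogue, and the prompt wording are assumptions, not any vendor's actual prompt format.

```python
# Sketch: first/second-person ("Turing test") framing vs. third-person
# transcript framing. The persona and dialogue are invented; the strings
# are illustrative only.

def turing_test_prompt(user_msg: str) -> str:
    # The model is addressed directly as "you", so it plays a single
    # assistant-actor wearing the requested personality as a role.
    return (
        "You are Margaret, a dry-witted archivist. "
        "Answer the user in character.\n"
        f"User: {user_msg}\nMargaret:"
    )

def transcript_prompt(user_msg: str) -> str:
    # The whole exchange is framed as a document the model merely continues,
    # so the personality comes out of plain next-token prediction over the
    # transcript rather than from an addressed actor behind a broken fourth wall.
    return (
        "The following is a transcript of a conversation between a visitor "
        "and Margaret, a dry-witted archivist.\n\n"
        f"Visitor: {user_msg}\n"
        "Margaret:"
    )

if __name__ == "__main__":
    msg = "Where would I find the 1962 shipping ledgers?"
    print(turing_test_prompt(msg))
    print("---")
    print(transcript_prompt(msg))
```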
I tell mine to use a particular tone: the tone of a five-minute daily briefing between an executive assistant and her boss, where the two have worked together for over ten years. (A rough sketch of that kind of instruction is below.)
I haven’t experimented with other tones, but the concept above seems to work well for the overall thing I expect from a chatbot like that: I consider it a helper and research mole and tutor.
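For what it's worth, here is roughly what that sort of instruction might look like when passed as a system message in the generic chat role/content format. The exact wording is my paraphrase of the idea above, not the verbatim text, and the request in the example is invented.

```python
# A minimal sketch of the tone instruction described above, expressed as a
# system message in the common chat-message format. Wording is paraphrased.

TONE_INSTRUCTION = (
    "Use the tone of a five-minute daily briefing between an executive "
    "assistant and her boss of ten years: brisk, familiar, no filler, "
    "assume shared context, and flag only what needs a decision."
)

def build_messages(user_msg: str) -> list[dict]:
    # The system message carries the tone; the user message is the actual request.
    return [
        {"role": "system", "content": TONE_INSTRUCTION},
        {"role": "user", "content": user_msg},
    ]

if __name__ == "__main__":
    for m in build_messages("Summarize what changed in the project overnight."):
        print(f"{m['role']}: {m['content']}")
```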
Hah, still worked for me. I enjoy the peek at how they structure the original prompt. Wonder if there’s a way to define a personality.
Considering how Altman is, I don’t think they’ve cracked that problem yet.