Not really; it’s super fucking expensive to train one of these, so online training simply wouldn’t be economically feasible.
Even if it were, the models don’t really have any agency. You prompt, they respond. There’s not much prompting going on from the model’s side, and if there were, you could choose not to respond, which the model can’t really do.
We know how they work; otherwise we couldn’t design and implement them. What we don’t really know, and don’t really need to know, is the exact parameter values the model ends up with after training.
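To make that concrete, here’s a minimal sketch in plain PyTorch (not any particular production model, just a standard building block with made-up dimensions): the architecture is ordinary code we write and fully understand; the only “unknown” part is the numbers the weights land on after training.

```python
import torch.nn as nn

# The architecture is fully specified, ordinary code: one standard
# transformer encoder block with example dimensions.
block = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)

# Every weight tensor is known by name and shape before training even starts...
for name, p in block.named_parameters():
    print(name, tuple(p.shape))

# ...but the values those tensors hold after training are only obtained by
# running gradient descent; we can't write them down by hand or read meaning
# off any individual number.
```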
The issue you’re thinking of is that no single parameter necessarily maps to a single concept; it’s the coherent collection of all of them that makes the whole thing work. Some interesting insights can be gleaned from trying to figure out these relationships, but with the sheer number of parameters (billions!) it gets a little much to get your head around.
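Just to show the scale, here’s a rough sketch using the Hugging Face transformers library with GPT-2 as a (comparatively tiny) stand-in; current large models have orders of magnitude more parameters than this.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("gpt2")  # GPT-2 small, a small older model

# Count every learned parameter in the model.
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")  # roughly 124 million for GPT-2 small

# Any single weight is just a float; looked at on its own it maps to nothing.
w = model.h[0].attn.c_attn.weight  # first block's attention projection weights
print(w[0, :5])                    # five numbers with no individual meaning
```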