We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.
Whether or not it was acting like a human (and whether or not it was designed to), it still cheated and deceived its user. Given the potential power, influence, and widespread adoption this technology could have, shouldn't we be concerned about that? At the very least, isn't this a poorly built tool that isn't ready for general availability?
My dog isn't intentionally being a prick when he eats my sandwich off the table before I can get to it, but it's still a behavior I condemn and would want to train out of him before taking him to other people's houses.