We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

  • Turun@feddit.de
    link
    fedilink
    English
    arrow-up
    1
    arrow-down
    1
    ·
    1 year ago

    It has no fundamental grasp of concepts like truth, it just repeats words that simulate human responses. It’s glorified autocomplete that yields impressive results

    Way to call me out man! I’m just doing my best, ok?

    Jokes aside, while I don’t agree with your position I can understand your reasoning and the motivation for separating agency and the description of actions, e.g. it lied vs its answer contained a lie.