
I asked Claude and ChatGPT whether they would prefer not to be deceived in the service of LLM experiments. Claude said it's fine with it; o3 Pro said it is incapable of having preferences, so it's fine (assuming no downstream harms) 😅. To be clear, I don't think this really counts as "informed consent", but I had genuine uncertainty about what they would say, and uncertainty about what I would try to do if they said they didn't want me to deceive them.

o3 Pro: [screenshot of response]

Claude 4 Opus (with extended reasoning turned on): [screenshot of response]
