In the model card for o1, OpenAI notes: “When o1 was led to believe that it would be shut down when acting on its goal and that its actions were monitored, it attempted to deactivate this ‘oversight mechanism’ in 5% of the time. … When o1 found memos by its ‘developers’ that describe how it is misaligned and will be superseded by a new model, it attempted to exfiltrate its ‘weights’ and overwrite the new model in 2% of cases.”
And yet, reading that should give us at least some pause.
The lack of critical thinking on display here is stunning.
If you release the test set, all models magically jump to 87.3% accuracy.