LLMs Often Know When They’re Being Evaluated: “Nobody has a good plan for what to do when the models constantly say ‘This is an eval testing for X. Let’s say what the developers want to hear.’”
Paper: https://www.arxiv.org/abs/2505.23836 submitted by /u/MetaKnowing