Adversarial Poetry: The New Frontier in AI Jailbreaking
Researchers demonstrate that poetic language structures can jailbreak large language models at a 62% success rate, revealing a systemic vulnerability that spans model families and safety-training methods.