BANANDRE
NO ONE CARES ABOUT CODE

Tagged with #llm-security

2 articles found

Adversarial Poetry: The New Frontier in AI Jailbreaking

Researchers demonstrate that poetic language structures can successfully jailbreak large language models with a 62% success rate, revealing a systemic vulnerability across model families and safety training methods.

#adversarial-attacks #ai-safety #jailbreaking...

The ‘Sure’ Trap: How a Single Word Creates a Stealthy LLM Backdoor

A new LLM backdoor technique uses the word ‘Sure’ as a trigger, creating a compliance-only attack that requires no malicious training data and bypasses conventional safety measures.

#ai-alignment #backdoor-attacks #data-poisoning...