| title |
From Polyglots to Prompt Injections: Parsing Is Still Execution (And Your LLM Didn't Get the Memo) |
| date |
2025-10 |
| authors |
|
| conference |
|
| resources |
|
This presentation examines the security implications of parsing in modern AI systems, drawing parallels between traditional polyglot file attacks and prompt injection vulnerabilities in Large Language Models (LLMs). The talk connects decades of language-theoretic security (LangSec) research to emerging threats in AI systems, arguing that the fundamental challenges of parsing and input validation persist even as the systems themselves evolve.