Prompt Injection
An attack where external content hijacks an agent into acting against its instructions.
Why it matters
When agents process external data (web pages, emails, documents), malicious content embedded in that data can override the agent's instructions. It is analogous to SQL injection: untrusted input gets interpreted as instructions rather than data.
In practice
We protect with input sanitization on all routes, content filtering in Chat Agent, and Claude Code's built-in detection.
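As a minimal sketch of what content filtering can look like, the snippet below flags common injection phrases and wraps untrusted text in delimiters so the model can treat it as data rather than instructions. The patterns, function names, and delimiter tags are illustrative assumptions, not the actual filters used in Chat Agent or Claude Code.

```python
import re

# Hypothetical patterns for illustration; production filters are
# broader and typically combined with model-based detection.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]


def flag_injection(text: str) -> bool:
    """Return True if the text matches a known injection pattern."""
    return any(p.search(text) for p in INJECTION_PATTERNS)


def sanitize_external_content(text: str) -> str:
    """Reject suspicious content, then wrap the rest in delimiters
    so downstream prompts can mark it as untrusted data."""
    if flag_injection(text):
        raise ValueError("possible prompt injection detected")
    return f"<external_content>\n{text}\n</external_content>"
```

Pattern matching alone is easy to evade (paraphrasing, encoding tricks), which is why it is layered with guardrails and built-in detection rather than relied on by itself.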
Related terms
Guardrails
Rules and constraints that prevent an agent from taking harmful or unauthorized actions.
ISO 27001
International information security standard defining requirements for an information security management system.
Hallucination
AI-generated information that sounds plausible but is factually incorrect.