Council LogoCouncil
AI Glossary

What is Prompt Injection?

Security attack where malicious instructions are hidden in AI input.

By Council Research TeamUpdated: Jan 27, 2026

Definition

Prompt injection occurs when attackers embed instructions in data that AI processes, potentially overriding system prompts or revealing sensitive information. It's a key security concern for AI applications.

Examples

1Hidden instructions in documents
2Manipulative content in web pages AI reads
3Attacks on AI-powered tools

Why It Matters

Prompt injection is a real security risk when building AI applications—understanding it helps build safer systems.

Related Terms

System Prompt

Hidden instructions that define how an AI assistant behaves.

AI Jailbreak

Techniques to bypass AI safety restrictions and get prohibited outputs.

AI Safety Training

Techniques used to make AI helpful, harmless, and honest.

Common Questions

What does Prompt Injection mean in simple terms?

Security attack where malicious instructions are hidden in AI input.

Why is Prompt Injection important for AI users?

Prompt injection is a real security risk when building AI applications—understanding it helps build safer systems.

How does Prompt Injection relate to AI chatbots like ChatGPT?

Prompt Injection is a fundamental concept in how AI assistants like ChatGPT, Claude, and Gemini work. For example: Hidden instructions in documents Understanding this helps you use AI tools more effectively.

Related Use Cases

Best AI for Coding

Best AI for Writing

AI Models Using This Concept

ClaudeClaudeChatGPTChatGPTGeminiGemini

See Prompt Injection in Action

Council lets you compare responses from multiple AI models side-by-side. Experience different approaches to the same prompt instantly.

Browse AI Glossary

Large Language Model (LLM)Prompt EngineeringAI HallucinationContext WindowToken (AI)RAG (Retrieval-Augmented Generation)Fine-TuningTemperature (AI)Multimodal AIAI Agent