Prompt Engineering
The practice of crafting and refining the inputs you give an AI model to get more accurate, useful, or consistent outputs.
Prompt engineering is the skill of writing instructions, context, and examples that reliably produce the output you want from a language model. The same model can produce dramatically different results depending on how you phrase your request.
Core techniques:
1. Be specific: "Write a 3-sentence product description for noise-cancelling headphones targeting remote workers" beats "Describe headphones".
2. Provide examples: Few-shot prompting (showing the model 2–3 examples of input/output pairs) dramatically improves consistency for structured tasks.
3. Assign a role: "You are an experienced tax accountant. Explain..." steers the model toward relevant expertise, vocabulary, and tone.
4. Use a system prompt: Set persistent context and constraints at the start of a conversation so every later turn inherits them.
5. Chain of thought: Ask the model to "think step by step" before answering complex questions; working through intermediate reasoning improves accuracy on multi-step problems.
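Several of these techniques combine naturally in a single request. A minimal sketch in Python, assuming a role-based chat message format (common across current chat APIs); the `call_model` client you would send these messages to is a placeholder, not a real library call:

```python
def build_messages(system_prompt, examples, user_input):
    """Assemble a role-based message list: a persistent system prompt,
    few-shot input/output example pairs, then the real request."""
    messages = [{"role": "system", "content": system_prompt}]
    for example_in, example_out in examples:
        # Each few-shot pair is a simulated user turn plus the ideal reply.
        messages.append({"role": "user", "content": example_in})
        messages.append({"role": "assistant", "content": example_out})
    messages.append({"role": "user", "content": user_input})
    return messages

messages = build_messages(
    system_prompt=(
        "You are an experienced tax accountant. "          # role assignment
        "Answer in plain language for non-experts. "       # persistent constraint
        "Think step by step before giving a conclusion."   # chain of thought
    ),
    examples=[  # few-shot pairs showing the expected shape of an answer
        ("Is a home-office chair deductible?", "Possibly. Step 1: ..."),
        ("Can I deduct commuting costs?", "Usually not. Step 1: ..."),
    ],
    user_input="Is my laptop deductible if I use it 60% for work?",
)
# messages: 1 system turn, 2 example pairs, then the real question
# response = call_model(messages)  # hypothetical client call
```

The structure matters more than the wording: the system prompt sets persistent behavior, the example pairs anchor the output format, and the final user turn is the only part that changes per request.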
Why it matters for builders: Better prompts mean fewer API calls, lower costs, and more reliable outputs. A well-engineered prompt can often substitute for fine-tuning on smaller tasks.
Example
Weak prompt: "Summarise this article."
Strong prompt: "Summarise this article in 3 bullet points for a non-technical reader. Focus on business implications, not technical details. Each bullet must be under 20 words."
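In practice you would template the strong prompt so every article gets the same constraints. A small sketch (the function name and template layout are illustrative, not a library API):

```python
def build_summary_prompt(article_text: str) -> str:
    """Wrap an article in the 'strong prompt' above: audience, focus,
    format, and length constraints are all stated explicitly."""
    return (
        "Summarise this article in 3 bullet points for a non-technical reader. "
        "Focus on business implications, not technical details. "
        "Each bullet must be under 20 words.\n\n"
        f"Article:\n{article_text}"
    )

prompt = build_summary_prompt("Acme Corp shipped a new caching layer...")
```

Keeping the constraints in one template means every summary request is consistent, and you can tighten the wording in one place as you refine the prompt.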
Related terms
Prompt
The text instruction you send to an AI model asking it to do something.
System Prompt
A special instruction given to an AI model that defines its behavior and personality for all subsequent user interactions.
Few-Shot Learning
Teaching a model to perform a task by providing a small number of examples (usually 2–10) in the prompt.
Chain-of-Thought Prompting
A prompting technique that asks an AI to explain its reasoning step-by-step before giving a final answer.
Fine-Tuning
The process of updating a pre-trained model with task-specific or domain-specific data to improve performance.