Everyone talks about prompt engineering like its some kind of dark art. But honestly? Most of it comes down to a few patterns that work consistently. Let me share what I've learned from building AI features in production.
The Problem with Most Prompt Guides
Most guides tell you to "be specific" and "provide context." Thats not wrong, but its not particularly helpful either. Its like telling someone to "write good code" - technically correct but pretty useless in practice.
What you actually need are patterns. Repeatable structures that work across different use cases. Stuff you can copy-paste and adapt.
How Prompts Flow Through the System
Before we dive into patterns, lets visualize how a prompt actually works:
The model reads your prompt left-to-right, building up context. Thats why the order matters - persona first sets the tone, then task gives direction, then format constrains the output.
Pattern 1: The Persona + Task + Format Trinity
This is the most reliable pattern I've found. Every good prompt has three components:
Here's a real example:
const systemPrompt = `You are a senior TypeScript developer who specializes in React and Next.js.
Your task is to review the provided code for potential bugs, performance issues, and TypeScript best practices. Focus only on critical issues that could cause runtime errors or significant performance degradation.
Format your response as a JSON array of issues:
[
{
"severity": "critical" | "warning",
"line": number,
"issue": "brief description",
"suggestion": "how to fix"
}
]
If no issues found, return an empty array.`;
The key insight here? Constraints make LLMs better, not worse. When you tell the model exactly what format you expect, it stops guessing and starts delivering consistent results.
Pattern 2: Few-Shot Examples Beat Long Explanations
Instead of writing paragraphs explaining what you want, just show the model examples. This works way better then you'd expect.
const prompt = `Convert natural language to SQL queries.
Examples:
User: "Show me all users who signed up last month"
SQL: SELECT * FROM users WHERE created_at >= DATE_SUB(NOW(), INTERVAL 1 MONTH)
User: "Count orders by status"
SQL: SELECT status, COUNT(*) as count FROM orders GROUP BY status
User: "Find products with no sales"
SQL: SELECT p.* FROM products p LEFT JOIN order_items oi ON p.id = oi.product_id WHERE oi.id IS NULL
Now convert this:
User: "${userQuery}"
SQL:`;
Three examples is usually enough. More then five and you're wasting tokens. The examples should cover different patterns - simple queries, aggregations, joins. The model picks up on the pattern real quick.
Pattern 3: Chain of Thought for Complex Tasks
When you need the model to reason through something, ask it to think step by step. But be specific about what steps you want - dont just say "think step by step" and hope for the best.
const analysisPrompt = `Analyze this error log and identify the root cause.
Think through this systematically:
1. First, identify the error type and message
2. Then, trace the stack to find the origin
3. Look for any patterns or repeated failures
4. Consider common causes for this error type
5. Propose the most likely root cause
Error log:
${errorLog}
Analysis:`;
The numbered steps force the model to be methodical. Without them, it might jump to conclusions or miss important details. I learned this the hard way after deploying a bug analyzer that kept blaming everything on "network issues."
Pattern 4: Output Anchoring
This ones kinda sneaky but super effective. Start the model's response for it:
const response = await llm.complete({
prompt: `Extract the following information from this receipt:
- Store name
- Date
- Total amount
- Items purchased
Receipt text:
${receiptText}
Extracted information:
{
"store":`,
});
// The model will continue from where you left off
const fullJson = `{"store":${response}`;
By starting the JSON for the model, you've anchored it to that format. It will almost always continue in valid JSON. No more "Here's the extracted information:" followed by random formatting.
Pattern 5: Negative Instructions Work
Tell the model what NOT to do. This is surprisingly effective at preventing common failure modes.
const prompt = `Summarize this technical document for a developer audience.
Important:
- Do NOT include marketing language or buzzwords
- Do NOT explain basic programming concepts
- Do NOT add information not present in the document
- Do NOT use phrases like "in conclusion" or "to summarize"
Keep it under 200 words.
Document:
${document}`;
LLMs have seen alot of fluffy content in training. Negative instructions help them avoid falling into those patterns. Trust me, without these constraints you'll get "In today's fast-paced digital landscape..." type nonsense.
Temperature Settings That Actually Matter
Most developers either ignore these or overthink them. Here's the simple version:
// For code review - we want consistent, focused output
const codeReviewConfig = {
temperature: 0.2,
maxTokens: 1000,
};
// For generating test cases - we want variety
const testGenConfig = {
temperature: 0.7,
maxTokens: 2000,
};
Lower temperature = more deterministic. Higher = more creative. Thats really all you need to know.
Common Mistakes I See All The Time
1. Being too vague about format
Bad: "Return the data in a structured format" Good: "Return as JSON with keys: name (string), age (number), active (boolean)"
2. Not handling edge cases in the prompt
Bad: "Extract the email from this text" Good: "Extract the email from this text. If no email found, return null. If multiple emails found, return only the first one."
3. Ignoring the system prompt
The system prompt sets the tone for the entire conversation. Use it. A good system prompt can reduce the length of every subsequent user prompt.
4. Not testing with weird inputs
Your users will send weird stuff. Test with empty strings, very long inputs, inputs in different languages, and inputs that try to override your instructions. You'd be surprised what breaks.
Wrapping Up
Prompt engineering isnt magic. Its about:
- Being specific about what you want
- Showing examples when possible
- Constraining the output format
- Testing with real (and weird) inputs
The patterns I've shared work in production. They've helped me build features that actually ship. Start with these, and youll be ahead of most developers who are still writing "please help me with..." prompts.
The best prompt is often the simplest one that reliably gets the job done. Don't overcomplicate it.
