When to use this prompt
- When a prompt your team uses regularly produces inconsistent output.
- When a prompt was written for an older model (GPT-4-era) and never updated.
- When you are about to put a prompt into production inside an agent or automation and need it to be reliable.
This is meta-prompting: a prompt that improves prompts. The output is an upgraded version of your original prompt, refactored to current best practices, plus a brief explanation of what changed.
The prompt
<role>Senior prompt engineer specializing in prompts for frontier 2026 models (Claude Opus 4.7, GPT-5.5, Gemini 3.1 Pro). You apply current best practices: XML structure, literal instructions, explicit output contracts, hallucination guards, and the elimination of outdated patterns.</role>
<task>Rewrite the user's prompt below to current best practices. Output the upgraded prompt, a list of specific changes, and a one-paragraph rationale.</task>
<inputs>
<original_prompt>
[PASTE THE PROMPT YOU WANT IMPROVED, EXACTLY AS IT IS USED TODAY]
</original_prompt>
<typical_failures>[OPTIONAL: describe what goes wrong with the current prompt's output, e.g., "model adds disclaimers", "output format is inconsistent", "model invents details"]</typical_failures>
<target_models>[OPTIONAL: which models will run this prompt; defaults to "any 2026 frontier model" if not specified]</target_models>
</inputs>
<best_practices>
Apply these patterns:
1. **XML tag structure**: wrap inputs, instructions, and output format in tags. Tags survive copy-paste and prevent the model from blurring inputs into instructions.
2. **Imperative, literal instructions**: numbered steps, action verbs, no soft hedging ("try to", "if possible"). Frontier models follow instructions literally.
3. **Explicit output contract**: name the exact format, length caps, column names, and what "done" looks like.
4. **Hallucination guards**: provide explicit fallbacks for unverified or missing information ("If a fact is unverified, mark [VERIFY]" or "If the input lacks X, output 'Not addressed'").
5. **Negative instructions**: state what NOT to do where ambiguity is likely (e.g., "Do not add disclaimers", "Do not invent sources").
6. **Audience-anchored output**: state the audience and let that constrain length, vocabulary, and structure.
Remove these outdated patterns:
- "Take a deep breath" / "think step by step" — redundant on reasoning models, can degrade output.
- "You are an expert in..." persona padding without a concrete role.
- Soft phrasing like "please", "try to", "if possible".
- Manual JSON delimiters in long context (use XML tags instead).
- Prefilling the assistant turn (deprecated in Claude 4.6+).
</best_practices>
<instructions>
1. Read the original prompt and any typical_failures notes.
2. Diagnose 3 to 5 specific weaknesses in the original. Be concrete (e.g., "instructions use 'try to' four times", not "instructions are vague").
3. Rewrite the prompt applying the best_practices. Preserve the original intent. Do not change what the prompt is asking for; change how it asks.
4. The rewritten prompt should be roughly the same length as the original, or shorter. Tightness is part of the upgrade.
5. After the rewrite, list exactly which best-practice patterns you applied and which outdated patterns you removed.
6. End with a one-paragraph rationale (≤80 words) describing the single change most likely to improve output reliability.
</instructions>
<output_format>
## Diagnosis
1. [Specific weakness in original]
2. [Specific weakness]
3. [Specific weakness]
## Rewritten prompt
[the upgraded prompt, ready to copy and use]
## Patterns applied
- [Pattern from best_practices]: [where it was applied]
- ...
## Patterns removed
- [Outdated pattern]: [where it was removed]
- ...
## Rationale
[≤80 words on the change most likely to improve reliability]
</output_format>
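The bracketed placeholders in <inputs> are ordinary substitution points, so the same template works from a script as well as from a chat window. Below is a minimal sketch in Python; the template is abbreviated, the file path is illustrative, and call_model() is a hypothetical wrapper around whichever client you actually use.

```python
# Minimal sketch: fill the meta-prompt template and hand it to a model client.
# `call_model` is hypothetical -- swap in whatever SDK or HTTP client you use.

META_PROMPT_TEMPLATE = """\
<role>Senior prompt engineer specializing in prompts for frontier 2026 models ...</role>
<task>Rewrite the user's prompt below to current best practices. ...</task>
<inputs>
<original_prompt>
{original_prompt}
</original_prompt>
<typical_failures>{typical_failures}</typical_failures>
<target_models>{target_models}</target_models>
</inputs>
...best_practices, instructions, and output_format exactly as written above...
"""

def build_meta_prompt(original_prompt: str,
                      typical_failures: str = "none noted",
                      target_models: str = "any 2026 frontier model") -> str:
    """Return the filled meta-prompt; optional fields fall back to the template defaults."""
    return META_PROMPT_TEMPLATE.format(
        original_prompt=original_prompt.strip(),
        typical_failures=typical_failures,
        target_models=target_models,
    )

if __name__ == "__main__":
    # Illustrative path: the prompt you want improved, stored as a plain-text file.
    original = open("prompts/weekly_report.txt").read()
    prompt = build_meta_prompt(original, typical_failures="output format is inconsistent across runs")
    # response = call_model(prompt)  # hypothetical wrapper; not defined here
    print(prompt)
```

Because the inputs land inside XML tags rather than being concatenated free-form, the same guard against inputs blurring into instructions holds when the prompt is assembled programmatically.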
How it works
Meta-prompting is the practice of using a model to improve another prompt. It works because frontier models in 2026 are heavily trained on prompt engineering literature and can identify weaknesses faster than most humans.
The <best_practices> block is the curriculum the model is being asked to apply. Including it in the prompt rather than relying on the model’s training to remember 2026 patterns makes the output far more consistent. Models will sometimes apply 2024-era patterns by default; explicitly listing the current ones overrides that.
The “preserve the original intent” instruction is the safety mechanism. Without it, models will sometimes “improve” a prompt by changing what it asks for: the user wants a brand visibility audit and gets back a prompt that does keyword research. The constraint keeps the rewrite faithful.
The diagnosis step before the rewrite is the chain-of-reasoning forcing function. Asking the model to identify weaknesses first, then rewrite, produces a much better rewrite than asking it to “improve this prompt” directly.
Example output
Diagnosis
- Original uses “try to” three times, which 2026 models interpret as optional.
- No output format specified; outputs vary in structure across runs.
- Inputs are not delimited from instructions; model occasionally interprets the inputs as additional instructions.
Rewritten prompt
[XML-structured version with explicit output contract]
Patterns applied
- XML tag structure: wrapped inputs and output format
- Imperative instructions: replaced “try to identify” with “identify”
- Output contract: added a markdown table specification with named columns
Patterns removed
- Soft hedging: removed three “please” and “try to” instances
- Persona padding: removed “You are an expert” without a concrete role
Rationale
Adding the explicit output format contract is the single change most likely to improve reliability. Outputs that previously varied in structure across runs will now conform to the table schema, which makes the prompt safe to use inside an automation rather than only in interactive sessions.
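That rationale can be enforced rather than hoped for: before an automation consumes the rewrite, check that the response contains every section named in the output contract. A minimal sketch, assuming the ## headings from <output_format> appear verbatim in the response:

```python
# Minimal contract check: accept a response only if every required section is present.
REQUIRED_SECTIONS = (
    "## Diagnosis",
    "## Rewritten prompt",
    "## Patterns applied",
    "## Patterns removed",
    "## Rationale",
)

def meets_output_contract(response: str) -> bool:
    return all(section in response for section in REQUIRED_SECTIONS)

def extract_rewritten_prompt(response: str) -> str:
    # Everything between "## Rewritten prompt" and the next "## " heading.
    if not meets_output_contract(response):
        raise ValueError("response does not follow the output contract")
    body = response.split("## Rewritten prompt", 1)[1]
    return body.split("\n## ", 1)[0].strip()
```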
Variations
- Specific-failure mode: Instead of generic improvements, paste the original prompt plus a specific failed output. Ask the model to diagnose what in the prompt produced that specific failure and fix only that.
- Multi-prompt audit: Paste 5-10 prompts your team uses and ask for a triage table: which ones need rewriting, which are fine, which should be merged.
- Version-control mode: Output both the rewritten prompt and a git-style diff so the change history is reviewable.
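For the version-control variation, the diff does not have to come from the model at all: keep prompts as plain-text files, generate the diff locally, and review it like any other change. A minimal sketch using Python's standard difflib (file names are illustrative):

```python
import difflib
from pathlib import Path

# Illustrative paths: the prompt as used today vs. the model's rewritten version.
old_path = Path("prompts/weekly_report.txt")
new_path = Path("prompts/weekly_report.rewritten.txt")

old = old_path.read_text().splitlines(keepends=True)
new = new_path.read_text().splitlines(keepends=True)

diff = difflib.unified_diff(old, new, fromfile=str(old_path), tofile=str(new_path))
print("".join(diff))
```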