Intermediate

Competitor Citation Comparison

Score your brand against three competitors on the same prompt panel to identify where you are winning, losing, or invisible in AI search.

When to use this prompt

Quarterly, before content planning, or whenever a competitor seems to be gaining ground without an obvious cause. The output is a head-to-head scorecard that tells you exactly where you are losing visibility, who you are losing it to, and on what queries.

This is also the best output to bring to a content roadmap meeting. Generic “we need more content” arguments fall apart in front of executive teams. A specific scorecard showing competitor X is cited 3x more than you on five named buyer queries does not.

The prompt

<role>Analyst comparing how four brands appear in AI-generated answers.</role>

<task>For each buyer prompt, simulate the answer an AI engine would produce, then score all four brands on three binary dimensions. Compute per-brand totals and write a diagnosis paragraph at the end.</task>

<inputs>
<category>[CATEGORY]</category>
<brands>
<target>[BRAND A]</target>
<competitor_1>[BRAND B]</competitor_1>
<competitor_2>[BRAND C]</competitor_2>
<competitor_3>[BRAND D]</competitor_3>
</brands>
<prompts>
1. [PROMPT 1]
2. [PROMPT 2]
3. [PROMPT 3]
4. [PROMPT 4]
5. [PROMPT 5]
6. [PROMPT 6]
7. [PROMPT 7]
8. [PROMPT 8]
9. [PROMPT 9]
10. [PROMPT 10]
</prompts>
</inputs>

<instructions>
1. For each prompt, simulate the AI engine answer in 1-2 sentences. Identify the single top recommendation.
2. For each of the four brands, score on three binary dimensions:
   - N (NAMED): brand was named in the answer (1 or 0).
   - C (CITED): brand's domain was cited as a source (1 or 0).
   - T (TOP_PICK): brand was the top recommendation (1 or 0).
3. Score honestly. Do not bias toward the target brand.
4. After all 10 rows, compute per-brand mention rate, citation rate, and top-pick rate.
5. Write one diagnosis paragraph naming: (a) the single biggest gap between the target and the strongest competitor, (b) the two prompts where that gap is widest, (c) one specific content type the strongest competitor has that the target does not.
</instructions>

<output_format>
Markdown table with columns:
Prompt | Top recommendation | [target] (N/C/T) | [competitor_1] (N/C/T) | [competitor_2] (N/C/T) | [competitor_3] (N/C/T)

After the table, output a totals block exactly:

[target]: Mention X/10, Citation X/10, Top pick X/10
[competitor_1]: Mention X/10, Citation X/10, Top pick X/10
[competitor_2]: Mention X/10, Citation X/10, Top pick X/10
[competitor_3]: Mention X/10, Citation X/10, Top pick X/10

Then a single paragraph titled "Diagnosis" covering the three points in instruction 5. ≤120 words.
</output_format>
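
Instruction 4 asks the model to compute its own totals, and models occasionally miscount, so it is worth recomputing the rates from the table directly. A minimal Python sketch, assuming the engine returns the markdown table exactly as specified in the output format (parse_scorecard and its cell regex are illustrative names, not part of the prompt):

```python
import re

def parse_scorecard(table_md: str, brands: list[str]) -> dict[str, dict[str, int]]:
    """Recompute per-brand totals from the N/C/T scorecard table.

    Assumes one markdown row per prompt, with each brand cell holding
    binary scores in the form 1/0/0 (NAMED/CITED/TOP_PICK).
    """
    totals = {b: {"named": 0, "cited": 0, "top_pick": 0} for b in brands}
    rows = [r for r in table_md.strip().splitlines()
            if r.lstrip().startswith("|") and "---" not in r]
    for row in rows[1:]:  # rows[0] is the header row
        cells = [c.strip() for c in row.strip().strip("|").split("|")]
        # cells[0] is the prompt, cells[1] the top recommendation;
        # the remaining cells are brand scores in column order.
        for brand, cell in zip(brands, cells[2:]):
            m = re.fullmatch(r"(\d)/(\d)/(\d)", cell)
            if m is None:
                continue  # skip malformed cells rather than guessing
            n, c, t = map(int, m.groups())
            totals[brand]["named"] += n
            totals[brand]["cited"] += c
            totals[brand]["top_pick"] += t
    return totals
```

Dividing each count by the number of prompts gives the mention, citation, and top-pick rates the totals block reports.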

How it works

The four-brand grid forces relative scoring. Asked to evaluate a brand alone, models are diplomatic. When a model must score the brand against three named competitors on the same query, the gaps become specific.

XML inputs (<target>, <competitor_1>, etc.) make it trivial to swap brands between audits without re-editing the prompt body. Numbered, imperative instructions remove ambiguity, and newer frontier models (GPT-5.5, Claude Opus 4.7) follow them literally.
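
If you audit several brand sets, the swap is worth scripting. A minimal sketch, assuming you keep the rest of the prompt as a fixed template (build_inputs is an illustrative helper, not part of the prompt):

```python
def build_inputs(category: str, target: str,
                 competitors: list[str], prompts: list[str]) -> str:
    """Render the <inputs> block so each audit reuses the same prompt body."""
    brand_tags = [f"<target>{target}</target>"] + [
        f"<competitor_{i}>{name}</competitor_{i}>"
        for i, name in enumerate(competitors, start=1)
    ]
    numbered = [f"{i}. {p}" for i, p in enumerate(prompts, start=1)]
    return "\n".join([
        "<inputs>",
        f"<category>{category}</category>",
        "<brands>", *brand_tags, "</brands>",
        "<prompts>", *numbered, "</prompts>",
        "</inputs>",
    ])
```

Calling it with a category, a target, three competitors, and ten prompts reproduces the structure of the block above.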

The diagnosis paragraph is the most valuable section. It is the model’s interpretation of why the gaps exist, which usually points directly at content investments you can make. Use the exact same prompt panel each quarter. Add to the panel only when you launch a new product line or enter a new category.
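
Because the panel stays fixed, the quarterly comparison reduces to a dictionary diff. A sketch, assuming totals in the shape parse_scorecard above returns:

```python
def quarter_over_quarter(current: dict, previous: dict) -> dict:
    """Per-brand change in each count between two audits of the same panel.

    Both arguments use the shape parse_scorecard returns, e.g.
    {"Brand A": {"named": 7, "cited": 4, "top_pick": 2}, ...}.
    """
    return {
        brand: {
            metric: current[brand][metric] - previous[brand][metric]
            for metric in ("named", "cited", "top_pick")
        }
        for brand in current if brand in previous  # brands in both audits
    }
```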

Example output

| Prompt | Top recommendation | Brand A (N/C/T) | Brand B (N/C/T) | Brand C (N/C/T) | Brand D (N/C/T) |
| --- | --- | --- | --- | --- | --- |
| Best product analytics tool for mid-market SaaS | Brand B | 1/0/0 | 1/1/1 | 1/0/0 | 0/0/0 |
| Top AI-powered retention analytics tools | Brand A | 1/1/1 | 1/0/0 | 0/0/0 | 0/0/0 |

Brand A: Mention 7/10, Citation 4/10, Top pick 2/10
Brand B: Mention 9/10, Citation 7/10, Top pick 5/10

Diagnosis: Brand A trails Brand B by 30 points on citation rate and by three top picks. The gap is widest on “best for mid-market” and “compare product analytics tools 2026.” Brand B has a comparison page that names six competitors directly; Brand A does not, which is why the model defaults to Brand B’s framing on comparison queries.

Variations

  • Two-brand head-to-head: Drop to one competitor for tighter focus when you have a clear primary rival.
  • Five-brand survey: Expand to capture a broader category landscape, useful for category-defining brands.
  • Engine split: Run the same comparison across three engines and show which competitor is winning where. Different engines often reward different content investments; a loop sketch follows this list.
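
The engine-split loop is mechanically simple. A sketch, where each callable is a placeholder for whichever real API client you use (none of the names below come from a specific SDK), reusing parse_scorecard from earlier:

```python
def engine_split(panel_prompt: str, engines: dict, brands: list[str]) -> dict:
    """Run the same panel on several engines and total each scorecard.

    `engines` maps an engine name to a callable that sends the prompt
    and returns the raw markdown table -- the callables stand in for
    real API clients, which differ per engine.
    """
    return {name: parse_scorecard(send(panel_prompt), brands)
            for name, send in engines.items()}
```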