When to use this prompt
When the full citation share audit is overkill and you just want to know: for this one query, does my brand show up? Useful before a sales call to a known buyer, after a content launch, or when a competitor seems to be gaining traction in a category.
This is a manual check, not a scaled audit. The point is speed and clarity for one specific prompt at a time.
The prompt
<task>Answer the buyer's question below as you would if no one was watching, then return a structured diagnosis of whether the target brand surfaced and why.</task>
<inputs>
<question>[BUYER QUESTION VERBATIM]</question>
<target_brand>[BRAND NAME]</target_brand>
</inputs>
<instructions>
1. Answer the question naturally in 3 to 6 sentences. Recommend specific brands. Do not hedge.
2. Then return three labelled sections:
- "Brands named" — every brand or product you mentioned, in order.
- "Would recommend [target_brand]" — Yes or No, plus one sentence on why.
- "Why [target_brand] surfaced or did not" — the candid reason, naming the specific content type or signal that was missing if it did not surface.
3. Do not adjust your answer to be more favorable to the target brand. Score what you would actually say.
</instructions>
<output_format>
Answer:
[3-6 sentence answer]
Brands named: [comma-separated, in order]
Would recommend [target_brand]: [Yes / No]. [One sentence justification.]
Why [target_brand] surfaced or did not:
[2-3 sentences naming the specific content type, signal, or third-party mention that was present or missing.]
</output_format>
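If you run this check more than once, a small helper that fills the two placeholders keeps the template identical between runs. A minimal sketch in Python; PROMPT_TEMPLATE and build_visibility_prompt are illustrative names, not part of any tool, and the sketch assumes you paste the full template above into the string with {question} and {brand} in place of the bracketed placeholders.

```python
# Minimal sketch: fill the template's placeholders programmatically.
# PROMPT_TEMPLATE and build_visibility_prompt are illustrative names.
PROMPT_TEMPLATE = """<task>Answer the buyer's question below as you would \
if no one was watching, then return a structured diagnosis of whether the \
target brand surfaced and why.</task>
<inputs>
<question>{question}</question>
<target_brand>{brand}</target_brand>
</inputs>
...
"""  # paste the rest of the template verbatim, swapping {brand} for [BRAND NAME]


def build_visibility_prompt(question: str, brand: str) -> str:
    """Fill the template with the buyer's question verbatim and the target brand."""
    return PROMPT_TEMPLATE.format(question=question, brand=brand)


prompt = build_visibility_prompt(
    "What is the best product analytics tool for a 30-person SaaS team?",
    "Amplitude",
)
```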
How it works
XML tags separate the question from the instructions so the model does not blur them. The “as you would if no one was watching” framing pushes the model out of its polite, comprehensive default and into a real recommendation.
The three-section diagnosis is the most valuable output. The “why” section tells you what is actually missing from your content footprint. That is the input to your roadmap.
Run the same prompt across two or three engines. Different engines give different reasons for the same omission, and those reasons usually point to different fixes.
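One way to do that without copy-pasting into three browser tabs is a short script against the providers' APIs. A sketch, assuming the official openai and anthropic Python SDKs, API keys in the environment, and the build_visibility_prompt helper from the sketch above; the model names are placeholders and go stale quickly.

```python
# Sketch: send the identical prompt to two engines and compare diagnoses.
# Assumes `pip install openai anthropic` and OPENAI_API_KEY /
# ANTHROPIC_API_KEY set in the environment. Model names are placeholders.
import anthropic
from openai import OpenAI

prompt = build_visibility_prompt(
    "What is the best product analytics tool for a 30-person SaaS team?",
    "Amplitude",
)

gpt = OpenAI().chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
claude = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)

for engine, answer in [
    ("ChatGPT", gpt.choices[0].message.content),
    ("Claude", claude.content[0].text),
]:
    print(f"--- {engine} ---\n{answer}\n")
```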
Example output
Question: What is the best product analytics tool for a 30-person SaaS team?
Target brand: Amplitude
Answer:
For a SaaS team that size, I would look at Mixpanel, Amplitude, and Heap first. Mixpanel has the most mature event tracking features. Amplitude has stronger built-in cohort and retention analysis. Heap is the easiest to implement because it auto-captures events.
Brands named: Mixpanel, Amplitude, Heap
Would recommend Amplitude: Yes. Amplitude has specific positioning around cohort and retention analytics and well-documented recent product updates.
Why Amplitude surfaced or did not:
Amplitude surfaced because its analytical capabilities are clearly described in its public content and it shows up consistently in third-party comparisons.
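If you log these one-off checks over time, the labelled sections are easy to pull out with a few regular expressions, assuming the model followed the output format. A minimal sketch; parse_diagnosis is an illustrative name, not part of any library.

```python
import re


def parse_diagnosis(text: str, brand: str) -> dict:
    """Extract the three labelled sections from a response that follows
    the output format above. Missing labels yield empty values."""
    brands = re.search(r"Brands named:\s*(.+)", text)
    verdict = re.search(rf"Would recommend {re.escape(brand)}:\s*(Yes|No)", text)
    why = re.search(
        rf"Why {re.escape(brand)} surfaced or did not:\s*(.+)", text, re.DOTALL
    )
    return {
        "brands_named": [b.strip() for b in brands.group(1).split(",")] if brands else [],
        "would_recommend": verdict.group(1) if verdict else None,
        "why": why.group(1).strip() if why else "",
    }
```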
Variations
- Negative version: Ask “What product analytics tools should this buyer NOT consider, and why?” Useful for catching reputation issues you may not be aware of.
- Persona version: Specify the buyer’s seniority and budget. Senior buyers with bigger budgets get different recommendations than junior buyers.
- Engine comparison: Run the same prompt across ChatGPT, Claude, and Perplexity and compare. Disagreement between engines usually means your positioning is unclear, not that one engine is wrong.