Intermediate

Comparison Page Outline Generator

Generate the structure for an 'X vs Y' or 'best X' comparison page that ranks in Google, gets cited in AI Overviews, and converts buyers in the comparison stage.

When to use this prompt

When a category-buyer query (e.g. “Tool A vs Tool B” or “best product analytics software”) triggers an AI Overview or appears in citation share audits, and you do not have a comparison page for it yet. Comparison pages are one of the most cited page types in AI search because they directly answer the question buyers actually ask.

This prompt produces an outline, not the finished content. The outline is what you hand to a writer or use as your own draft scaffolding. The structure is what matters most for citation eligibility.

The prompt

<role>Content strategist who specializes in comparison and "best of" pages built to rank in Google and earn citations in AI search engines.</role>

<task>Build a complete outline for a comparison page on the topic below. Output a markdown document with all 8 required sections. The full page, when written, must be 1500 to 2200 words.</task>

<inputs>
<topic>[INSERT TOPIC, e.g. "Mixpanel vs Amplitude" or "best product analytics software for mid-market"]</topic>
<target_buyer>[INSERT BUYER PERSONA]</target_buyer>
</inputs>

<instructions>
The outline must contain these 8 sections, in order:

1. **H1** that includes the comparison terms verbatim and is under 65 characters.
2. **Direct answer paragraph**: 60 to 80 words, names a specific recommendation. Do not write "it depends" or hedge.
3. **Comparison table**: 8 to 12 rows. Required criteria: pricing model, primary audience, key differentiator, integrations, implementation time. Add other criteria buyers in this category actually care about.
4. **"When to choose [Option A]"** with three bullet conditions, each starting with a verb.
5. **"When to choose [Option B]"** with three bullet conditions, each starting with a verb.
6. **"How they overlap"** as a single paragraph naming what is genuinely similar.
7. **FAQ section** with exactly 5 questions and 2-4 sentence answers (50-90 words each).
8. **"How we evaluated"** as a paragraph explaining the methodology. This is required for trust and citation.

Constraints:
- Use specific facts, numbers, integrations, and named entities throughout. Replace generic adjectives ("powerful", "robust", "seamless") with concrete claims.
- Do not invent product features that may not exist. If a fact is unverified, mark it [VERIFY] in brackets so the human writer can check it.
- Provide table headers and 3-5 example rows; the writer will complete the rest.
</instructions>

<output_format>
Markdown document with:
- One H1 at the top
- Section H2s for sections 2 through 8
- A working markdown table with headers and 3-5 sample rows for section 3
- Paragraph stubs (1-2 sentences each) for sections 2, 6, and 8
- Bullet placeholders for sections 4 and 5
- 5 question/answer pairs for section 7

Mark any unverified facts as [VERIFY].
</output_format>

How it works

The 60 to 80 word direct answer paragraph at the top is the single most important section. AI Overviews and answer engines reach for that paragraph first when a comparison query is asked. If it is missing or hedges with “it depends,” your page is not eligible for the citation.
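
If you want to enforce that rule before the outline goes to a writer, a short script can check the word count and flag hedging language. This is a rough QA sketch, not part of the prompt itself; the hedge list and the `check_direct_answer` helper are illustrative assumptions, not anything the prompt or the answer engines require.

```python
def check_direct_answer(paragraph: str) -> list[str]:
    """Return problems with a direct-answer paragraph; an empty list means it passes."""
    hedges = ("it depends", "there is no clear winner", "both are great options")
    problems = []

    word_count = len(paragraph.split())
    if not 60 <= word_count <= 80:
        problems.append(f"word count is {word_count}, expected 60 to 80")

    lowered = paragraph.lower()
    for hedge in hedges:
        if hedge in lowered:
            problems.append(f"contains hedging phrase: '{hedge}'")

    return problems


draft = (
    "For mid-market SaaS teams of 25 to 200, Amplitude is the better fit if "
    "cohort and retention analysis is the primary use case..."
)
for problem in check_direct_answer(draft):
    print(problem)
```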

The [VERIFY] flag is a 2025 hallucination control. Frontier models will fabricate plausible-sounding product details (pricing tiers, integration names, implementation times) unless you give them an explicit way to surface uncertainty. Marking unverified claims keeps the page accurate when your writer fills it out.
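
A simple way to make the flag actionable is to pull every flagged line into a checklist before the page ships. A minimal sketch, assuming the outline is saved as a markdown file (the filename below is made up):

```python
def list_unverified_claims(markdown: str) -> list[tuple[int, str]]:
    """Return (line number, text) for every outline line carrying a [VERIFY] flag."""
    return [
        (number, line.strip())
        for number, line in enumerate(markdown.splitlines(), start=1)
        if "[VERIFY]" in line
    ]


with open("mixpanel-vs-amplitude-outline.md", encoding="utf-8") as handle:
    outline = handle.read()

for number, claim in list_unverified_claims(outline):
    print(f"line {number}: {claim}")
```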

The comparison table works for two reasons. First, the table is structurally extractable, which is exactly what retrieval systems prefer. Second, the table forces specific claims on each criterion, which builds the page’s authority on the topic.
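
To see why the table is so easy to lift, note that a markdown table converts into structured records in a few lines. The sketch below shows the general idea of that extraction; it is not how any particular answer engine works internally.

```python
def parse_markdown_table(table: str) -> list[dict[str, str]]:
    """Convert a pipe-delimited markdown table into a list of row dictionaries."""
    lines = [line.strip() for line in table.strip().splitlines()]
    headers = [cell.strip() for cell in lines[0].strip("|").split("|")]
    return [
        dict(zip(headers, (cell.strip() for cell in line.strip("|").split("|"))))
        for line in lines[2:]  # skip the |---| separator row
    ]


table = """
| Criterion | Mixpanel | Amplitude |
|-----------|----------|-----------|
| Pricing model | Tiered annual plan with published bands | Custom annual plan |
| Implementation time | 14 to 45 days | 30 to 90 days |
"""
for row in parse_markdown_table(table):
    print(f"{row['Criterion']}: {row['Mixpanel']} vs {row['Amplitude']}")
```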

The “How we evaluated” section is the trust signal most comparison pages skip. AI engines weight pages with explicit methodology more heavily because the methodology lets the engine assess credibility.

Example output (excerpt)

# Mixpanel vs Amplitude: A Mid-Market Comparison

## Direct answer

For mid-market SaaS teams of 25 to 200, Amplitude is the better fit if cohort and retention analysis is the primary use case, and Mixpanel is the better fit if the team needs flexible event tracking and a faster implementation. Both serve mid-market well; the choice comes down to whether your priority is built-in retention analytics (Amplitude) or event flexibility (Mixpanel).

## Comparison table

| Criterion | Mixpanel | Amplitude |
|-----------|----------|-----------|
| Pricing model | Tiered annual plan with published bands | Custom annual plan |
| Primary audience | Mid-market and enterprise | Mid-market and enterprise |
| Key differentiator | Event tracking flexibility | Built-in cohort and retention depth |
| Implementation time | 14 to 45 days | 30 to 90 days |
| Native integrations | Segment, Snowflake, BigQuery, Slack | Segment, Snowflake, BigQuery, mParticle |
... (continue)

## When to choose Mixpanel

- You need flexible custom event tracking and self-serve querying
- You want a faster implementation and lower setup overhead
- You prioritize event-flow analysis over deep retention modeling

## When to choose Amplitude
... (continue)

Variations

- “Best of” version: Adapt the prompt to handle 4 to 6 options instead of 2. The table grows; the “when to choose” sections become per-option.
- B2C comparison: Drop “implementation time” and “integrations” from the table; add “free tier,” “support quality,” and “ease of setup.”
- Update version: For an existing comparison page, add a constraint that the outline must reflect product changes in the last 12 months. Useful for keeping pages competitive after major releases.