Research Synthesis from Multiple Sources

Turn five to ten sources on the same topic into a structured research brief with citations, agreements, disagreements, and a confidence read.

When to use this prompt

When you have five to ten relevant sources (articles, reports, transcripts, vendor pages) and need a structured synthesis instead of reading them all in sequence. Useful for market research, competitive intelligence, pre-meeting prep, or any situation where you need to come to a defensible point of view quickly.

The brief is not a sequence of per-source summaries. It is a synthesis across sources, organized by what they agree on, where they disagree, and what confidence level each claim deserves.

The prompt

<role>Research analyst producing a synthesis brief for a senior reader.</role>

<task>Synthesize the sources below into a brief with six labeled sections. Compare across sources rather than summarizing each one. Surface agreement, disagreement, gaps, and confidence honestly.</task>

<inputs>
<topic>[TOPIC]</topic>
<audience>[AUDIENCE, e.g. "the CEO", "the product team", "the board"]</audience>
<sources>
[PASTE SOURCES, EACH PREFIXED WITH "Source 1:", "Source 2:" etc., AND A URL OR TITLE]
</sources>
</inputs>

<instructions>
Produce six sections in this exact order:

1. **Executive summary**: 3 to 5 sentences. State a point of view, not just a description. Name the central claim explicitly.
2. **Points of agreement**: 4 to 8 bullets. Each bullet must be a claim that appears in 3 or more sources. End each bullet with a parenthetical source count, e.g. "(7 sources)".
3. **Points of disagreement**: 2 to 4 bullets. For each, name the two camps and which sources hold which view.
4. **What is missing**: 2 to 3 questions a careful reader would still want answered that none of the sources address.
5. **Confidence read**: rate the executive summary's central claim on a 1-5 scale. 1 = sources agree but evidence is thin. 5 = sources are independent, evidence is rigorous, agreement is strong. Justify the rating in one sentence.
6. **Sources**: numbered list, one sentence per source naming what it uniquely contributed.

Constraints:
- Do not invent sources or claims that are not in the inputs. If a number, statistic, or quote is unverified, mark it [VERIFY].
- Do not pad the agreement section by counting weakly related claims as agreement. A claim only counts if 3 or more sources state it directly.
- The audience is senior. Skip throat-clearing introductions and disclaimers.
</instructions>

<output_format>
**Executive summary**
[3-5 sentences with a stated point of view]

**Points of agreement**
- [Claim] (X sources)
- ...

**Points of disagreement**
- [Claim split]: Camp A (sources 1, 3) vs Camp B (sources 2, 4, 5).
- ...

**What is missing**
- [Question]
- ...

**Confidence read**
[N/5]. [One-sentence justification.]

**Sources**
1. [Title or URL] — [one sentence on what it contributed]
2. ...
</output_format>
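
If you run the prompt from code rather than pasting it into a chat window, the bracketed slots reduce to plain string templating. A minimal sketch in Python, assuming an illustrative Source container and build_synthesis_prompt helper (only the template text comes from the prompt above):

```python
from dataclasses import dataclass

# Illustrative container for one source; any (title, text) pair works.
@dataclass
class Source:
    title: str  # title or URL
    text: str   # pasted article, report, or transcript

PROMPT_TEMPLATE = """<role>Research analyst producing a synthesis brief for a senior reader.</role>

<task>Synthesize the sources below into a brief with six labeled sections. Compare across sources rather than summarizing each one. Surface agreement, disagreement, gaps, and confidence honestly.</task>

<inputs>
<topic>{topic}</topic>
<audience>{audience}</audience>
<sources>
{sources}
</sources>
</inputs>
"""  # append the <instructions> and <output_format> blocks above verbatim

def build_synthesis_prompt(topic: str, audience: str, sources: list[Source]) -> str:
    # Number sources exactly as the prompt expects: "Source 1:", "Source 2:", ...
    numbered = "\n\n".join(
        f"Source {i}: {s.title}\n{s.text}" for i, s in enumerate(sources, start=1)
    )
    return PROMPT_TEMPLATE.format(topic=topic, audience=audience, sources=numbered)
```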

How it works

The “points of agreement / points of disagreement” structure forces real comparison rather than per-source summarization. Most LLM synthesis fails by collapsing 10 sources into a generic blob that loses the texture of where they actually differ.

The [VERIFY] flag gives the model an explicit way to surface uncertainty. Without it, frontier models will smooth over unverified statistics into confident-sounding claims. With it, you get a clean signal of which numbers your team needs to double-check before quoting.

The forced 1-5 confidence rating prevents the model from defaulting to its usual hedging language. Anchoring on a number tells your reader how seriously to treat the central claim and surfaces the model’s own uncertainty.
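
Both signals are machine-checkable downstream. A minimal post-processing sketch, assuming the output format above; the function names, regexes, and 80-character context window are illustrative choices, not part of the prompt:

```python
import re

def extract_verify_flags(brief: str, context: int = 80) -> list[str]:
    """Return a snippet of surrounding text for each [VERIFY] marker, for human review."""
    snippets = []
    for m in re.finditer(re.escape("[VERIFY]"), brief):
        start = max(0, m.start() - context)
        snippets.append(brief[start:m.end()].strip())
    return snippets

def extract_confidence(brief: str) -> int | None:
    """Parse the N in 'N/5' from the confidence read, if present."""
    m = re.search(r"\b([1-5])\s*/\s*5\b", brief)
    return int(m.group(1)) if m else None
```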

The “skip throat-clearing introductions” instruction is a directive that 2026-era models such as GPT-5.5 and Claude Opus 4.7 take literally. Without it, you get an opening paragraph apologizing for the brevity of the brief.

Example output

Topic: Whether agentic AI is currently safe for autonomous customer-facing email

Executive summary: Agentic AI is not yet ready for fully autonomous customer-facing email at most companies, but it is ready for “draft and supervise” deployment in three specific contexts: tier 3 support, internal handoffs, and pure information requests. Sources broadly agree that the failure mode is not accuracy but tone and unrecoverable commitments.

Points of agreement:

- Drafting first-touch responses with human review increases throughput 30 to 50 percent (6 sources)
- Accuracy on FAQs has reached parity with junior support reps (7 sources)
- The biggest customer dissatisfaction signal is not wrong answers but tone mismatches (4 sources)

Points of disagreement:

- Whether confidence-scored auto-send is ready in 2026: enterprise sources say no (4); SMB sources say yes for low-stakes responses (3)

What is missing:

- No source measures customer churn impact of AI-handled responses over 12+ months
- No source separates results by industry vertical

Confidence read: 4/5. The sources are independent (mix of vendor research, analyst reports, and customer surveys), the agreement is strong on the core claim, but the long-term impact data does not yet exist.

Sources:

1. McKinsey 2024 AI survey: large-sample data on adoption rates
2. … (etc.)

Variations

- Adversarial version: Add a constraint requiring the model to argue the opposite of the executive summary in one paragraph after the brief. Useful for stress-testing the synthesis before sharing.
- Decision-oriented version: Replace the “what is missing” section with a “what to do” section that recommends three specific actions based on the synthesis.
- Update version: When refreshing an old brief, paste the previous brief plus new sources and ask the model to call out only what has changed (see the sketch below).
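
A minimal sketch of the update version's wrapper, continuing the templating approach above; the tag names and wording are one reasonable choice, not a tested recipe:

```python
def build_update_prompt(previous_brief: str, new_sources: str) -> str:
    # Wrap the old brief and the new material, then ask only for the delta.
    return (
        "<previous_brief>\n" + previous_brief + "\n</previous_brief>\n\n"
        "<new_sources>\n" + new_sources + "\n</new_sources>\n\n"
        "Update the brief using the same six-section structure, then list "
        "only what changed: claims added, claims dropped, and any movement "
        "in the confidence read."
    )
```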