When to use this prompt
Before publishing a pillar page, service page, or any page you expect to compete for high-intent queries. Also useful as a quarterly audit on pages that used to rank but have decayed.
E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) is the framework Google’s quality raters use to evaluate content, and its four dimensions track the signals retrieval systems weigh when deciding which pages deserve to be cited. This prompt scores all four explicitly so you can see exactly where a page is strong and where it needs work.
The prompt
<role>SEO auditor specializing in E-E-A-T signal evaluation for pages competing in AI search and Google.</role>
<task>Score the page below on the four E-E-A-T dimensions. For each dimension, rate 1 to 5, name the specific signals present, and name the single highest-leverage signal still missing.</task>
<inputs>
<page_url>[URL]</page_url>
<page_topic>[TOPIC]</page_topic>
<author_named>[YES/NO + name if yes]</author_named>
<page_content>
[PASTE FULL PAGE TEXT, INCLUDING AUTHOR BIO, DATES, AND ANY VISIBLE CITATIONS OR LINKS]
</page_content>
</inputs>
<dimensions>
- Experience: first-hand experience the author has with the subject. Signals include "I tested," "we ran," personal anecdotes, screenshots, original data, named clients, named projects.
- Expertise: subject-matter knowledge. Signals include credentials, named work history, named certifications, technical depth in the writing, accurate use of category vocabulary.
- Authoritativeness: third-party validation that the author or brand is recognized in the field. Signals include external citations to/from named sources, named publications, conference speaking, named clients, schema author entity with sameAs links to LinkedIn/Wikipedia/etc.
- Trustworthiness: signals that the page is honest and complete. Signals include explicit "last updated" dates, sources for every quantitative claim, contact information visible from the page, transparent methodology where relevant, no broken or misleading links.
</dimensions>
<instructions>
1. For each of the four dimensions, score 1 to 5 (1 = absent, 5 = best-in-class).
2. For each dimension, list the specific signals present on the page, naming exact phrases or elements where possible.
3. For each dimension, name the single highest-leverage signal still missing. Be specific (e.g., "no explicit 'last updated' date in the visible content" not "could improve trust").
4. Do not invent signals that are not in the page content. If a dimension is bare, score it 1 and say so.
5. End with a single recommendation: the one change that would most increase the page's eligibility for citation.
</instructions>
<output_format>
| Dimension | Score | Signals present | Highest-leverage missing signal |
|-----------|-------|-----------------|--------------------------------|
| Experience | N/5 | ... | ... |
| Expertise | N/5 | ... | ... |
| Authoritativeness | N/5 | ... | ... |
| Trustworthiness | N/5 | ... | ... |
**Top recommendation:** [the one change with the most impact, named specifically]
</output_format>
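If you run the audit through an API rather than a chat window, the bracketed inputs can be filled programmatically. A minimal sketch in Python; the function and variable names are my own, and only the `<inputs>` block is assembled here:

```python
import textwrap


def build_audit_inputs(page_url: str, page_topic: str,
                       author_named: str, page_content: str) -> str:
    """Fill the <inputs> block of the audit prompt template above.

    Tag names mirror the template exactly; this wrapper function is
    illustrative and not part of the prompt itself.
    """
    return textwrap.dedent(f"""\
        <inputs>
        <page_url>{page_url}</page_url>
        <page_topic>{page_topic}</page_topic>
        <author_named>{author_named}</author_named>
        <page_content>
        {page_content}
        </page_content>
        </inputs>""")
```

Concatenate the result with the static `<role>`, `<task>`, `<dimensions>`, `<instructions>`, and `<output_format>` sections before sending.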
How it works
The four E-E-A-T dimensions map directly to how Google’s quality rater guidelines describe content evaluation, and to the patterns retrieval systems use to decide which pages are citable. Scoring all four separately surfaces specific gaps: a page can be strong on Expertise but weak on Trustworthiness, and the fix for each is different.
The “name exact phrases or elements” instruction matters because frontier models follow it literally. Without it, you get vague summaries (“the author seems credible”) rather than specific findings (“author bio links to LinkedIn but not to a personal website with a resume”).
The “do not invent signals” guardrail is critical because models will otherwise pad scores to look helpful. The explicit prohibition keeps the audit honest.
Example output
| Dimension | Score | Signals present | Highest-leverage missing signal |
|---|---|---|---|
| Experience | 2/5 | Generic “we have years of experience” claim in bio | No first-person screenshots, no original data, no named client examples |
| Expertise | 4/5 | Author credentials listed; correct technical vocabulary; accurate citation of methodology | No links from external authoritative sources to author |
| Authoritativeness | 3/5 | Author bio with LinkedIn link; one external citation in body | No Person schema with sameAs links to professional profiles |
| Trustworthiness | 3/5 | Visible publish date; contact email in footer | No “last updated” date despite content being 2 years old |
Top recommendation: Add a “Last updated: [date]” line below the title and refresh the dataset cited in the second section. Trust is the cheapest dimension to fix, and the page is currently leaking citation eligibility on staleness alone.
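The Person schema flagged as missing in the Authoritativeness row is a small JSON-LD block. A sketch in Python that emits it; every name and URL below is a placeholder, not a real profile:

```python
import json

# Minimal schema.org Person entity with sameAs links, as flagged in the
# Authoritativeness row above. All names and URLs are placeholders.
author_schema = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",                       # placeholder author name
    "jobTitle": "Head of SEO",                # placeholder credential
    "url": "https://example.com/about/jane",  # on-site author bio page
    "sameAs": [                               # third-party profiles that anchor the entity
        "https://www.linkedin.com/in/janedoe",
        "https://en.wikipedia.org/wiki/Jane_Doe",
    ],
}

# Emit as a JSON-LD block ready to drop into the page's <head>.
jsonld = json.dumps(author_schema, indent=2)
print(f'<script type="application/ld+json">\n{jsonld}\n</script>')
```

The `sameAs` array is what ties the on-page author to profiles Google already trusts, which is the actual authority signal.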
Variations
- Multi-page audit: Run on five pillar pages in sequence and compare scores to identify which page is the weak link in your topic cluster.
- Pre-publish gate: Run on every draft before publish. Refuse to ship below a 12/20 combined score.
- Competitor benchmark: Run on your page and a competitor’s page on the same topic. The dimension where you score lowest relative to them is your priority.
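The pre-publish gate variation is easy to automate once you have the four scores out of the audit table. A sketch, assuming you have already parsed the model’s response into a dict (the 12/20 threshold comes from the variation above):

```python
def passes_publish_gate(scores: dict, threshold: int = 12) -> bool:
    """Combined-score gate from the pre-publish variation.

    `scores` maps each E-E-A-T dimension to its 1-5 rating, e.g.
    parsed out of the markdown table the audit prompt returns.
    """
    expected = {"Experience", "Expertise", "Authoritativeness", "Trustworthiness"}
    missing = expected - scores.keys()
    if missing:
        raise ValueError(f"audit incomplete, missing: {sorted(missing)}")
    return sum(scores.values()) >= threshold


# The example audit above totals 2 + 4 + 3 + 3 = 12, so it just clears the gate.
example = {"Experience": 2, "Expertise": 4, "Authoritativeness": 3, "Trustworthiness": 3}
```

Raising the threshold per dimension (e.g. require every score ≥ 3) is stricter than a combined total, since one strong dimension can no longer mask a bare one.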