Claude cites web content differently from ChatGPT, in patterns that meaningfully affect GEO strategy. ChatGPT (grounded primarily in Bing search) cites from a broad range of sources, including aggregators, content marketing pages, news, and authoritative publications. Claude (Anthropic's assistant, with web search capability via Brave Search and other partners) cites more conservatively: it prefers primary sources, named-author content with verifiable credentials, peer-reviewed research, government and academic publications, and substantive editorial content from established publishers. These patterns derive from Anthropic's Constitutional AI approach, which prioritizes helpfulness, honesty, and harm avoidance, and they produce citation behavior more cautious than ChatGPT's.

The implication for GEO: brands optimizing only for ChatGPT-style citation patterns leave Claude visibility unclaimed. Brands optimizing for Claude's preferred signals (named authorship, primary sources, credentialed expertise, substantive content depth) earn citation share that ChatGPT-only optimization does not produce. This research piece examines the differences and offers a practical framework for GEO programs targeting Claude citation share.
Claude and ChatGPT Data Sources Compared
Different web grounding partners produce different source distributions.
ChatGPT.
- Primary search grounding: Bing
- Search results are returned through OpenAI's wrapper, which selects, summarizes, and cites sources
- ChatGPT has access to additional partner data sources for specific verticals (Wolfram for math and computation, Shopify for product data, news partner content for current events, Apple Maps for some location queries)
- The Bing grounding source is broad: ChatGPT cites both authoritative sources and lower-authority content marketing pages depending on the query
- ChatGPT historically has been comfortable citing competitor websites, listicle aggregators, and SEO-optimized content marketing pages alongside primary sources
Claude.
- Primary search grounding: Brave Search and partner sources (Anthropic has not fully disclosed its grounding partners; the public-facing Claude.ai product currently uses Brave-powered web search supplemented with proprietary sources)
- The Brave Search index has different coverage than Bing, particularly for niche, technical, and politically sensitive topics
- Claude's wrapper applies Constitutional AI principles to source selection, weighting toward sources Anthropic's safety classifiers consider authoritative and safe
- Claude cites a narrower, more authority-skewed set of sources than ChatGPT
Perplexity.
- Primary search grounding: Bing plus Perplexity's own crawl
- More aggressive on citation transparency than either ChatGPT or Claude (every claim is typically cited inline)
- Source selection skews toward primary sources but also cites aggregators readily
Microsoft Copilot.
- Primary search grounding: Bing
- Source selection similar to ChatGPT but with stronger weighting on Microsoft entity graph data and enterprise content where applicable
Google Gemini and AI Overviews.
- Primary search grounding: Google's index
- Source selection follows Google's E-E-A-T and ranking signals, with additional verification for YMYL queries
The key implication: optimizing for one engine's citation pattern is not equivalent to optimizing for all engines. Claude's narrower, authority-skewed citation set rewards different content patterns than ChatGPT's broader set.
Claude Source Selection Patterns
Observed patterns in Claude citations across thousands of test queries:
Sources Claude cites frequently:
- Government publications (.gov domains, regulatory agency publications, official statistics, government program documentation)
- Academic and research publications (.edu domains, peer-reviewed journals, university research center publications)
- Authoritative non-profit publications (Wikipedia, major foundation publications, recognized international organizations like WHO, UN agencies, World Bank)
- Established news publishers with strong editorial reputation (NYT, Washington Post, BBC, Reuters, Associated Press, FT, WSJ, The Atlantic, The New Yorker)
- Professional and industry publications with named editorial staff and verifiable credentials
- Primary sources (court filings, SEC filings, company press releases, scientific paper preprints, government datasets)
- Specialized industry publications with substantive editorial standards (Investment News for finance, JAMA for medicine, Stat News for healthcare, Education Week for education)
- Brand and product sites with substantive content, named authorship, and primary product information
Sources Claude cites less frequently than ChatGPT:
- Aggregator and listicle sites (NerdWallet, ValuePenguin, BestColleges, Niche, Forbes Advisor, etc.) get less Claude citation than ChatGPT citation, though they do appear
- SEO-optimized content marketing pages without named authorship or primary sources rarely earn Claude citation
- Affiliate-heavy comparison pages get lower Claude citation than ChatGPT citation
- Generic "best of" listicles without named editorial staff earn minimal Claude citation
- AI-generated content without human review is filtered more aggressively by Claude
Sources Claude rarely cites:
- Anonymous content from low-authority domains
- Content with obvious factual errors that conflict with authoritative sources
- Content from sites Claude's safety classifiers flag as low-quality or potentially harmful
- Content on topics where Anthropic's safety training favors deferral to professional consultation
Disclosure caveat. Anthropic does not publicly document Claude's exact citation logic. The patterns above are observational, drawn from systematic testing across query categories and verified against Anthropic's published Constitutional AI documentation. Brands should treat these patterns as observed tendencies rather than documented algorithms.
Constitutional AI and Citation Conservatism
Anthropic's Constitutional AI approach affects citation behavior in specific ways.
Constitutional AI principles relevant to citations:
- Helpful: provide useful information that genuinely serves user needs
- Honest: avoid asserting confident claims that may be false; flag uncertainty appropriately
- Harmless: avoid generating content that could cause harm to users or third parties
How these principles affect citation:
Honesty produces conservatism. When Claude cannot verify a claim against authoritative sources, it defers, hedges, or refuses to assert. This translates to citing fewer sources but with higher confidence in the cited sources. ChatGPT's model produces confident summaries from a broader source set; Claude's model produces hedged or partial summaries when source quality is mixed.
Harm avoidance affects YMYL behavior. On health, legal, financial, and safety-critical topics, Claude consistently advises consulting professionals rather than acting on AI-generated information. When Claude does cite on these topics, it weights credentialed practitioner content (named doctors, lawyers, financial advisors with verifiable credentials) heavily over aggregator content.
Helpfulness produces depth over breadth. When Claude commits to citing, it tends toward depth: more substantive excerpts, more contextual framing, more cross-referencing of multiple authoritative sources on the same claim. The tradeoff is fewer total citations than ChatGPT typically produces.
The combined effect. Claude's citation pattern is more like a careful research editor than a confident summarizer. Sources earn citation when they meet the trio of helpfulness (substantively useful), honesty (verifiable claims), and harmlessness (not likely to cause downstream harm). Brands that publish content meeting all three criteria earn disproportionate Claude citation share; brands missing any of the three criteria earn less.
YMYL and Deferral Patterns
Claude defers to professional consultation on YMYL topics more frequently than ChatGPT.
Topics where Claude consistently defers:
- Medical diagnosis or treatment recommendations
- Legal advice on specific cases
- Financial advice on specific investment decisions
- Mental health crisis intervention
- Drug dosage and pharmaceutical questions
- Tax filing strategy on specific cases
- Immigration legal status questions
- Child-safety-related decisions
Behavior pattern. Claude provides general educational context, then explicitly recommends consulting a licensed professional for specific decisions. Citations within the educational context favor:
- Credentialed practitioner content (named MD, JD, CPA, CFP authors with verifiable credentials)
- Authoritative health publications (Mayo Clinic, Cleveland Clinic, NIH, CDC, WebMD with named medical reviewers)
- Authoritative legal publications (American Bar Association, state bar publications, named-attorney commentary)
- Authoritative financial publications (SEC, FINRA, IRS, Federal Reserve, named-CFP and CFA author content)
Implication for brands in YMYL categories. Claude citation share on YMYL topics is achievable but requires specific infrastructure: named credentialed authors with Person schema linked to verifiable third-party credential records, primary-source citations to regulatory and academic sources, transparent disclosure of editorial review processes, and explicit deferral language where appropriate (the brand's content recommending professional consultation, not asserting AI-replaceable certainty). The pattern follows what we cover in the GEO playbook for healthcare and YMYL sites and the GEO playbook for financial advisors.
What Claude Cites on Different Query Types
Claude's citation patterns vary by query type. Understanding the patterns drives optimization decisions.
Definitional queries ("What is X?", "How does X work?").
- Claude prefers Wikipedia (often as the first cited source for general definitions)
- Substantive educational content from established publishers (Britannica, encyclopedia.com, university extension publications)
- Brand or product sites for product-specific definitions when the brand has substantive content
Factual lookup queries ("What is X's price?", "How long does X take?", "When was X founded?").
- Claude prefers primary sources (the company's own site for company facts, government statistics for general facts, authoritative reference data)
- Hedges or declines to cite when authoritative data is not freely available
Comparison queries ("What's the difference between X and Y?").
- Claude tends to cite both subjects' own primary content
- Adds neutral third-party comparison from established publishers
- Less likely to cite affiliate-heavy comparison sites than ChatGPT
- More likely to flag the comparison's limitations (different criteria, different use cases) than to assert a winner
Recommendation queries ("What's the best X for Y?", "Should I do X?").
- Most conservative behavior; Claude often declines to make a single recommendation and instead provides a framework for the user to evaluate
- When citing, prefers credentialed expert content (named-author reviews, professional buying guides) over generic listicles
- Defers to professional consultation on YMYL recommendation queries
News and current events queries.
- Claude prefers established news publishers with strong editorial reputation
- Hedges on rapidly evolving stories, distinguishing between what is reported and what is verified
- Cites multiple sources when reports diverge
Technical and research queries.
- Claude prefers academic publications, official documentation, and authoritative technical references
- Cites primary research papers when available
- Distinguishes between peer-reviewed and preprint sources
Brand and product queries.
- Claude cites brand sites readily when they have substantive content and named authorship
- Less likely to cite brand sites that read as marketing copy without primary information
- More likely to cite established review publications and credentialed third-party reviewers than ChatGPT
What Content Wins Citations on Claude
Content patterns that consistently earn Claude citation share:
Substantive depth. Pillar pages and definitive guides outperform shallow listicles. Claude extracts substantive context, not just shallow facts.
Named authorship with verifiable credentials. Person schema with sameAs links to verifiable third-party records (academic credentials, professional licenses, industry certifications, recognized publication archives) is the largest single citation lever. Anonymous content rarely earns Claude citation.
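A minimal sketch of that Person markup is below, written as Python that emits JSON-LD. The author name, URLs, and credential records are hypothetical placeholders; adapt the sameAs set to whatever third-party records actually verify your authors.

```python
import json

# Minimal sketch of Person markup with sameAs links to third-party credential
# records. All names, URLs, and credential pages are hypothetical placeholders.
author_schema = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe, CFP",
    "jobTitle": "Senior Financial Planner",
    "url": "https://www.example.com/authors/jane-doe",
    "sameAs": [
        "https://www.linkedin.com/in/janedoe-example",        # professional profile
        "https://www.cfp.net/verify-a-cfp-professional",      # license or credential lookup
        "https://scholar.google.com/citations?user=EXAMPLE",  # publication record
    ],
    "worksFor": {
        "@type": "Organization",
        "name": "Example Advisors",
        "url": "https://www.example.com",
    },
}

# Emit the JSON-LD block to embed in the page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(author_schema, indent=2))
print("</script>")
```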
Primary-source citations. Pages that cite government data, academic research, regulatory publications, and primary documents earn higher citation than pages citing aggregators or competitor blogs.
Specific, dated claims. "The 2026 IRA contribution limit is $7,000 for individuals under 50, per IRS Notice 2025-XX" earns citation; "Roth IRA contribution limits depend on your income" gets ignored.
Editorial transparency. Disclosed editorial review processes, named reviewers, last-reviewed dates, and correction policies all signal editorial discipline that Claude rewards.
Substantive disclaimers where appropriate. Brands that include "consult a qualified professional for advice on your specific situation" on YMYL content earn higher Claude trust than brands that present AI-replaceable certainty. Counterintuitively, the disclaimer increases citation eligibility because it aligns with Claude's harm-avoidance principle.
Schema completeness. Article, Person, Organization, FAQPage, Product, and other relevant Schema.org markup with sameAs links to authoritative third-party records.
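To show how those pieces fit together, here is a hedged sketch of Article markup that references a named author, a named editor, a publisher Organization, and primary-source citations. All values are placeholders, and the property set should be trimmed or extended to match the actual page.

```python
import json

# Minimal sketch of Article markup tying together authorship, editorial review,
# publisher identity, and primary-source citations. All values are hypothetical.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "2026 IRA Contribution Limits Explained",
    "datePublished": "2026-01-15",
    "dateModified": "2026-03-01",  # last substantive update; a proxy for the last-reviewed date
    "author": {"@id": "https://www.example.com/authors/jane-doe#person"},
    "editor": {  # named editorial reviewer
        "@type": "Person",
        "name": "John Roe, CPA",
        "url": "https://www.example.com/reviewers/john-roe",
    },
    "publisher": {
        "@type": "Organization",
        "name": "Example Advisors",
        "url": "https://www.example.com",
        "sameAs": ["https://www.linkedin.com/company/example-advisors"],
    },
    "citation": ["https://www.irs.gov/retirement-plans"],  # primary-source reference
}

print(json.dumps(article_schema, indent=2))
```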
Quality over quantity. Brands that publish 50 substantive, deeply researched articles outperform brands that publish 500 shallow listicles in Claude citation share, even when the shallow listicles win on traditional Google rankings.
Common Mistakes Optimizing Only for ChatGPT
Five mistakes brands make optimizing for ChatGPT in ways that underperform on Claude.
1. Anonymous SEO-optimized content marketing. Pages built for keyword targeting without named authorship. Often performs on Google and ChatGPT but underperforms on Claude. Fix: add named-author bylines with Person schema and verifiable credentials.
2. Aggregator-style content without aggregator infrastructure. Brands publishing broad comparison content (best of X, top 10 Y) without licensed data, primary research, or credentialed editorial staff. Often performs on ChatGPT but rarely earns Claude citation. Fix: focus on practitioner-perspective content with named authors rather than aggregator emulation.
3. Confident assertions on YMYL topics without disclaimers. Health, legal, and financial content that asserts AI-replaceable certainty. Reduces Claude citation eligibility. Fix: include appropriate professional-consultation language; counterintuitively, disclaimers increase citation rather than decrease it.
4. AI-generated content without human review credit. Pages identifiable as AI-generated without disclosed human review. Filtered more aggressively by Claude than by ChatGPT. Fix: human editorial review with named reviewer credit on AI-assisted content.
5. Optimization tracking ChatGPT alone. Reporting that tracks ChatGPT citation share without tracking Claude separately. Misses Claude-specific optimization opportunities. Fix: separate citation tracking per major AI engine; optimize for the highest-bar engine (typically Claude on YMYL, Perplexity on technical, ChatGPT on broad consumer queries) and the optimization compounds across the others. The pattern follows what we cover in the citation analytics playbook and the unified AEO program structure.
Implementation Priorities
A prioritized work list for brands seeking Claude citation share:
Foundation:
- Named-author bylines on every editorial page with Person schema and full sameAs link set
- Primary-source citations on every substantive claim (government data, academic research, regulatory publications, primary documents)
- Schema completeness audit (Article, Person, Organization, FAQPage, Product, etc.)
- llms.txt at the site root publishing the brand authority profile (a minimal sketch follows this list)
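Below is a minimal sketch of generating that llms.txt, assuming the structure described in the public llms.txt proposal (an H1 site name, a blockquote summary, then sections of markdown links). The brand name, URLs, and sections are hypothetical; replace them with your own authority profile.

```python
from pathlib import Path
from textwrap import dedent

# Minimal sketch of an llms.txt file following the public llms.txt proposal.
# The brand name, URLs, and section contents are hypothetical placeholders.
llms_txt = dedent("""\
    # Example Advisors

    > Independent financial planning firm publishing named-author, primary-source
    > research on retirement accounts, tax planning, and investment fundamentals.

    ## Authority profile

    - [About the firm](https://www.example.com/about): credentials, editorial policy, correction policy
    - [Author directory](https://www.example.com/authors): named CFP and CPA authors with credential links

    ## Key resources

    - [IRA contribution limits guide](https://www.example.com/guides/ira-limits): updated annually against IRS notices
    - [Editorial review process](https://www.example.com/editorial-policy): how content is reviewed and dated
    """)

Path("llms.txt").write_text(llms_txt, encoding="utf-8")
```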
Content depth:
- Audit existing pillar content for depth, specificity, and primary-source backing
- Rebuild thin pages or remove them if they cannot be brought to depth standard
- Add specific dated claims, quantitative data, and named entity references throughout
YMYL discipline:
- Editorial review credits on health, legal, financial content
- Disclaimer language including professional-consultation deferral where appropriate
- Credentialed reviewer disclosure with credentials linked to verifiable third-party records
- Last-reviewed date on every YMYL page
Measurement:
- Per-engine citation tracking dashboard (ChatGPT, Claude, Perplexity, Gemini, Microsoft Copilot); see the sketch after this list
- Quarterly review of citation share by query category and engine
- Optimization tuning per engine where the patterns diverge
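A minimal sketch of the per-engine share calculation is below. It assumes you already collect answer samples per engine and per query category through your own testing workflow; the records shown are hypothetical, and no engine APIs are called here.

```python
from collections import defaultdict

# Minimal sketch of per-engine citation-share tracking over collected samples.
# Each record notes the engine, the query category, and whether the brand's
# domain appeared in the answer's citations. Records below are hypothetical.
records = [
    {"engine": "chatgpt",    "category": "comparison",   "brand_cited": True},
    {"engine": "claude",     "category": "comparison",   "brand_cited": False},
    {"engine": "claude",     "category": "definitional", "brand_cited": True},
    {"engine": "perplexity", "category": "comparison",   "brand_cited": True},
]

# Aggregate sampled queries and citations per (engine, category) pair.
totals = defaultdict(lambda: {"queries": 0, "cited": 0})
for r in records:
    key = (r["engine"], r["category"])
    totals[key]["queries"] += 1
    totals[key]["cited"] += int(r["brand_cited"])

# Report citation share per engine and query category.
for (engine, category), t in sorted(totals.items()):
    share = t["cited"] / t["queries"]
    print(f"{engine:<10} {category:<13} citation share: {share:.0%} ({t['cited']}/{t['queries']})")
```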
The disproportionate reward. Brands that build for Claude's higher trust threshold typically see compounding citation share across all major AI engines. Claude's bar is the highest; meeting it produces visibility across the others. Brands optimizing to ChatGPT's lower bar leave Claude (and often Perplexity and Gemini) visibility on the table.
Capconvert deploys per-engine GEO tracking and optimization across our 300+ client portfolio and 90,000+ delivery hours. The framework above produces measurable Claude citation share alongside broader AI surface visibility.
If your brand is appearing in ChatGPT citations but absent from Claude, the structural fix (named authorship, primary sources, depth, editorial transparency, disclaimers where appropriate) compounds with broader GEO work. Run a Capconvert audit and we will return a 90-day plan covering authorship rollout, primary-source citation discipline, content depth audit, and per-engine measurement tailored to your brand and content categories.