GEO PROGRAM · PLATFORM
Part of the GEO Program · 2 of 5 platforms

Get cited by Claude.

Claude is the enterprise-favored AI assistant — 18.9 million monthly active users, 300,000+ business customers, and roughly 29% of the enterprise AI market. Anthropic's research focus on factual accuracy and primary-source citations makes Claude one of the most reference-conservative LLMs in production. The Capconvert GEO Program for Claude is built around earning citations from a model that holds a high bar for what it's willing to cite.

01 · OVERVIEW

How Claude decides who gets cited.

Claude is Anthropic's flagship assistant — distinguished in production by a research-driven emphasis on factual accuracy, source attribution, and refusing to fabricate when it doesn't know. For GEO, this matters: Claude is more conservative about what it will cite, and more likely to attribute clearly when it does. The bar to be cited by Claude is higher; the trust signal of being cited is correspondingly stronger.

Claude's web search uses Brave Search as its retrieval backbone — a deliberate Anthropic decision tied to Brave's independent index and privacy guarantees. This is the single most distinctive optimization fact about Claude: ranking on Brave Search is what makes a page eligible to be cited by Claude, a retrieval path that doesn't map cleanly onto any other LLM.

CAPCONVERT FRAMING
Three questions determine your Claude visibility: does Claude's training data already know your category and your role in it? Are you indexed and ranking on Brave Search for your priority queries? And does your content meet Claude's high bar for primary-source attribution and factual integrity?
02 · ARCHITECTURE

The systems behind Claude's answers.

Claude.ai is a stack of model, retrieval, and tool layers — each playing a different role in whether a domain gets cited.

Claude models (Opus / Sonnet / Haiku). Anthropic's frontier model family, with Opus, Sonnet, and Haiku as quality/speed tiers. Closed-book recall is rooted here — the model only "knows" what it learned during training. Anthropic publishes more about training-data curation than most labs and has been increasingly explicit about the role of high-trust sources in training.

Web Search (Brave-powered). When Claude needs current information, it queries Brave Search and grounds the answer in Brave's results. The reranker prefers primary sources, official documentation, and authoritative publishers — the same biases visible in Anthropic's published research on factuality.

Tool use & agentic actions. Claude can invoke tools — including a fetch tool that retrieves a specific URL and reads its content. This is the surface where Claude-User agents live. Sites that block Claude-User break every agentic workflow involving live page reads.

Claude Code & developer surfaces. A separate, fast-growing Claude product running at $2.5B in annualized revenue on its own. For technical content — documentation, API references, developer tooling — being cited by Claude Code is its own GEO outcome, distinct from claude.ai citations.

Constitutional AI. Anthropic's training methodology biases the model toward acknowledging uncertainty and citing primary sources. The downstream behavior is observable: Claude is more likely to refuse than fabricate, and more likely to attribute than synthesize without attribution.

03 · CITATION SIGNALS

What earns a Claude citation.

Claude doesn't publish ranking factors, but observed citation behavior reveals a consistent pattern: high-trust primary sources, current information, and well-structured content. Optimizing for Claude is closer to optimizing for an analyst than for a search engine.

#1 SIGNAL
Primary-source authority
Official docs, gov/edu sources, original research, manufacturer pages.
#2 SIGNAL
Brave Search ranking
Web search runs through Brave's index. Ranking on Brave is the eligibility gate.
#3 SIGNAL
Citation density on your site
Your own content citing primary sources signals trustworthiness back to Claude.
#4 SIGNAL
Recency
Time-stamped content. Claude is conservative about citing undated pages.
#5 SIGNAL
Bot accessibility
ClaudeBot allowed in robots.txt; Claude-User reachable for live fetches.
#6 SIGNAL
Content clarity & structure
Direct answers, clear hierarchy, schema where appropriate.
04 · PRIMARY-SOURCE AUTHORITY

Why Claude prefers the source.

Claude consistently cites the most authoritative primary source for a claim, even when secondary sources rank higher in standard search. Original research, official documentation, government and academic publications, and manufacturer-direct product pages get cited disproportionately to their backlink-graph weight. This is the single biggest behavioral difference between Claude and Bing-derived AI assistants.

Implication for brands. The version of your content that's most likely to be cited is the version on your own primary domain — not the version syndicated to a publisher, not the version paraphrased on a roundup site. For Claude GEO, owned-content investment compounds harder than off-site citation acquisition does.

Implication for content. Claude rewards content that reads like primary research — original data, firsthand experience, technical depth — over content that summarizes or rewords what already exists. If you can be the canonical source for a category-defining claim, Claude will cite you over the ten secondary sources that summarize you.

Implication for citation density. Pages that cite primary sources clearly and inline are themselves rated as more credible by Claude's reranker. The model has learned to use citation patterns as a proxy for trustworthiness — both inbound and outbound.

PRIMARY-SOURCE GAP
We benchmark your content against the primary-source standard Claude actually applies: does this page have original data, firsthand expertise, or canonical authority for what it claims? Most content fails the test — and that's exactly the work to do.
05 · CONTENT PATTERNS

What Claude rewards in content.

Claude's content preferences are closer to research-paper conventions than to blog-post conventions. The pattern is consistent across queries: clear claims, explicit attribution, and calibrated confidence.

Explicit attribution. Pages that cite their sources inline — with named authors, publication dates, and direct quotes where appropriate — are cited more often than pages making the same claims without attribution. Claude's training has biased it strongly toward this pattern.

Calibrated language. "Roughly," "generally," "in most cases" — language that signals appropriate uncertainty — is cited more readily than over-confident absolutes. Counterintuitive for marketing copy, but reliably observed.

Structure that mirrors Anthropic's research format. Clear thesis, supporting evidence, acknowledged limitations, conclusion. Claude appears to recognize this structure and weight it heavily — likely because Anthropic's training corpus is unusually weighted toward this kind of writing.

Up-front summaries. Pages that begin with a 2-3 sentence factual summary of what they cover — analogous to a paper abstract — extract cleanly into Claude's answers. The same content with the summary buried gets quoted less.

WRITE FOR THE ABSTRACT
We restructure priority pages to begin with a research-style summary: one sentence on what the page covers, one on the headline finding, one on the qualifications. Claude's reranker treats that opening as the page's load-bearing claim — and uses it directly.
06 · TECHNICAL & CRAWL

Letting Claude actually read your site.

Claude has multiple user-agent identities, each with its own role in the GEO pipeline. The technical layer matters less than it does for ChatGPT (Brave handles the search-side crawling), but the live-fetch pipeline is where most silent failures happen.

ClaudeBot (anthropic-ai). Anthropic's training-data crawler. Allowing it in robots.txt feeds the next training cycle. Like GPTBot, it's commonly blocked by default in Cloudflare's bot-management rules — and the block is silent.

Claude-User. The live user-agent for in-conversation fetches and tool use. Blocking Claude-User breaks every "summarize this page" or agent-driven retrieval workflow involving your URLs. Increasingly important as agentic use cases scale.

Claude-SearchBot. A more recent crawler used for Anthropic's emerging search-and-summary surfaces. Treat it the same as the others — explicit allow in robots.txt, no aggressive WAF blocks.
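Putting the three agents together, a minimal robots.txt baseline looks like the sketch below. The user-agent tokens are Anthropic's documented ones; the blanket `Allow: /` is illustrative — scope it to your own paths as needed:

```
User-agent: ClaudeBot
Allow: /

User-agent: Claude-User
Allow: /

User-agent: Claude-SearchBot
Allow: /
```

Remember that robots.txt is only half the story: WAF and bot-management rules can still block these agents silently, so verify with live fetch tests, not just the file.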

Brave Search hygiene. Because Claude's web search uses Brave, every Brave-side technical concern compounds into Claude visibility: index coverage on Brave, no over-aggressive bot blocking of Brave's crawler, valid sitemap. Brave SEO and Claude GEO are tightly coupled.

llms.txt + schema. Same baseline as the other GEO platforms — llms.txt deployed, JSON-LD schema for Article / FAQ / Organization / Product / BreadcrumbList. Claude's reranker uses schema as a structural signal, not just a rich-results gateway.
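As a structural-signal baseline, a minimal JSON-LD Article block looks like the sketch below. All values are placeholders — swap in your real headline, dates, author, and organization:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline for the page",
  "datePublished": "2025-01-15",
  "dateModified": "2025-06-01",
  "author": { "@type": "Person", "name": "Jane Author" },
  "publisher": { "@type": "Organization", "name": "Example Co" }
}
```

Note the explicit `datePublished` / `dateModified` pair — machine-readable dates address the recency signal directly, since Claude is conservative about citing undated pages.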

07 · OUR APPROACH

How we get you cited by Claude.

Claude engagements run alongside the GEO baseline, with two distinguishing layers: Brave Search visibility (the retrieval substrate) and primary-source content depth (Claude's distinctive citation bias).

Visibility audit. We run priority queries through Claude (across web-search, closed-book, and tool-use modes) and log what's cited. The output is a citation map showing where your category sits in Claude's response space.
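The querying and logging side of this audit is product-specific, but the aggregation step can be sketched simply. Assuming you have already logged each run as a (query, cited domains) pair — the record shape and domain names below are hypothetical — a citation map might be built like this:

```python
from collections import Counter, defaultdict

def citation_map(records):
    """Aggregate logged Claude runs into a per-domain citation map.

    records: iterable of (query, cited_domains) pairs, where
    cited_domains lists the domains Claude cited for that query.
    Returns {domain: {"citations": count, "queries": sorted queries}}.
    """
    counts = Counter()
    queries = defaultdict(set)
    for query, domains in records:
        for domain in domains:
            counts[domain] += 1          # total citation count per domain
            queries[domain].add(query)   # which queries surfaced it
    return {
        d: {"citations": counts[d], "queries": sorted(queries[d])}
        for d in counts
    }

# Illustrative logged runs (placeholder queries and domains)
runs = [
    ("best crm for smb", ["example.com", "competitor.io"]),
    ("crm pricing comparison", ["competitor.io"]),
]
cmap = citation_map(runs)
```

The resulting map shows, per domain, how often it was cited and on which priority queries — i.e. where your category sits in Claude's response space.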

Brave Search optimization. Index coverage, ranking, technical accessibility on Brave — the work mirrors our SEO Brave engagement and feeds Claude's web-search retrieval directly.

Primary-source content program. We invest in content that earns canonical-source status — original research, firsthand technical reporting, manufacturer-grade product detail. The work is slower than secondary-content production and compounds harder.

Bot & technical access. ClaudeBot, Claude-User, Claude-SearchBot confirmed reachable. llms.txt deployed. Schema audited. WAF / bot-management rules reviewed for silent blocks.

Authority & attribution program. Editorial mentions in primary-source-class publications — analyst reports, academic citations where applicable, government registries, manufacturer directories — sequenced to compound across training cycles.

18.9M+
Claude monthly active users
29%
Enterprise AI market share
300+
Brands optimized for AI channels
10y+
Earning cited authority for clients
Want to see whether Claude cites you?