SEO · Jan 15, 2026 · 12 min read

Google Discover Core Updates Explained: How the February 2026 Update Reshapes Content Strategy

Capconvert Team

Content Strategy

TL;DR

The February 2026 core update, which ran from February 5 to February 27, was scoped exclusively to Discover and did not affect Search rankings - the first time Google publicly labeled a core update as Discover-only. It rests on three pillars: geographic relevance, clickbait suppression, and topic-level expertise. With Discover now driving roughly 68 percent of Google-sourced traffic for many news publishers, treating it as an afterthought is no longer viable.

On February 5, 2026, Google did something unprecedented. The February 2026 update, which ran from February 5 to February 27, was scoped exclusively to Discover and did not affect Search rankings - the first time Google publicly labeled a core update as Discover-only. Not a tweak to search rankings. Not a spam filter. A dedicated core update targeting only Google Discover - the AI-curated feed that recommends content to over 800 million users before they type a single query.

The significance of this move is hard to overstate. An analysis of more than 400 news publishers found that Discover's share of Google-sourced traffic had nearly doubled in two years, climbing from 37 percent in 2023 to roughly 68 percent by the time this update launched, while traditional web search traffic to news publishers dropped from 51 percent to approximately 27 percent. Discover is no longer a bonus traffic channel. For many publishers, it is the channel. And Google just rewrote the rules for how it works.

If your content strategy treats Discover as a nice-to-have afterthought, the February 2026 update is your wake-up call. Some sites saw Discover impressions drop by 40 percent almost overnight, while others picked up traffic they had never seen before. Understanding what changed - and why - is now essential for anyone who depends on organic visibility from Google.

Why Google Built a Standalone Discover Core Update

Previous core updates affected Search and Discover simultaneously. The February 2026 update operates independently and focuses solely on the Discover feed algorithm, signaling Google's intention to treat Discover as a distinct product with its own quality standards and ranking criteria.

This separation reflects a structural reality about how users interact with Google content. Search responds to explicit demand - someone types a query, and Google returns results. Discover is an AI-curated feed that surfaces articles, videos, and pages based on individual user interests and browsing behavior. Users never type a query. Content is recommended to them, making it a fundamentally different channel that requires a distinct optimization approach.

That distinction matters because it changes Google's quality threshold. Discover is a recommendation system, and Google is more cautious about what it proactively pushes to users than what it returns for explicit queries. When Google pushes an article into someone's feed without being asked, it's putting its own reputation on the line. The February update reflects that higher bar.

The broader context adds urgency. Google search traffic to publishers declined globally by a third in the year to November, and referrals from Google Discover were down 21% year on year, according to Chartbeat data published in the Reuters Institute's Journalism and Technology Trends and Predictions 2026 report. Publishers are losing traffic across every Google surface. The ones who understand Discover's new scoring system will capture what remains.

The Three Pillars of the February 2026 Update

Google was uncharacteristically specific about what this update targets. The update will improve the experience by showing users more locally relevant content from websites based in their country, reducing sensational content and clickbait in Discover, and showing more in-depth, original, and timely content. Each of these three goals carries concrete implications for content strategy.

Geographic Relevance: Your Country, Your Feed

The geographic signal is the most structurally disruptive change. Google now prioritizes content from websites based in the same country as the user viewing the Discover feed. U.S. users will predominantly see content from U.S.-based publishers, while users in other countries will see content from local sources in their region.

NewzDash's tracking data illuminates what this looks like in practice. When comparing California, New York, and the US national Top 1000 articles lists, the three feeds share a large common core, but each state shows a meaningful "local layer" of content that differs from the national feed. The California feed contains California-specific stories from publishers such as SFGate, LA Times, Sacramento Bee, and SF Chronicle.

The geographic shift has a counterintuitive wrinkle for local publishers. These publishers did not lose their home audiences. They lost audiences in states they were never intentionally targeting. That distinction matters for how you respond. A regional news site in upstate New York may see total Discover traffic decline, but the loss is concentrated in feeds thousands of miles away - feeds where that content was never particularly relevant in the first place.

For international publishers, the headwinds are concrete. The country-level domesticity preference works against non-US publishers seeking US Discover traffic. The international publisher share declined in normalized scores, with notable drops for The Guardian, Reuters, and The Independent.

Clickbait Suppression: Headline Engineering Meets Its Limits

The second pillar targets sensational content - and the data shows Google is serious about enforcement. Autoevolution had 5 articles in the pre-update US Top 1000, all following a near-identical "dramatic reveal" formula. Post-update, it had 0 articles in the US Top 1000. This is one of the cleanest examples of Discover potentially identifying and suppressing repetitive sensational templates.

Publishers who understood Discover's older engagement-weighted algorithm optimized aggressively for headline engineering without corresponding investment in content depth. This worked because engagement metrics told Discover's algorithm that users wanted this content. The February 2026 update introduces friction into that feedback loop. Google is now attempting to evaluate whether the content attached to a headline - and the publisher behind it - has demonstrated expertise on the topic being covered. A headline engineered to trigger curiosity no longer operates in isolation from the content quality signals associated with the domain publishing it.

This doesn't mean strong headlines no longer matter. Clear, accurate, specific headlines that tell a reader exactly what they will learn from an article remain effective. The line between compelling and manipulative is whether the content behind the headline delivers on the promise. A title like "Complete Guide to React Server Components (2026)" outperforms "The React Feature That Changes Everything" under the new scoring model.

Topic-Level Expertise: The Deepest Structural Change

The third pillar is where this update gets consequential. Google said it would prioritize content from sites with demonstrated expertise in a given area. This means Google is making an active assessment of what topics a site genuinely covers with depth, and using that assessment to determine eligibility for Discover placement. A site doesn't get recommended for a topic just by publishing something about it.

This is a shift from domain-level to topic-level evaluation. Google's shift toward evaluating expertise on a topic-by-topic basis rather than at the domain level changes how publishers should think about content strategy. Under a domain-level authority model, a site's overall credibility is the primary signal, and a high-authority domain can publish on almost any topic and expect reasonable visibility. That era is ending. The practical consequence? Sites that publish a wide range of topics but with limited depth in any given area are now competing against publishers who have concentrated depth in those areas. Topic breadth without topical depth is no longer a competitive advantage in Discover.

What the Data Actually Shows: Winners, Losers, and Contradictions

The rollout lasted 22 days - about 8 days longer than Google's original estimate of up to two weeks. Now that it's complete, two independent tracking tools - NewzDash and DiscoverSnoop - have published analyses. They agree on several patterns and diverge on others, which is itself instructive.

Publisher consolidation is real. According to analysis from NewzDash's DiscoverPulse tracking tool, the number of unique domains appearing in the US Top 1000 Discover placements dropped from 172 to 158 in the post-update window - an 8.1% decline. Fewer publishers are getting top slots, but publishers with strong regional relevance and clear topic focus may have benefited. Discover covered more topics in the post-update window, but fewer sites were appearing in top placements.

Major losers fit a pattern. Yahoo lost nearly half its article placements and saw its audience score drop 62%. Fox News, Fox Business, and Fox Weather all saw visibility drop more than 40%. Forbes lost 21% of its article placements and 67% of its audience score.

YouTube gained ground. YouTube placements grew 15% in the post-update window, from 16,283 to 18,803. Discover surfaces video alongside articles, so a publisher that produces both written and video content on its core topics has two distribution pathways into the feed rather than one.

The trackers disagree on some sites, which highlights an important methodological point. If you're benchmarking against third-party reports, check which measurement window each vendor used. A tool measuring mid-rollout may show different results than one measuring post-completion. This is worth remembering before drawing conclusions from any single data source.

How to Diagnose Your Own Discover Traffic Changes

Before adjusting your strategy, you need an accurate diagnosis. A drop in Discover traffic does not necessarily mean your organic Search rankings changed - this update was scoped to Discover alone.

Publishers should confirm which channel - Discover or Search - is responsible for any traffic change before drawing conclusions about their overall SEO health. Google Search Console separates Discover performance from Search performance in its reports.

Start with geographic analysis. If you have access to state-level or country-level data for your Discover traffic, compare pre- and post-update distributions. The drop may be heavily concentrated in specific markets. That pattern tells you whether you're facing a geographic recalibration or a content quality evaluation.
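To make that comparison concrete, here is a minimal sketch. The state-level click counts are hypothetical placeholders, not real Search Console exports - substitute your own pre- and post-update data:

```python
# Hypothetical pre- and post-update Discover clicks by US state -
# replace with an export from your analytics tool.
pre = {"NY": 12000, "CA": 9500, "TX": 8200, "FL": 7800}
post = {"NY": 11400, "CA": 4100, "TX": 3900, "FL": 3600}

def regional_shift(pre, post):
    """Percent change per region, sorted with the steepest decline first."""
    changes = {
        region: round((post.get(region, 0) - clicks) / clicks * 100, 1)
        for region, clicks in pre.items()
        if clicks
    }
    return sorted(changes.items(), key=lambda kv: kv[1])

for region, pct in regional_shift(pre, post):
    print(f"{region}: {pct:+.1f}%")
```

In this invented dataset the home state dips only a few percent while distant states crater - the signature of a geographic recalibration rather than a sitewide quality demotion.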

Then evaluate headline-to-content alignment. Review your highest-Discover-traffic articles and ask whether the headline accurately represents the depth of the content. If your top-performing Discover pages have sensational titles with thin substance underneath, the update likely targeted your content for the right reasons.

Timing matters for analysis. During a core update rollout, Google's systems are actively re-weighting signals. Rankings can swing day to day. Historical data shows that sites that rewrite large sections of content, delete pages en masse, and make aggressive structural changes mid-rollout often struggle to recover. As a best practice, wait at least 14 days after the rollout completes before making major SEO decisions.

One of the most useful diagnostic moves in Search Console is often overlooked. Sort the Discover tab by impressions rather than clicks. These are pieces of content that Discover found to be a good fit for the audience, even if they didn't always get the best click-through rates. It might be worthwhile to focus on creating more content like those.
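If you pull Discover data programmatically, a short script can surface those high-impression, low-click pages. The rows below are hypothetical; the key names mirror the shape of Search Console API rows, but the pages and numbers are made up:

```python
# Hypothetical Search Console rows for the Discover surface.
rows = [
    {"keys": ["/react-server-components-guide"], "clicks": 310, "impressions": 54000},
    {"keys": ["/cloud-security-checklist"], "clicks": 1200, "impressions": 21000},
    {"keys": ["/incident-response-playbook"], "clicks": 95, "impressions": 48000},
]

def high_fit_low_click(rows, max_ctr=0.01):
    """Pages Discover surfaced heavily but users rarely clicked:
    rank by impressions, keep only pages under a CTR threshold."""
    ranked = sorted(rows, key=lambda r: r["impressions"], reverse=True)
    return [
        r["keys"][0]
        for r in ranked
        if r["clicks"] / r["impressions"] < max_ctr
    ]

print(high_fit_low_click(rows))
```

The pages this returns are the ones Discover judged a good audience fit despite weak click-through - candidates for more content on the same topics.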

Building a Discover-Resilient Content Strategy

The February update isn't a one-time disruption. It's a signal about where Google is headed. The update signals that Discover deserves its own operational strategy rather than being treated as an accidental bonus on top of classic SEO. It also reinforces a larger pattern: Google is becoming more selective about topic-level expertise, more sensitive to original value, and less tolerant of sensational packaging or scaled sameness.

Here's what a Discover-resilient content operation looks like in practice.

Concentrate Topical Depth Over Topical Breadth

Focus on depth over breadth - five excellent articles in your niche are worth more than twenty thin posts across unrelated topics.

Sites that cover dozens of unrelated topics dilute their authority signals and are less likely to be recommended for any individual topic.

Build content clusters that reinforce each other. Internal linking between related articles strengthens topic clusters that Discover now evaluates at the site level. A site that publishes ten interconnected articles on cloud security - covering compliance, architecture patterns, vendor comparisons, and incident response - sends a stronger topic-authority signal than a site with one cloud security piece among hundreds of unrelated posts.
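One way to keep a cluster tight is a quick internal-link audit. The sketch below is illustrative - the page paths and link map are hypothetical - and flags cluster pages that no sibling links to:

```python
# Hypothetical internal-link map: each page maps to the pages it links to.
links = {
    "/cloud-security/compliance": ["/cloud-security/architecture", "/cloud-security/vendors"],
    "/cloud-security/architecture": ["/cloud-security/compliance"],
    "/cloud-security/vendors": [],
    "/cloud-security/incident-response": [],
}

def orphaned_in_cluster(links, prefix):
    """Cluster pages that receive no internal links from any sibling."""
    cluster = {page for page in links if page.startswith(prefix)}
    linked_to = {target for page in cluster for target in links[page] if target in cluster}
    return sorted(cluster - linked_to)

print(orphaned_in_cluster(links, "/cloud-security/"))
```

Orphaned pages weaken the interconnected-cluster signal; linking them from their siblings is usually a cheap fix.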

Maintain Publishing Cadence

Google made changes to how it evaluates content velocity, meaning the frequency and recency of your publishing cadence now plays a bigger role in whether Discover picks up your pages. If your last blog post went live three months ago, Discover is less likely to surface even your best-performing older content.

The target isn't volume - it's consistency. Websites that publish at least 2-3 high-quality articles per week typically achieve more stable Discover visibility than those with sporadic publishing schedules.

Publishing 10 articles one week and then nothing for a month sends mixed signals to Discover. Consistent cadence builds the publisher trust signal that keeps your site in the recommendation pipeline.

Invest in Visual Quality

Discover is a visually driven feed. Pages without compelling imagery are effectively invisible. Use original, high-resolution images at minimum 1200 pixels wide. Enable the max-image-preview:large robots meta tag to allow Google to display large image previews.

This is not optional. Raptive's analysis across their network found that a single meta tag correction on one site drove 400% growth in Discover traffic. The max-image-preview:large meta tag is the single highest-ROI technical fix available. Check yours today.
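One quick way to verify the tag is present is to parse the page head with Python's standard-library HTMLParser. This is a minimal sketch; the sample markup is hypothetical:

```python
from html.parser import HTMLParser

class RobotsMetaCheck(HTMLParser):
    """Flags whether a page's robots meta tag allows large image previews."""

    def __init__(self):
        super().__init__()
        self.large_preview = False

    def handle_starttag(self, tag, attrs):
        attr = dict(attrs)
        if (tag == "meta"
                and attr.get("name", "").lower() == "robots"
                and "max-image-preview:large" in attr.get("content", "").lower()):
            self.large_preview = True

page = '<head><meta name="robots" content="index, max-image-preview:large"></head>'
checker = RobotsMetaCheck()
checker.feed(page)
print(checker.large_preview)  # True when the tag is present
```

Run the same check against your live templates - a missing or mistyped directive caps every image preview Discover can show for your pages.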

Strengthen E-E-A-T Signals at the Author and Entity Level

E-E-A-T signals have become even more decisive for Discover placement since the February 2026 update. Google evaluates not just the individual article but the entire website and its reputation.

Practical steps matter here: Include detailed author bios with credentials, link to author profiles on professional platforms, and demonstrate first-hand experience in the content. Beyond author pages, consider whether your site has a clear entity presence. Does your organization have a Wikidata record? Is your schema structured to declare relationships between your company, your products, and the topics you publish around? Do your content clusters link back to a primary page that anchors your authority in a specific subject area?
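As a starting point, an Organization JSON-LD block can declare those relationships explicitly. Everything in this sketch is a placeholder - the company name, URLs, and Wikidata ID are invented, and knowsAbout is a schema.org property some publishers use to state topical focus:

```python
import json

# Every value here is a placeholder - swap in your own organization's
# name, URL, Wikidata ID, and topic list.
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Publishing Co",
    "url": "https://www.example.com",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q0000000",  # your Wikidata record
        "https://www.linkedin.com/company/example",
    ],
    "knowsAbout": ["cloud security", "compliance", "incident response"],
}

# Embed the output in a <script type="application/ld+json"> tag on key pages.
print(json.dumps(org_schema, indent=2))
```

The sameAs links are what tie your site to an external entity record, anchoring the topic-level expertise signals this update rewards.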

Discover, AI Overviews, and the Multi-Surface Future

The February update doesn't exist in isolation. The same entity signals that determine Discover eligibility are the ones that drive citeability in AI Overviews, and the same structured relationships that make your content machine-readable for Google are what external LLMs like ChatGPT and Perplexity use to retrieve and attribute information accurately.

Google is moving toward a model where content surfaces across multiple touchpoints - traditional search results, Discover feeds, AI Overviews, and AI Mode. Each of these surfaces evaluates content slightly differently, but they all reward the same core qualities: topical authority, freshness, engagement, and production quality.

This convergence is the real strategic insight. A content program optimized for Discover - original research, strong visual presentation, identifiable expertise, consistent publishing rhythm - is also one that performs well across AI-powered surfaces. The February 2026 update's emphasis on expertise, original reporting, and quality aligns with broader trends in AI-powered information retrieval. As Google increasingly surfaces AI Overviews and language models cite sources, the same quality signals that improve Discover visibility also enhance citation likelihood in AI-generated responses.

What to Watch Next

Google hasn't said whether Discover will continue to get its own core updates. The February update is still limited to English-language users in the United States, with expansion to more countries and languages planned but not scheduled.

Additionally, a March 2026 core update began on March 27 - this one targeting traditional Search rankings - which means publishers are navigating two overlapping algorithm shifts simultaneously. Publishers should expect continuous minor algorithm refinements throughout the year, major announced updates potentially 1-3 times annually, seasonal fluctuations in Discover traffic based on changing user interests, and personalization adjustments as Google refines individual user preferences.

The February 2026 Discover update marks a turning point not because it changed a few ranking signals, but because it formalized what practitioners have been sensing for years: Discover is now a primary content distribution channel, and Google intends to hold it to a quality standard that matches its scale. The harder work is developing genuine points of view on the topics you cover, coining the concepts that define those conversations, and producing content that competitors can't replicate by publishing more. When you become the source that other sources reference for a specific idea, you've built something that no algorithm update can easily displace.

That's not an algorithm hack. It's the foundation of durable visibility - across Discover, Search, AI Overviews, and whatever surface comes next.

Ready to optimize for the AI era?

Get a free AEO audit and discover how your brand shows up in AI-powered search.

Get Your Free Audit