Every week, somewhere in a Slack channel or client report, someone asks the same question: "Does Quality Score even matter anymore?" The confusion is understandable. Google keeps pushing automation, broad match, AI Max, and keyword-free campaigns. Manual levers are disappearing. And Google itself has stated plainly that Quality Score is not a key performance indicator and should not be optimized or aggregated with the rest of your data-and that it is not an input in the ad auction.
That statement trips people up. If it's not used in the auction, why care? Because here's the nuance that separates competent PPC managers from everyone else: the visible Quality Score is a diagnostic metric and isn't directly used in the auction, but Ad Rank still considers expected CTR, ad relevance, and landing page experience-the same components that make up Quality Score. The 1–10 number is a proxy. The underlying signals it reflects are very much alive in every single auction. This post breaks down what Quality Score actually is, what changed (and didn't), and exactly how to improve the three components that determine whether you pay $2 or $5 for the same click.
What Quality Score Measures-And What It Doesn't
Quality Score is calculated based on the combined performance of three components: expected clickthrough rate (the likelihood that your ad will be clicked when shown), ad relevance (how closely your ad matches the intent behind a user's search), and landing page experience (how relevant and useful your landing page is to people who click your ad).
Each component is evaluated with a status of "Above average," "Average," or "Below average," based on a comparison with other advertisers whose ads showed for the exact same search over the last 90 days. It's a relative score. You're not measured against some abstract ideal-you're measured against every competitor bidding on the same term.
Quality Score is reported as a 1–10 score for each keyword, and it influences how much you pay for each click. But it's worth remembering what it does not capture. Quality Score doesn't account for device differences, geographic location variations, or the specific context of each individual search. Those factors all influence Ad Rank at auction time, but they're invisible in the 1–10 number you see in the interface.
The Visible Score vs. Auction-Time Quality
This distinction matters more than most practitioners realize. The Quality Score you see in your account is not what is used in the auction. It's a directional signal: a summary of what you can improve to raise your auction-time quality. So don't get hung up on the exact numbers.
Auction-time Quality Score is what Google actually uses to determine whether your ad can show and in what position. It accounts for the query, query intent, ad, location, time of day, device, and more, and it is not visible in your account because it is calculated at the moment of the auction. Think of visible QS as a lagging indicator-a summary that points you in the right direction but doesn't capture the real-time calculus happening in every auction. As PPC expert Fred Vallaeys puts it, Quality Score was never meant to be a profitability metric. It wasn't designed to tell you whether a campaign is winning. Rather, it was built to protect relevance in the auction, to make sure useful ads could compete without simply bidding the highest.
Why Quality Score Still Matters in an AI-Driven Platform
Google Ads in 2026 is a fundamentally different platform than it was even three years ago. The biggest overall shift is that Google Ads is becoming more AI-assisted across Search, creative assembly, campaign matching, and ad eligibility in AI-powered search experiences. AI Max campaigns can now run keyword-free Search, where the interface asks for a landing page, a daily budget, and a CPA or ROAS target, and everything else-including the queries your ads match, the ad copy that appears, and the bidding strategy-is handled by Gemini.
Given all this automation, skeptics argue QS is a relic. Google Ads Product Liaison Ginny Marvin disagrees. In a PPC Townhall interview, she stated: "The whole goal is to serve an ad and a landing page that is highly relevant to the user...Fundamentals are still fundamental and they really haven't changed very much."
That framing is important. What has changed is the surface area. Broad match expands coverage. AI systems use landing page content as a signal. And more variables influence how relevance is evaluated. The visible QS number might look blunt. But the principles behind it-intent match, click behavior, post-click experience-are being evaluated with more granularity than ever.
Landing Page Content Is Now a Stronger Signal
One shift that practitioners report is the growing weight of landing page quality. The importance of landing page content and page experience has gone up quite a bit over the years, especially with AI Max, which uses landing page content to determine ad matching. AI Max's final URL expansion feature analyzes your website and selects the most relevant landing page for each user's search query, potentially overriding the final URL you specified in your ad.
This means your landing pages are doing double duty. They influence both Quality Score's landing page experience component and which queries Google's AI decides to match your ads against. Good paid search landing pages in 2026 often borrow from good SEO principles: strong topical clarity, clear structure, useful details, and language that aligns with real user questions.
The Three Components: How Each One Works and What to Fix
Expected CTR: The Heaviest Weight in the Formula
Expected CTR is Google's prediction of how likely your ad is to be clicked for a given keyword, based on what other advertisers, both past and present, have achieved for that same keyword. According to reverse-engineering work by the Adalysis team, ad relevance accounts for only 22% of your Quality Score, while expected CTR and landing page experience carry significantly more weight. The formula they uncovered follows the structure: 1 + Landing Page Experience weight + Ad Relevance weight + CTR weight.
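To make that structure concrete, here is a minimal sketch of the formula. The specific point values are illustrative assumptions, not published numbers: they are chosen so that ad relevance contributes 2 of 9 weighted points (the ~22% Adalysis reported) and a keyword with all three components "Above average" scores a perfect 10.

```python
# Sketch of the reverse-engineered Quality Score structure:
#   QS = 1 + LP weight + Ad Relevance weight + CTR weight
# Point values below are illustrative assumptions; Google does not
# publish the real ones. They are calibrated so ad relevance is
# 2 of 9 weighted points (~22%) and a perfect keyword scores 10.

WEIGHTS = {
    "landing_page_experience": {"below": 0.0, "average": 1.75, "above": 3.5},
    "ad_relevance":            {"below": 0.0, "average": 1.0,  "above": 2.0},
    "expected_ctr":            {"below": 0.0, "average": 1.75, "above": 3.5},
}

def quality_score(lp: str, relevance: str, ctr: str) -> float:
    """Estimate visible QS from the three component statuses."""
    return 1 + (WEIGHTS["landing_page_experience"][lp]
                + WEIGHTS["ad_relevance"][relevance]
                + WEIGHTS["expected_ctr"][ctr])

print(quality_score("above", "above", "above"))    # 10.0 - everything above average
print(quality_score("below", "below", "below"))    # 1.0  - the floor
print(quality_score("average", "below", "average"))
```

Note how the floor of 1 means a keyword never scores 0, and how fixing a single "Below average" component can move the visible number by multiple points.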
Practical ways to improve expected CTR:
- Write headlines that answer the query, not describe your company. An ad for "CRM software for small teams" should lead with "CRM Built for Teams Under 50"-not "Leading Software Solutions Provider."
- Use ad assets aggressively. Sitelinks, callouts, structured snippets, and other assets make your ad bigger and more informative on the results page, which naturally increases CTR.
- Pause low-CTR ads ruthlessly. An ad with many impressions and few clicks drags down your Quality Score. Keep 2–3 active ads per ad group, and let the losers go.
- Add negative keywords. Negatives prevent your ads from showing on irrelevant searches, so only the most relevant queries trigger your ads, improving both ad relevance and expected CTR.
Ad Relevance: The Organizational Signal
Ad relevance measures how tightly your ad copy matches the keyword's intent. When your ad is specific to the keyword, you can earn an "Above average" rating. Despite carrying less raw weight in the formula, fixing ad relevance often cascades into improvements elsewhere: raising it usually tightens your account's organization, which makes the other factors easier to improve. In some cases, working on ad relevance alone will lift the other Quality Score components.
The fix is almost always structural. If an ad group contains keywords with mixed intent-say "orthodontics" and "invisible braces" and "dentist near me"-no single ad can be relevant to all three. The component ratings are Google telling you: this keyword works well with the current ad, this one is okay, this one is bad. You simply cannot write one good ad for all three of those keywords.
The move: split those keywords into separate ad groups, each with tightly aligned ad copy. One ad group should equal one clear intent. Don't dump disparate themes together and expect a single responsive search ad to bridge the gap.
Landing Page Experience: Where the Real Money Is
Landing page experience evaluates whether your page delivers on the ad's promise-how well it resonates with the user's intent after the click. To determine the score, Google uses a combination of automated systems and human evaluation.
This is where many advertisers leave the most money on the table-and where the gap between mediocre and excellent accounts keeps widening. Speed is non-negotiable. Google's Core Web Vitals guidelines (Largest Contentful Paint under 2.5 seconds) align with performance expectations that also shape ad landing experience metrics. Load times above 4 seconds show a significant drop-off in conversions and can negatively affect Quality Score. Real-world data backs this up: in retail, a one-second delay in mobile load time can reduce conversions by up to 20 percent.
Relevance means specificity. The best way to improve your landing page experience score is to pick the most relevant page on your site for a particular ad group. If you want to show ads for the keyword "gold necklaces" you want to send people to the category page with gold necklaces, not to your homepage.
Mobile-first isn't optional. Over 60% of local searches happen on mobile. If your landing page is not optimized for mobile conversion, you are wasting the majority of your clicks.
Tools to audit landing page performance: Google PageSpeed Insights for Core Web Vitals, the Google Ads Landing Pages report for per-URL performance data, and behavior analytics tools like Hotjar for heatmaps and scroll depth.
The CPC Math: What a Higher Quality Score Actually Saves You
Understanding the financial impact of Quality Score shifts this from "nice metric" to "profit lever." Google compares your Quality Score to the average, which is 5/10. If your Quality Score is 10, you'll get a 50% discount on your CPC. At 7, you'll get a 29% discount. At 4, you'll pay 25% more for that click.
At scale, these percentages compound fast. In competitive markets, a 1–2 point increase in Quality Score can reduce CPC by 10–25%. An account spending $50,000 per month that moves its weighted average QS from 5 to 7 could save $14,500 monthly-without changing bids, budgets, or targeting. The CPC formula in the second-price auction reinforces why: CPC = (Ad Rank of the advertiser below / Quality Score) + $0.01. A higher Quality Score in the denominator means you pay less for the same position. A lower one means you're subsidizing poor relevance with cash.
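The arithmetic above is simple enough to sanity-check in a few lines. The discount table mirrors the figures in the text (QS 5 as the baseline, 7 at -29%, 10 at -50%, 4 at +25%); the specific Ad Rank value in the second-price example is a made-up illustration.

```python
# CPC discount relative to the average Quality Score of 5.
# Positive values mean cheaper clicks; -0.25 means a 25% surcharge.
DISCOUNTS = {10: 0.50, 7: 0.29, 5: 0.0, 4: -0.25}

def monthly_savings(spend: float, qs_from: int, qs_to: int) -> float:
    """Savings from moving weighted-average QS, holding click volume constant."""
    return spend * (DISCOUNTS[qs_to] - DISCOUNTS[qs_from])

print(monthly_savings(50_000, 5, 7))  # ~14500: matches the $14,500 figure

# Second-price auction CPC: pay just enough to beat the advertiser below.
def cpc(ad_rank_below: float, quality_score: float) -> float:
    return ad_rank_below / quality_score + 0.01

# With a hypothetical Ad Rank of 20 below you, the same click costs:
print(round(cpc(20, 10), 2))  # 2.01 at QS 10
print(round(cpc(20, 4), 2))   # 5.01 at QS 4
```

That last pair is the "$2 or $5 for the same click" gap from the introduction: identical auction, identical position, a 2.5x price difference driven entirely by the Quality Score denominator.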
How to Diagnose Quality Score Problems in Your Account
Don't try to optimize Quality Score as a single number. Instead, diagnose each component separately and prioritize by impact.

Step 1: Add the right columns. Navigate to "Campaigns" > "Audiences, keywords, and content" > "Search keywords," then click "Columns" > "Modify columns," and select Quality Score, Expected CTR, Ad relevance, and Landing page experience. For historical data, check the boxes with "(hist.)" next to them.

Step 2: Filter for high-spend, low-QS keywords first. Sort by cost descending and filter for Quality Score below 5. These keywords are hemorrhaging budget on inflated CPCs. This is where you'll find the highest-ROI fixes.

Step 3: Read the component ratings, not just the number. If CPC is high and impressions are low, review Ad Rank and keyword eligibility. If impressions are stable but CTR is weak, prioritize improvements in expected CTR and ad copy relevance. If CTR is strong but conversions are low, evaluate landing page experience and intent alignment.
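The triage in Step 2 is easy to script against an exported keyword report. A minimal sketch, assuming a list of rows with `keyword`, `cost`, and `qs` fields-the column names and sample data here are hypothetical, not the exact Google Ads export headers:

```python
# Hypothetical keyword report rows (field names are assumptions,
# not the exact Google Ads export column headers).
keywords = [
    {"keyword": "crm software",        "cost": 4200.0, "qs": 3},
    {"keyword": "crm for small teams", "cost": 1800.0, "qs": 8},
    {"keyword": "free crm",            "cost": 2600.0, "qs": 4},
]

# Step 2 as code: keep QS below 5, then sort by cost descending
# so the biggest budget leaks surface first.
priorities = sorted(
    (kw for kw in keywords if kw["qs"] < 5),
    key=lambda kw: kw["cost"],
    reverse=True,
)

for kw in priorities:
    print(f'{kw["keyword"]}: ${kw["cost"]:,.0f} at QS {kw["qs"]}')
```

The output orders fixes by financial impact, which is the whole point: a QS 3 keyword burning $4,200 a month deserves attention before a QS 4 keyword spending a fraction of that.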
Step 4: Track trends over time. The Google Ads interface stores historical QS data, but it's clunky. Tools like Optmyzr and Adalysis make this significantly easier. Optmyzr's Quality Score Tracker records QS daily at the keyword level and breaks out the three components weighted by impressions. That makes it easier to pinpoint where the friction actually sits.
Step 5: Accept that some low QS is normal. Keywords with low Quality Scores can still perform well, sometimes better than higher-scoring keywords. Competitor keywords will always have a low Quality Score but can still convert well, and B2B keywords frequently score lower because of niche search volumes and intentionally narrow targeting-Google often misses the nuances of B2B. If you're a B2B advertiser and your keywords are relevant, you shouldn't worry about low Quality Scores.
Quality Score in an AI Max and Broad Match World
The tension between traditional QS optimization and modern automation is real. Google's February 2026 Ads Decoded podcast explicitly positioned keywords as "thematic signals," a conceptual framework where keywords guide the algorithm rather than constrain it. When the platform decides which queries to match and which landing pages to serve, the advertiser's control surface shrinks. But that doesn't make quality irrelevant-it makes it more important as an input signal. Landing page signals-beyond copy, elements like engagement metrics-create a feedback loop that tells Google whether the promise of the ad was kept.
The risk is that weak account foundations now get amplified faster. If your conversion tracking is poor, if your landing pages are thin, if your CRM feedback loop is incomplete, or if your campaigns mix too many different business goals together, automation can push budget into the wrong places with more efficiency than before.
Here's the practitioner's playbook for QS in an automated account:
- Invest in landing page depth. Pages need to answer broader search intent, offer more topic depth, and provide stronger alignment between ads, search context, and destination content. Thin squeeze pages with a form and three bullet points won't satisfy the AI's relevance scoring.
- Monitor search term reports weekly, even in broad match and AI Max campaigns. Irrelevant query matches drag down expected CTR at the keyword level.
- Feed clean conversion data back into the system. Signal pollution occurs when low-quality, conflicting, or misleading signals contaminate the data Google's AI uses to learn. Junk leads train bidding algorithms in the wrong direction.
- Use Quality Score as a canary. If impression share is being lost primarily to rank and CPA is creeping up, low QS may be contributing to inefficient auctions. If most keywords in a theme show "Below average" ad relevance, that often signals structural misalignment. If landing page experience is consistently poor across tightly themed keywords, that's not noise. That's friction.
A Repeatable Quality Score Improvement Framework
After auditing hundreds of accounts, a consistent sequence emerges for improving Quality Score without upending a working campaign structure.

Week 1: Clean the bottom of the barrel. Pause or restructure keywords with QS 1–3 that carry meaningful spend. Add negative keywords from the search terms report. Remove ad groups where intent is visibly mixed.

Weeks 2–3: Fix landing page mismatches. For every ad group with "Below average" landing page experience, verify that the destination page matches the keyword's commercial intent. Run PageSpeed Insights and fix any page loading above 3 seconds on mobile. Ensure the headline, body content, and CTA on the landing page echo the ad copy's promise.

Weeks 3–4: Rewrite ads for relevance and CTR. Create new responsive search ad variations where at least 3 of your 15 headlines contain the exact keyword theme. Write descriptions that address the specific pain point behind the query, not generic brand messaging. Pin your strongest keyword-matching headline to position 1 only if CTR consistently suffers without it.

Ongoing: Test and trend. Monitor QS weekly, and track historical QS alongside CPC and CPA to see whether score improvements translate into cost savings. You could triple your CTR from 3% to 9% and see no change in the visible Quality Score, yet still reap the benefits, such as higher ad positions. The visible number is a guide, not a scoreboard. Watch the metrics that matter: CPA, ROAS, and impression share lost to rank.
The Real Benchmark: Performance, Not Perfection
Chasing a perfect 10 across every keyword is a waste of time. For non-branded terms, 7 is a good Quality Score. Some keywords will land lower and a few may hit 10, but in general, 7 is the number to aim for.
For brand terms, you should almost always have a 10. If your branded keywords score below 8, something is structurally wrong-usually a competitor's ad copy poaching your brand CTR, or a landing page that doesn't clearly represent the brand.
Revenue matters more than Quality Score. If a keyword is driving great conversions, even at a Quality Score of 4, it is performing well for you. The worst thing you can do is rewrite an ad that's driving profitable conversions just because a red label appeared in a dashboard. Quality Score deserves attention, but it deserves the right kind of attention. Treat it as what Google built it to be: a check-engine light that shows how healthy your ads and keywords are, not a detailed metric that should be the focus of account management. Use it to find friction. Fix the friction. Then measure success by the metrics your business actually runs on: CPA, ROAS, revenue, and margin. The score will follow the performance. It was always designed to.