Why isn't my Smart Bidding working?
Smart Bidding underperforms for one of seven reasons in 2026:
1. Below conversion threshold (under 30 conversions in 30 days for tCPA, 50 for tROAS, 15 for Maximise variants): the most common cause.
2. Learning period turbulence: the first 7 days after any change show unstable bids.
3. Conversion data quality issues: duplicates, missing values, missing key event marking, wrong currency.
4. Broken Consent Mode V2 signals: V1-only or stripped signals defeat the modelling that fills consent-denied conversions.
5. Attribution gaps: gclid not preserved through the user journey, payment gateway returns dropping the click ID.
6. Budget-limited campaigns: Smart Bidding can't optimise when the daily budget is exhausted by 2pm.
7. Target value misalignment: a tCPA target that is unrealistic for the campaign's actual market.
The diagnosis order matters: checking conversion volume before consent signals before attribution gaps catches root causes faster than random investigation.
The three-step diagnostic workflow
When Smart Bidding underperforms, work through these in order. Each step takes 10–30 minutes and rules out a class of issues.
Step 1 — Volume and threshold check (10 min)
The fastest diagnostic. Most Smart Bidding underperformance is volume-driven.
- Google Ads → Campaigns → check the Status column. "Limited" or "Eligible (Limited)" indicates a data sufficiency problem.
- Click into the campaign → Settings → Bid strategy. If a learning phase note is showing, the algorithm is still calibrating.
- Check the conversion count for the past 30 days vs the threshold for your bid strategy: 30 for tCPA, 50 for tROAS, 15 for Maximise variants.
- Check the time-of-day impressions chart. If campaigns hit budget and stop serving by midday, you're budget-limited regardless of bid strategy.
If any of these checks reveal volume issues, fix those first. No amount of attribution tuning fixes a campaign with 12 conversions when tCPA needs 30.
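The volume check above can be sketched in a few lines. The thresholds are the ones quoted in this guide; the campaign names and conversion counts are hypothetical, standing in for your own 30-day figures:

```python
# Data thresholds per bid strategy, as quoted above.
THRESHOLDS = {"tCPA": 30, "tROAS": 50, "maximise": 15}

def volume_ok(conversions_30d: int, strategy: str) -> bool:
    """Return True if the campaign clears the data threshold for its strategy."""
    return conversions_30d >= THRESHOLDS[strategy]

# Hypothetical campaigns: (name, strategy, conversions in the past 30 days).
campaigns = [
    ("brand-search", "tCPA", 42),
    ("generic-search", "tROAS", 23),
    ("shopping", "maximise", 18),
]

for name, strategy, conv in campaigns:
    status = "OK" if volume_ok(conv, strategy) else "BELOW THRESHOLD"
    print(f"{name}: {conv} conversions / {THRESHOLDS[strategy]} needed -> {status}")
```

Anything flagged BELOW THRESHOLD goes to the Failure 1 fix paths before you touch attribution or consent.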
Step 2 — Conversion data quality check (15 min)
Once volume is sufficient, validate that the conversions Smart Bidding is learning from are clean.
- GA4 Admin → Events → check key event marking. Every conversion event you want feeding Smart Bidding must be marked as a key event (formerly "conversions" toggle).
- GA4 Admin → Product links → Google Ads. Confirm conversion import is enabled.
- Run the BigQuery duplicate transactions query (see *Duplicate Transactions in GA4*). Duplicates inflate conversion counts and break Smart Bidding's value learning.
- GA4 Monetisation → check currency consistency. Mixed currencies break tROAS optimisation.
- Check for missing values. Conversions with value: 0 or missing values count toward volume but not toward tROAS learning. They drag down the average value the algorithm targets.
The cleanup priority: fix duplicates first (they affect everything), then missing values (specifically affects tROAS and Maximise Conversion Value), then currency consistency.
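A minimal sketch of the duplicate, missing-value, and currency checks, run against a hypothetical export of purchase events. The field names mirror GA4 conventions, but the rows are invented for illustration:

```python
from collections import Counter

# Hypothetical export of purchase events (transaction_id, value, currency).
events = [
    {"transaction_id": "T1001", "value": 59.0, "currency": "GBP"},
    {"transaction_id": "T1002", "value": 0.0, "currency": "GBP"},   # missing value
    {"transaction_id": "T1001", "value": 59.0, "currency": "GBP"},  # duplicate
    {"transaction_id": "T1003", "value": 120.0, "currency": "EUR"}, # mixed currency
]

# Duplicates inflate conversion counts and break value learning.
counts = Counter(e["transaction_id"] for e in events)
duplicates = [tid for tid, n in counts.items() if n > 1]

# Zero/missing values count toward volume but drag down tROAS learning.
zero_value = [e["transaction_id"] for e in events if not e["value"]]

# Mixed currencies break tROAS optimisation.
currencies = {e["currency"] for e in events}

print("duplicates:", duplicates)
print("zero/missing value:", zero_value)
print("mixed currencies:", len(currencies) > 1)
```

The same three checks map directly onto the cleanup priority: duplicates first, missing values second, currency last.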
Step 3 — Attribution and consent check (30 min)
Once conversions are volume-sufficient and clean, verify they're being correctly attributed to ad clicks.
- Pick a recent purchase. Trace its journey. Was the user originally from a Google Ads click? Did gclid persist through to conversion? Tools: GA4 Realtime + Source/Medium reports + your CRM.
- Check the Enhanced Conversions match rate. Google Ads → Tools → Conversions → conversion action → Diagnostics. Below 70% indicates implementation issues.
- Verify Consent Mode V2. GA4 Admin → Data Streams → Configure tag settings → Consent settings. Both indicators should be green. If V1 only, modelled conversions aren't filling the consent-denied gap.
- Test the gclid pipeline. Click your own ad in incognito, complete a conversion, verify gclid persisted through the entire journey to the conversion event in GA4.
- Audit payment gateway flow. Stripe, PayPal, Klarna returns can strip gclid — typical 3-5% conversion attribution loss per gateway.
If any of these checks reveal attribution issues, those compound conversion volume losses. Fixing them unlocks both the raw conversion count and the quality of Smart Bidding's learning.
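The gclid persistence test can be approximated offline: extract the click ID from the landing page URL and compare it with whatever your conversion event actually recorded. The URL, gclid value, and event structure here are all hypothetical stand-ins for your own captured data:

```python
from urllib.parse import urlparse, parse_qs

def extract_gclid(url):
    """Pull the gclid query parameter from a landing page URL, or None."""
    params = parse_qs(urlparse(url).query)
    return params.get("gclid", [None])[0]

# Hypothetical landing URL from clicking your own ad in incognito.
landing_url = "https://example.com/?utm_source=google&gclid=Cj0abc123"

# Hypothetical conversion event, e.g. captured from the dataLayer or your CRM.
conversion_event = {"event": "purchase", "gclid": "Cj0abc123"}

clicked = extract_gclid(landing_url)
persisted = clicked is not None and conversion_event.get("gclid") == clicked
print("gclid persisted:", persisted)
```

If `persisted` comes back False after a payment gateway return, that gateway is the place to audit first.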
The seven failure modes in detail
Failure 1 — Below conversion threshold
Most common single cause. Symptoms: "Limited - data" status, conservative bidding, volume capped below budget, high learning-period CPA.
Fix paths:
- Aggregate conversion events into a single "lead" or "purchase" event
- Switch to Maximise Conversions (15 threshold) instead of tCPA (30)
- Loosen targeting to increase conversion volume
- Wait for natural growth or fall back to manual CPC as a last resort
Time to see fix impact: 14 days (7-day learning period + 7 days of stable performance).
Failure 2 — Learning period turbulence
Symptoms: unstable CPA/ROAS, volume fluctuating, bids visibly changing day-to-day. First 7 days after any significant change.
Fix path: wait. Don't make changes during learning period. Annotate the change date in dashboards.
Time to see fix impact: 7-14 days for stabilisation.
Failure 3 — Conversion data quality issues
Symptoms: GA4 revenue doesn't match source-of-truth, items reports show duplicate transaction_ids, currency mismatches in Monetisation report.
Fix path: deploy duplicate detection (server-side dedup), enforce currency on every event, fix missing items arrays. See *Duplicate Transactions in GA4* and *Item Array Integrity*.
Time to see fix impact: 7 days for Smart Bidding to learn from cleaned data.
Failure 4 — Broken Consent Mode V2 signals
Symptoms: GA4 admin shows red consent indicators, modelled conversions absent, GDPR-region performance worse than non-EU.
Fix path: implement V2 properly with all four signals, verify with DevTools. See *TC-019: Consent Mode V2 ad_user_data Parameter Missing*.
Time to see fix impact: 24-48 hours for V2 indicators to turn green; 14 days for modelled conversions to start filling the consent-denied gap meaningfully.
Failure 5 — Attribution gaps
Symptoms: paid traffic showing as Direct, gclid missing on conversion sessions, Enhanced Conversions match rate below 60%.
Fix path: audit gclid preservation through the user journey, fix payment gateway returns, ensure cross-domain tracking is configured, verify Enhanced Conversions implementation.
Time to see fix impact: 14-30 days as Smart Bidding incorporates the recovered conversions.
Failure 6 — Budget-limited campaigns
Symptoms: campaigns hitting budget early in the day, "Limited - by budget" status, Smart Bidding can't pursue high-CPA periods.
Fix path: increase the budget if the business case supports it; tighten targeting if the budget is fixed; or switch to Maximise Conversions or Maximise Conversion Value, which spend a fixed budget efficiently rather than chasing a target.
Time to see fix impact: 1-7 days.
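A quick way to quantify the budget-limited symptom is to find the hour the daily budget runs out from hourly spend data. The budget and spend figures below are illustrative; in practice they come from your hourly performance report:

```python
# Hypothetical daily budget and hourly spend for the first 10 hours of the day.
daily_budget = 100.0
hourly_spend = [3, 5, 8, 10, 12, 14, 15, 15, 10, 8]

spent, exhausted_hour = 0.0, None
for hour, cost in enumerate(hourly_spend):
    spent += cost
    if spent >= daily_budget:
        exhausted_hour = hour  # Smart Bidding sits out every auction after this
        break

print("budget exhausted at hour:", exhausted_hour)
```

An exhaustion hour well before midnight means the algorithm never sees the later auctions, regardless of how good its bids are.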
Failure 7 — Target value misalignment
Symptoms: tCPA campaign consistently exceeding target, tROAS hitting only 60-70% of target, volume well below capacity.
Fix path: investigate market reality. If you've set a £20 tCPA target but the realistic CPA is £45, no algorithm can hit your target. Either accept higher CPA, narrow targeting, or accept lower volume.
Time to see fix impact: 14 days post-target adjustment.
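The market-reality check is simple arithmetic. Here it uses the £20-target versus £45-actual example from above as hypothetical inputs:

```python
# Hypothetical 30-day figures for a tCPA campaign.
cost_30d = 1350.0        # total spend over 30 days
conversions_30d = 30     # conversions over the same window
target_cpa = 20.0        # the configured tCPA target

actual_cpa = cost_30d / conversions_30d   # what the market actually costs
gap = actual_cpa / target_cpa             # how far off the target is

print(f"actual CPA £{actual_cpa:.2f} vs target £{target_cpa:.2f} ({gap:.2f}x)")
```

A gap above roughly 1.2x is usually a target problem, not an algorithm problem: raise the target, narrow the targeting, or accept lower volume.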
What stakeholders should expect
A few hard truths to set with stakeholders before Smart Bidding work:
- Smart Bidding is not magic. It optimises against the data you give it. Garbage data → garbage optimisation.
- First 7-14 days after any change look bad. This is normal. Don't react to it.
- Some businesses don't have enough volume for tROAS. Maximise Conversions or manual bidding is genuinely better at low volumes.
- Compliance and Smart Bidding are linked. V2, Enhanced Conversions, offline import all matter. Treat them as a system, not separate projects.
- Smart Bidding can be wrong. The algorithm optimises for what you ask. If your conversion event is wrong (lead-fill vs closed-deal), the algorithm optimises for the wrong thing reliably.
The narrative for stakeholders: "We're going to fix the inputs. The algorithm will then do its job." Don't promise Smart Bidding will solve underlying data quality problems.