How do I use GA4 Cohort Exploration for retention analysis?
GA4 Cohort Exploration groups users by their acquisition date (first session) and tracks their return behaviour over subsequent weeks or months. Build a cohort exploration in Explore → Cohort Exploration. Set cohort granularity (daily, weekly, monthly), cohort size (number of cohorts to show), and the return criterion (which event constitutes a "return" — default is any event, but setting it to a key action like purchase or session_start produces more meaningful retention curves).
The single most important setting: cohort return criterion. "Any event" measures whether users came back at all; a specific event like purchase measures whether users came back and converted — a fundamentally different (and more useful) metric for most businesses. GA4 cohorts are limited to 12 periods. For longer-term retention analysis, BigQuery SQL is required.
Cohort exploration settings
Cohort inclusion criterion
What qualifies a user to be in the cohort. Default: first touch (any event in the property). You can restrict this to a specific event — users who completed registration, users who made a first purchase — to build cohorts of users with meaningful intent rather than all visitors.
Practical example: An e-commerce site should build a cohort based on purchase (first purchase date) not first session. This answers "of users who first purchased in week X, how many purchased again in subsequent weeks?" — a meaningful retention/repeat purchase question. Cohorts based on first session include one-time-visit bounced users, which dilutes retention curves dramatically.
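To make the dilution concrete, here is a toy calculation with invented numbers (all figures are assumptions for illustration, not benchmarks):

```python
# Invented numbers for one acquisition week, return criterion = purchase.
total_first_sessions = 10_000   # everyone whose first session was in week X
first_purchasers = 800          # subset who also purchased in week X
repeat_purchasers = 200         # of those, bought again in a later week

# Cohort = all first sessions: bounced one-time visitors drag the rate down.
site_wide_retention = repeat_purchasers / total_first_sessions   # 2.0%

# Cohort = first purchasers: the repeat-purchase question described above.
purchaser_retention = repeat_purchasers / first_purchasers       # 25.0%

print(f"{site_wide_retention:.1%} vs {purchaser_retention:.1%}")
```

Same 200 repeat purchasers, but the curve based on all visitors reads 2% while the purchaser cohort reads 25%; only the second number says anything about repeat-purchase behaviour.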
Cohort return criterion
What constitutes "returning" for the cohort metric. Options:
- Any event — user triggered any event (returned to site at all)
- Specific event — user triggered a specific event you choose (e.g., purchase, session_start, sign_in)
For SaaS products: set return criterion to session_start to measure active user retention. For e-commerce: set return criterion to purchase to measure repeat purchase retention. For content sites: set return criterion to page_view to measure return readership.
Cohort granularity
- Day — useful for high-frequency apps (gaming, social, news). Cohort 0 = day of acquisition, cohort 1 = next day, etc.
- Week — the most useful for most businesses. Cohort 0 = week of acquisition, cohort 1 = following week.
- Month — useful for SaaS subscription products where monthly retention is the key business metric.
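If you pull raw event data out of GA4 (for example via the BigQuery export), the three granularities correspond to truncating each user's first-session date to a day, week, or month bucket. A minimal pandas sketch with made-up dates:

```python
import pandas as pd

# Hypothetical first-session dates for four users.
first_seen = pd.Series(pd.to_datetime([
    "2026-01-05", "2026-01-07", "2026-01-14", "2026-02-02",
]))

daily   = first_seen.dt.to_period("D")   # day-granularity cohorts
weekly  = first_seen.dt.to_period("W")   # week-granularity cohorts
monthly = first_seen.dt.to_period("M")   # month-granularity cohorts

# The first two users fall into the same weekly cohort.
print(weekly.value_counts().sort_index())
```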
Reading the cohort grid
The cohort grid shows:
- Rows = acquisition cohorts (each cohort = users acquired in a specific period)
- Columns = periods since acquisition (0, 1, 2, 3... weeks or months since first session)
- Values = retention rate or count for each cohort at each period
Period 0: Always 100% (or the full user count) — this is the acquisition period itself.
Period 1: The most important number. What % of users came back in the next period?
The retention curve shape: Healthy products show a flattening retention curve — high initial drop-off that stabilises at a "retained core." Declining products show continuous drop-off with no plateau.
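One simple way to operationalise "flattening" is to check how much the tail of the curve still moves. The two curves and the tolerance below are invented for illustration:

```python
# Toy retention curves (fraction retained at periods 0..7), invented numbers.
healthy   = [1.00, 0.40, 0.28, 0.24, 0.22, 0.21, 0.21, 0.21]   # stabilises
declining = [1.00, 0.40, 0.28, 0.20, 0.14, 0.10, 0.07, 0.05]   # keeps dropping

def has_plateau(curve, window=3, tolerance=0.02):
    """True if the last `window` periods move less than `tolerance` in total."""
    tail = curve[-window:]
    return max(tail) - min(tail) < tolerance

print(has_plateau(healthy), has_plateau(declining))
```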
Benchmark retention rates by product type
| Product type | Week-1 retention | Month-1 retention | Healthy floor |
|---|---|---|---|
| E-commerce (repeat purchase) | 15–25% | 10–20% | >10% Week 4 |
| SaaS (active users) | 50–70% | 30–50% | >25% Month 3 |
| Content/media | 20–35% | 15–25% | >10% Month 2 |
| Mobile app (daily) | 25–40% | 10–20% | >5% Day 30 |
Properties below the "healthy floor" for their product type have a retention problem worth addressing before scaling acquisition spend.
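If you script your reporting, the table above can be turned into an automatic check. The dictionary keys and period labels below are assumptions about how you might structure your own data:

```python
# Healthy-floor thresholds from the table above; labels are hypothetical.
HEALTHY_FLOOR = {
    "ecommerce":  ("week_4",  0.10),
    "saas":       ("month_3", 0.25),
    "content":    ("month_2", 0.10),
    "mobile_app": ("day_30",  0.05),
}

def retention_problem(product_type, retention_by_period):
    """True if the property sits below the healthy floor for its type."""
    period, floor = HEALTHY_FLOOR[product_type]
    return retention_by_period.get(period, 0.0) < floor

# Hypothetical shop: 7% of first purchasers still buying in week 4.
print(retention_problem("ecommerce", {"week_4": 0.07}))
```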
Using cohort exploration to measure product changes
The cohort grid is a natural before/after validator for product changes when a proper A/B testing framework isn't available:
Method:
- Identify the week/month a product change was deployed
- Compare the cohort row for users acquired the week before the change vs the week after
- Track retention curves for both cohorts over the following 8–12 weeks
If the post-change cohort shows higher week-4 retention than the pre-change cohort (all other things being equal), the change likely improved retention.
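The comparison itself is simple arithmetic; the cohort sizes and retained counts below are hypothetical:

```python
# Hypothetical week-4 figures for cohorts acquired just before and just
# after a product change shipped.
pre_change  = {"cohort_size": 2400, "week_4_retained": 288}
post_change = {"cohort_size": 2100, "week_4_retained": 336}

def week4_rate(cohort):
    return cohort["week_4_retained"] / cohort["cohort_size"]

lift = week4_rate(post_change) - week4_rate(pre_change)
print(f"week-4 retention: {week4_rate(pre_change):.1%} -> "
      f"{week4_rate(post_change):.1%} (lift {lift:+.1%})")
```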
Confounders to watch:
- Seasonal traffic changes (holiday vs non-holiday user quality differs)
- Marketing campaign changes (paid acquisition quality varies by campaign)
- Cohort size differences (small cohorts produce unstable percentages)
This method is directional, not causal. Use proper A/B testing for definitive attribution; use cohort comparison for directional evidence.
Minimum cohort size for statistical validity
GA4 cohort exploration is unreliable below certain cohort sizes:
- Below 100 users per cohort: Percentages fluctuate ±10–20% from sampling and small-number effects. Not suitable for decision-making.
- 100–500 users per cohort: Directional only. Week-1 retention ±5%.
- 500+ users per cohort: Generally reliable for weekly retention analysis.
- 1,000+ users per cohort: Reliable enough for stakeholder reporting.
If your property has low weekly traffic (under 1,000 sessions/week), use monthly cohort granularity to accumulate sufficient cohort sizes.
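The instability figures above follow from the binomial standard error of a proportion. A quick check with the normal-approximation 95% margin of error, assuming a true retention rate of 25%:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% normal-approximation margin of error for a retention proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 100, 500, 1000):
    print(f"n={n:>4}: +/-{margin_of_error(0.25, n):.1%}")
```

At 50 users the margin is around twelve percentage points; at 1,000 it falls under three, which is roughly the reliability gradient the list above describes.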
BigQuery cohort analysis for extended retention
GA4 Exploration cohorts are limited to 12 periods. For 6-month, 12-month, or 24-month retention analysis — common requirements for subscription businesses — use the BigQuery export. A first-touch cohort query over the raw `events_*` tables produces a cohort/period matrix you can connect to Looker Studio for visualisation, with no 12-period limit.
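The production version of such a query belongs in BigQuery Standard SQL over the `events_*` export, but the core logic (first-touch cohort per user, then distinct returning users per period since acquisition) can be sketched in pandas on synthetic data. Column names here are assumptions; in the export you would key on `user_pseudo_id` and `event_date`:

```python
import pandas as pd

# Synthetic event log: one row per (user, activity week).
events = pd.DataFrame({
    "user_id": ["a", "a", "a", "b", "b", "c"],
    "week":    [0,    1,    3,    0,    2,    1],
})

# First-touch cohort: the earliest week each user appeared.
cohort = events.groupby("user_id")["week"].min().rename("cohort_week")
events = events.join(cohort, on="user_id")
events["period"] = events["week"] - events["cohort_week"]

# Cohort/period matrix: distinct users per cohort at each period.
matrix = (events.groupby(["cohort_week", "period"])["user_id"]
                .nunique()
                .unstack(fill_value=0))
print(matrix)
```

Rows are acquisition cohorts, columns are periods since acquisition, exactly the grid the Exploration UI draws, but with as many periods as your data contains.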