
GA4 Explorations: The 7 Exploration Types and When to Use Each (2026)

Intermediate

What are the 7 GA4 Exploration types?

GA4 Explorations has seven analysis types: (1) Free Form — flexible pivot-table style analysis for any dimension/metric combination; (2) Funnel Exploration — step-by-step conversion path analysis with open and closed funnels; (3) Path Exploration — forward and backward user journey mapping from any event; (4) Segment Overlap — Venn diagram comparison of up to three user segments; (5) User Explorer — individual user-level event stream (requires User-ID or device ID); (6) Cohort Exploration — retention analysis by user acquisition date; (7) User Lifetime — LTV and predicted revenue analysis across a user's full history. Critical limit: all Explorations cap at 10 million events per query, apply sampling on large date ranges, and data is limited to the property's event data retention window (default 2 months; extending to 14 months requires a manual setting change; standard aggregated reports are not affected by this setting).

The 7 exploration types in detail

1. Free Form Exploration

What it answers: Any question that requires cross-dimensioning two or more fields — "sessions by device AND source AND landing page" or "events by user type AND country."

The pivot table of GA4. You drag dimensions to rows and columns, metrics to values. Supports table, donut chart, line chart, scatter plot, bar chart, and geo chart visualisations. Up to 10 dimensions and 10 metrics per exploration.
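Conceptually, a Free Form table is just a count of events grouped by the dimension values you place on rows and columns. A minimal sketch, using hypothetical flat event records (the field names are illustrative, not the GA4 export schema):

```python
from collections import Counter

# Hypothetical flat event export; field names are illustrative only.
events = [
    {"event": "page_view", "device": "mobile", "source": "google"},
    {"event": "page_view", "device": "desktop", "source": "google"},
    {"event": "purchase", "device": "mobile", "source": "email"},
    {"event": "page_view", "device": "mobile", "source": "google"},
]

# Rows = device, columns = source, values = event count (the metric).
pivot = Counter((e["device"], e["source"]) for e in events)

print(pivot[("mobile", "google")])  # 2
```

Adding a third dimension is just a longer grouping key, which is why Free Form handles cross-tabs the standard reports never combine.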

Best for:

  • Ad hoc analysis that doesn't fit a standard report
  • Cross-tabbing dimensions the standard reports don't combine (event name × landing page × device)
  • Spot-checking data quality (event counts by page × event name)

Key limits:

  • Sampling applies once the query exceeds the Explorations quota (10 million events per query on standard properties)
  • Maximum 50,000 rows in the table view (export to Google Sheets for more)
  • Data available up to the property's event data retention window

Common mistake: Setting date ranges beyond 90 days and not noticing the sampling indicator. Sampled explorations can deviate substantially from unsampled figures. Check the sampling badge (top right of the exploration) before drawing conclusions.

2. Funnel Exploration

What it answers: What percentage of users complete each step of a defined sequence, and where do they drop off?

Open vs closed funnels:

  • Open funnel: Users can enter at any step, not just step 1. Use for micro-conversion flows where users might arrive mid-funnel via a direct link.
  • Closed funnel: Users must complete step 1 before step 2 is counted. Use for checkout funnels and sequential processes where skipping steps is not meaningful.
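The open/closed distinction above can be made concrete with a small sketch. This is an illustration of the counting logic, not GA4's implementation; the open-funnel branch is deliberately simplified (it ignores event ordering, which is looser than GA4's actual open-funnel rule):

```python
def funnel_counts(user_events, steps, closed=True):
    """Count how many users reach each funnel step.

    user_events: user_id -> ordered list of event names.
    closed=True: steps must be completed strictly in order from step 1.
    closed=False: simplified open funnel; a user counts for any step whose
    event appears anywhere in their stream, regardless of order.
    """
    counts = [0] * len(steps)
    for events in user_events.values():
        if closed:
            nxt = 0  # index of the next step this user must complete
            for ev in events:
                if nxt < len(steps) and ev == steps[nxt]:
                    nxt += 1
            for i in range(nxt):
                counts[i] += 1
        else:
            seen = set(events)
            for i, step in enumerate(steps):
                if step in seen:
                    counts[i] += 1
    return counts

steps = ["view_cart", "begin_checkout", "purchase"]
users = {
    "u1": ["view_cart", "begin_checkout", "purchase"],
    "u2": ["begin_checkout", "purchase"],  # entered mid-funnel
    "u3": ["view_cart"],
}
print(funnel_counts(users, steps, closed=True))   # [2, 1, 1]
print(funnel_counts(users, steps, closed=False))  # [2, 2, 2]
```

Note how u2, who entered mid-funnel, is invisible to the closed funnel but counted by the open one; this is exactly the gap the "open vs closed" choice controls.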

Best for:

  • Checkout conversion rate by step (view cart → begin checkout → add payment → purchase)
  • Lead generation funnel (landing page view → form view → form submit → thank you page)
  • Onboarding flow (sign up → profile complete → first key action)

Key limits:

  • Maximum 10 steps per funnel
  • Steps are defined by event name and parameter conditions only; custom SQL-style logic per step is not supported
  • Elapsed time between steps can be measured but not filtered on

Segment breakdown: Apply up to two user segments to see funnel performance by segment side-by-side — e.g., new vs returning users through the same checkout funnel.

3. Path Exploration

What it answers: After [event X], what do users do next? Before [event X], where did users come from?

Forward paths start from a specified event and show the sequence of events that follow. Backward paths end at a specified event and show what preceded it.
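Under the hood, a forward path is essentially a tally of which event immediately follows the chosen starting event, repeated level by level. A minimal sketch of the first level, over hypothetical session event lists:

```python
from collections import Counter

def next_steps(sessions, start_event):
    """Tally the event that immediately follows start_event in each session.

    sessions: list of ordered event-name lists. A forward path exploration
    is roughly this tally, applied again at each subsequent tree level.
    """
    following = Counter()
    for events in sessions:
        for i, ev in enumerate(events[:-1]):
            if ev == start_event:
                following[events[i + 1]] += 1
    return following

sessions = [
    ["page_view", "view_item", "add_to_cart"],
    ["page_view", "view_item", "page_view"],
    ["view_item", "add_to_cart", "purchase"],
]
print(next_steps(sessions, "view_item"))
# add_to_cart follows twice, page_view once
```

A backward path is the same tally run over reversed sessions, which is why both directions are cheap to compute from the same event stream.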

Best for:

  • Understanding what users do after viewing a key product page
  • Diagnosing where users go after a failed checkout step
  • Identifying unexpected navigation patterns (users who view the FAQ page — where did they come from, what did they do next?)

Key limits:

  • Paths limited to 5 steps forward or backward by default (expandable to 10)
  • High-event-count properties produce very wide trees that become difficult to read
  • Session-scoped vs event-scoped path exploration behave differently — session-scoped resets at session boundaries

Common mistake: Running a path exploration across all events without filtering to a specific starting event. The resulting tree is too wide to interpret. Always start from a specific meaningful event.

4. Segment Overlap

What it answers: How do your user segments overlap? What percentage of users are in multiple segments simultaneously?

Displays a Venn diagram for up to three user segments with intersection counts and percentages.
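The Venn regions are plain set operations over user IDs, which is worth keeping in mind when sanity-checking the counts. A sketch with three hypothetical segments:

```python
# Hypothetical user-ID sets for three segments.
paid = {"u1", "u2", "u3", "u4"}
email = {"u3", "u4", "u5"}
purchasers = {"u4", "u5", "u6"}

# Each Venn region is a set expression over the segment memberships.
in_all_three = paid & email & purchasers
paid_and_email_only = (paid & email) - purchasers
overlap_pct = len(paid & email) / len(paid | email)

print(sorted(in_all_three))         # ['u4']
print(sorted(paid_and_email_only))  # ['u3']
print(overlap_pct)                  # 0.4
```

Because the arithmetic is over users (not sessions), a user who converts in one session and arrives via paid in another still lands in the intersection.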

Best for:


  • Understanding how much your paid acquisition audience overlaps with your email audience
  • Identifying the "power user" segment that is in multiple high-value segments simultaneously
  • Sizing the audience that converted AND was acquired via a specific channel

Key limits:

  • Maximum 3 segments simultaneously
  • User counts, not session counts
  • Sampling applies on large user bases

5. User Explorer

What it answers: What did a specific user do on this site, event by event?

Shows an individual user's complete event stream: every event they triggered, in chronological order, with timestamps, device information, and parameter values.
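The underlying operation is simple: filter the event stream to one ID and sort by timestamp. A sketch over hypothetical event records (field names are illustrative, not the GA4 schema):

```python
# Hypothetical raw events; "ts" is a Unix timestamp.
events = [
    {"user_id": "u42", "ts": 1700000120, "event": "purchase"},
    {"user_id": "u42", "ts": 1700000000, "event": "page_view"},
    {"user_id": "u7",  "ts": 1700000050, "event": "page_view"},
    {"user_id": "u42", "ts": 1700000060, "event": "add_to_cart"},
]

# One user's chronological event stream, as User Explorer displays it.
stream = sorted(
    (e for e in events if e["user_id"] == "u42"),
    key=lambda e: e["ts"],
)
print([e["event"] for e in stream])  # ['page_view', 'add_to_cart', 'purchase']
```

With User-ID implemented, the same filter stitches events across devices; without it, each device ID produces its own stream.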

Best for:

  • QA testing your own tracking (check your specific user ID shows correct events)
  • Investigating anomalous high-value users (one user with 500 purchase events — is it real?)
  • Customer service context (with User-ID, find a specific customer's journey)

Key limits:

  • Requires User-ID for meaningful use with known customers; without it, shows device-pseudonymous IDs
  • Limited to 500 rows per user's event history in the UI
  • Data available within the event data retention window only

Privacy consideration: User Explorer is the most privacy-sensitive exploration. Accessing it for real users requires appropriate authorisation and GDPR/UK GDPR basis. Build internal policies around who can access user-level data.

6. Cohort Exploration

What it answers: For users acquired in a specific week or month, how many returned in subsequent weeks/months?

Displays a cohort retention grid: acquisition cohort (rows) × weeks/months since acquisition (columns) × retention rate (values).
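The grid described above reduces to a small computation: group users by the week they were first seen, then check each later week for activity. A sketch under the assumption that you already have each user's first-seen week and active weeks (helper names are hypothetical):

```python
def retention_grid(first_seen_week, active_weeks, n_periods=4):
    """Build {cohort_week: [retention % for each week since acquisition]}.

    first_seen_week: user_id -> week index of first visit (the cohort).
    active_weeks: user_id -> set of week indices with any activity.
    """
    cohorts = {}
    for user, week0 in first_seen_week.items():
        cohorts.setdefault(week0, []).append(user)
    grid = {}
    for week0, users in sorted(cohorts.items()):
        row = []
        for offset in range(n_periods):
            returned = sum(1 for u in users if week0 + offset in active_weeks[u])
            row.append(round(100 * returned / len(users)))
        grid[week0] = row
    return grid

first_seen = {"a": 0, "b": 0, "c": 1}
active = {"a": {0, 1}, "b": {0}, "c": {1, 2}}
print(retention_grid(first_seen, active))
# {0: [100, 50, 0, 0], 1: [100, 100, 0, 0]}
```

The week-0 column is always 100% by construction, which is also why tiny cohorts swing wildly in later columns: one returning user out of five moves a cell by 20 points.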

Best for:

  • Measuring the long-term retention impact of a product change
  • Comparing retention rates for users acquired from different channels (paid vs organic cohorts)
  • Identifying the "activation window" — how quickly do newly acquired users show retention-predicting behaviour?

Key limits:

  • Maximum 12 cohort periods by default
  • Cohort definition is acquisition-based (first visit/session) not arbitrary event-based
  • Requires sufficient cohort sizes for statistical meaningfulness — cohorts under 100 users produce unstable percentages

Cohort metric options: Retention (% returning), Total users, Session counts per cohort, or custom key event counts. Key event per cohort is the most useful for product teams: "of users acquired in January, how many completed [key activation event] in week 2?"

7. User Lifetime

What it answers: What is the cumulative revenue and engagement across a user's full lifetime with your property?

Shows LTV-style metrics: total revenue per user, total sessions per user, predicted revenue (requires GA4 predictive metrics to be active — needs 1,000+ purchasers and 1,000+ non-purchasers in the last 28 days).
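The channel-level LTV comparison this exploration enables is, at its core, cumulative revenue per user attributed back to an acquisition channel. A sketch over hypothetical purchase records (the first-touch attribution here is a deliberate simplification):

```python
from collections import defaultdict

# Hypothetical purchase records: (user_id, first_touch_channel, revenue).
purchases = [
    ("u1", "paid_search", 40.0),
    ("u1", "paid_search", 60.0),
    ("u2", "organic", 25.0),
    ("u3", "organic", 125.0),
]

ltv = defaultdict(float)
channel = {}
for user, ch, revenue in purchases:
    ltv[user] += revenue          # cumulative revenue across the user's lifetime
    channel.setdefault(user, ch)  # attribute the user to their first-touch channel

# Average LTV per acquisition channel.
by_channel = defaultdict(list)
for user, total in ltv.items():
    by_channel[channel[user]].append(total)
avg_ltv = {ch: sum(v) / len(v) for ch, v in by_channel.items()}

print(avg_ltv)  # {'paid_search': 100.0, 'organic': 75.0}
```

This is the comparison that first-session conversion counts hide: a channel can win on initial conversions and still lose on lifetime value.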

Best for:

  • Identifying which acquisition channels produce the highest-LTV users (not just most first-session conversions)
  • Building the business case for top-of-funnel investment with long-term payback data
  • Sizing the predicted revenue opportunity from existing user segments

Key limits:

  • Predicted revenue requires predictive metrics activation (minimum user volumes)
  • Data limited to the property's event retention window — for most properties this is 2 months unless extended
  • Does not support cross-device stitching unless User-ID is implemented

The five most common Explorations mistakes

Mistake 1 — Ignoring the sampling indicator. A yellow badge in the top-right corner of any exploration means the data is sampled. GA4 gives no other warning: just the badge and subtly different numbers. Always check before sharing exploration results with stakeholders.

Mistake 2 — Using event data retention defaults. GA4's default event data retention is 2 months. Explorations only go back as far as the event retention window. If you want 14-month Explorations, change the setting immediately: Admin → Data Settings → Data Retention → Event data retention → 14 months. You cannot retroactively recover data beyond your retention window.

Mistake 3 — Confusing user vs session scope. Many Explorations dimensions can be either user-scoped or session-scoped. "Sessions" reports per-visit data; "Users" reports per-person data. Mixing a session-scoped dimension with a user metric (or vice versa) produces confusing inflation. Check the scope icon next to each dimension in the Explorations sidebar.

Mistake 4 — Building funnels without testing open vs closed. The conversion rate difference between open and closed funnel for a checkout flow can be 20–40%. Always confirm which type fits your question. Closed funnel is correct for sequential processes; open funnel inflates conversion rates by allowing mid-funnel entries.

Mistake 5 — Sharing explorations without verifying segment definitions. Explorations use segments that can be edited by any editor of the property. If you share an exploration and someone edits the underlying segment, the exploration silently shows different data. For stakeholder-facing explorations, convert them to standard Looker Studio reports (which have more stable configurations) before sharing broadly.

FAQ: GA4 Explorations: The 7 Exploration Types and When to Use Each

What should a team validate first when an Explorations data issue appears?

Reproduce the problem in the live implementation, isolate whether it is scoped to one report or flow, and compare it against at least one secondary source before changing the setup.

How do I know whether the fix actually worked?

You need before-and-after evidence in the browser and in the downstream report. A clean-looking dashboard without validation is not enough.

When should this become a full GA4 audit instead of a quick fix?

If the issue touches attribution, consent, revenue, campaign quality, or data trust for more than one workflow, it is usually safer to audit the surrounding implementation than patch only the visible symptom.

Run a GA4 audit before Explorations discrepancies spread into reporting decisions

Use GA4 Audits to surface implementation gaps, broken signals, and the next fixes to prioritize before the issue becomes harder to trust or explain.

These findings come from auditing thousands of GA4 properties.

GA4 Audits Team


Analytics Engineering

Specialising in GA4 architecture, consent mode implementation, and multi-layer audit frameworks.
