Growth Analytics Stack for AI Viral Labs
When you run 25 labs at once, gut feel is not enough. This guide covers the event schema, dashboards, and attribution loops we rely on to keep growth experiments honest.
Event schema that mirrors the player journey
We log five primary events: view_lab, start_run, complete_run, share_click, and checkout_intent. Each carries metadata (tool slug, client ID, credit balance, referral channel, model in use). Because labs share the same analytics schema, we can compare conversion apples-to-apples and spot when a single slug is dragging blended results down.
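For concreteness, here is a minimal Python sketch of that shared payload, assuming a generic track() helper; the field names and the example values are illustrative, not our production schema.

```python
from dataclasses import dataclass, asdict
from typing import Literal

# The five primary events every lab emits.
EventName = Literal["view_lab", "start_run", "complete_run", "share_click", "checkout_intent"]

@dataclass
class LabEvent:
    """Shared payload for every lab, so conversion stays comparable across slugs."""
    event: EventName
    tool_slug: str          # which lab fired the event
    client_id: str          # anonymous client identifier
    credit_balance: int     # credits remaining at the moment of the event
    referral_channel: str   # e.g. "tiktok", "discord", "direct"
    model: str              # model in use for this run

def track(event: LabEvent) -> None:
    # Placeholder: forward to whatever analytics pipeline you actually use.
    print(asdict(event))

track(LabEvent("start_run", "anime-avatar", "c_123", 4, "tiktok", "gpt-4o"))
```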
Credit-aware funnels
Traditional funnels stop once the run completes. Instead, we extend ours through credits. Example: a user sees a lab, runs it twice, hits zero credits, taps top up, but abandons checkout. That is counted as credit friction, not product-market-fit failure. We segment funnels by credit balance so we know whether users churn because the lab is done or because the wallet is empty.
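A rough sketch of how that credit segmentation can be folded into a funnel rollup; the bucket boundaries below are placeholders, not our real cutoffs.

```python
from collections import Counter

def credit_bucket(balance: int) -> str:
    """Bucket users so 'wallet empty' churn is separated from 'lab finished' churn."""
    if balance <= 0:
        return "zero_credits"
    if balance <= 3:
        return "low_credits"
    return "healthy_credits"

def funnel_by_credit(events: list[dict]) -> Counter:
    # Count each funnel step per credit bucket, e.g. ("checkout_intent", "zero_credits").
    return Counter((e["event"], credit_bucket(e["credit_balance"])) for e in events)

events = [
    {"event": "complete_run", "credit_balance": 2},
    {"event": "checkout_intent", "credit_balance": 0},  # credit friction, not PMF failure
]
print(funnel_by_credit(events))
```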
Attribution for chaos
Half our traffic comes from screenshots hopping across platforms. We attach a short share_id to every share modal and stuff it inside the URL, QR code, and share copy. When a new user arrives, we log the share_id plus the resolved platform. Over time we see which creators or fans drive actual runs, not just clicks. The analytics team gets a weekly report ranking share IDs by credits burned.
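A minimal sketch of stamping and resolving a share_id, assuming the query parameter is called sid and labs.example.com stands in for the real domain.

```python
import secrets
from urllib.parse import urlencode, urlparse, parse_qs

def build_share_url(tool_slug: str) -> tuple[str, str]:
    """Generate a short share_id and embed it in the URL used by the modal, QR code, and share copy."""
    share_id = secrets.token_urlsafe(6)
    url = f"https://labs.example.com/{tool_slug}?{urlencode({'sid': share_id})}"
    return share_id, url

def resolve_share_id(landing_url: str) -> str | None:
    """Pull the share_id back out when a new user lands, so their runs can be credited to the sharer."""
    params = parse_qs(urlparse(landing_url).query)
    return params.get("sid", [None])[0]

share_id, url = build_share_url("anime-avatar")
assert resolve_share_id(url) == share_id
```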
Dashboard hygiene
We maintain separate dashboards for campaign view (which labs are hot), cohort view (how many users return), and monetization (credit purchases, ARPPU, refund rate). Each dashboard is intentionally boring: same colors, same layout, so on-call folks can read them at 3 a.m. If a metric needs context, we attach a Loom video explaining how to interpret it.
Layer qualitative insight on top
Numbers tell you what; players tell you why. Embed a one-question poll on the results page ("Did this lab hit the vibe?"). Tag answers with the tool slug and export them weekly. When you see a lab with strong run counts but low satisfaction, you know to revisit prompts or share copy.
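A small illustrative helper for that weekly pass, assuming the export carries one boolean hit_the_vibe per answer; the run-count and satisfaction thresholds here are made up for the example.

```python
def flag_low_satisfaction(poll_rows: list[dict], run_counts: dict[str, int],
                          min_runs: int = 500, min_rate: float = 0.6) -> list[str]:
    """Flag labs with healthy run volume but weak poll sentiment (thresholds are illustrative)."""
    by_slug: dict[str, list[bool]] = {}
    for row in poll_rows:
        by_slug.setdefault(row["tool_slug"], []).append(row["hit_the_vibe"])
    flagged = []
    for slug, answers in by_slug.items():
        rate = sum(answers) / len(answers)
        if run_counts.get(slug, 0) >= min_runs and rate < min_rate:
            flagged.append(slug)
    return flagged

# Example: lots of runs, weak sentiment -> revisit prompts or share copy.
rows = [{"tool_slug": "anime-avatar", "hit_the_vibe": False},
        {"tool_slug": "anime-avatar", "hit_the_vibe": True},
        {"tool_slug": "anime-avatar", "hit_the_vibe": False}]
print(flag_low_satisfaction(rows, {"anime-avatar": 1200}))  # -> ['anime-avatar']
```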
Budget guardrails
When multiple LLM providers are available, cost can spiral. We log per-run provider cost and monitor it next to credit burn. If a lab switches from OpenAI to Claude, the dashboard highlights the delta so finance does not panic when COGS shifts. Alerts fire if average cost per run exceeds a threshold for more than an hour.
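A simplified sketch of that guardrail, assuming a monitoring job periodically recomputes a rolling average provider cost per run; the threshold and duration below are placeholders, not our real budget numbers.

```python
from datetime import datetime, timedelta, timezone

ALERT_THRESHOLD = 0.05           # illustrative: average provider cost per run, in dollars
ALERT_DURATION = timedelta(hours=1)

_breach_started: datetime | None = None

def check_cost_alert(avg_cost_per_run: float, now: datetime | None = None) -> bool:
    """Return True once the average cost per run has stayed above the threshold for over an hour.

    Call this each time the monitoring job recomputes the rolling average.
    """
    global _breach_started
    now = now or datetime.now(timezone.utc)
    if avg_cost_per_run <= ALERT_THRESHOLD:
        _breach_started = None       # back under budget, reset the timer
        return False
    if _breach_started is None:
        _breach_started = now        # first sample over budget
    return now - _breach_started > ALERT_DURATION
```

Requiring the breach to persist for the full window keeps one expensive batch of runs (or a brief provider switch) from paging anyone at 3 a.m.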
Experiment pipeline
Every growth experiment gets a JIRA ticket with hypothesis, metrics, and kill criteria. Analytics hooks into that ticket to set up temporary dashboards. When the experiment ends, we archive the view but keep screenshots and queries in a repository so we can re-run or audit later.
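As an illustration, the ticket metadata can be mirrored in a small record like the one below so temporary dashboards and the archive can be generated from it; the field names and the GROWTH-123 key are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class GrowthExperiment:
    """Illustrative record mirroring what the JIRA ticket captures."""
    ticket: str                      # e.g. "GROWTH-123" (placeholder key)
    hypothesis: str
    primary_metrics: list[str]
    kill_criteria: str
    dashboard_queries: list[str] = field(default_factory=list)  # archived when the experiment ends

exp = GrowthExperiment(
    ticket="GROWTH-123",
    hypothesis="Adding a QR code to the share modal lifts new-user runs by 10%",
    primary_metrics=["start_run per share_id", "credits burned per referral"],
    kill_criteria="No lift after 10k shares, or cost per run exceeds budget",
)
```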
Share learnings internally
Dashboards only matter if the team reads them. We host a 20-minute "Lab Weather" stand-up twice a week: top-performing labs, credit anomalies, creator shout-outs, and upcoming drops. Everyone leaves with a single action item tied to the numbers, which keeps analytics from living in a silo.