
Published Oct 8, 2025 · 8 min read · by AI Viral Test Lab

How to Build a Credit System for Viral AI Labs

Viral AI tests surge in unpredictable waves. You need a throttle that keeps servers alive without killing excitement. Here is how we built a device-based credit engine that feels playful, supports screenshots, and keeps fraud in check.

1. Define the daily allowance in narrative terms

People accept limits when the story is consistent. At AI Viral Test Lab we settled on five free credits per UTC day because it matches our internal cost model and sounds like a snack-sized challenge: try five names, then tell your friends. Framing matters. We do not say "You have consumed 5 API calls". We say "5 experiments drop at midnight". The wording makes the limit feel like a gift, not a punishment.

Before you pick a number, map the cost of each run, the average length of a session, and the promise you make in marketing. If you highlight "Try a chaos meter with your friends," you must let a single user test multiple friends. That is why we do not go below five credits even when traffic spikes.
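
To make that mapping concrete, here is a minimal sketch of how the allowance and the UTC reset boundary can live in one shared config. The file name, constants, and key format are illustrative assumptions, not our production code.

```javascript
// config/credits.js - illustrative constants and helpers; the names are ours,
// not part of any published API.
const DAILY_FREE_CREDITS = 5;    // free experiments per UTC day
const SHARE_BONUS = 3;           // credits per confirmed share
const SHARE_BONUS_DAILY_CAP = 9; // maximum share bonus per UTC day

// Key balances by UTC date so every device resets at the same midnight.
function utcDayKey(clientId, now = new Date()) {
  const day = now.toISOString().slice(0, 10); // e.g. "2025-10-08"
  return `credits:${clientId}:${day}`;
}

module.exports = { DAILY_FREE_CREDITS, SHARE_BONUS, SHARE_BONUS_DAILY_CAP, utcDayKey };
```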

2. Track by client_id first, IP second

Traditional per-IP counters block roommates and co-workers. Instead, we generate a `client_id` in usage.js and store it in localStorage. Redis still captures global usage for abuse monitoring, but balance data lives in SQLite so we can attach metadata like share bonuses and pack purchases. The IP address acts only as a fallback when localStorage is unavailable.
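
Here is a minimal sketch of the client-side half, assuming a modern browser where `crypto.randomUUID()` is available; the storage key name is an illustrative choice.

```javascript
// usage.js (sketch) - device-based identity with a graceful fallback.
const CLIENT_ID_KEY = 'avtl_client_id'; // illustrative key name

function getClientId() {
  try {
    let id = localStorage.getItem(CLIENT_ID_KEY);
    if (!id) {
      id = crypto.randomUUID();            // stable per device, not per IP
      localStorage.setItem(CLIENT_ID_KEY, id);
    }
    return id;
  } catch (err) {
    // Private browsing or blocked storage: return null so the backend
    // falls back to the caller's IP address instead.
    return null;
  }
}
```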

Implementation checklist:

  1. Generate a UUID `client_id` in usage.js on the first visit and persist it in localStorage.
  2. Attach the `client_id` to every generation and share request.
  3. Keep per-IP counters in Redis for global abuse monitoring only.
  4. Store balances, share bonuses, and pack purchases in SQLite so the metadata stays attached to the device.
  5. Fall back to the caller's IP when localStorage is blocked or unavailable.

3. Turn sharing into a refill ritual

Giving +3 credits per genuine share feels like a cheat code and reliably multiplies reach. The key is to make the reward instant and visible. When someone taps the Share button in a lab, the modal asks for a platform (TikTok, Instagram, Twitter, Discord, Copy Link). Once they confirm, we call `/api/share`, log the event with UTM parameters, and display the new balance inside the credit indicator.
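
A sketch of that client-side flow, assuming the endpoint accepts a JSON body and returns the refreshed balance; `updateCreditIndicator` is a hypothetical UI helper.

```javascript
// Called once the visitor confirms a platform in the share modal.
async function recordShare(platform) {
  const res = await fetch('/api/share', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ client_id: getClientId(), platform }),
  });
  const data = await res.json();
  // Response shape is an assumption; showing the fresh balance right away
  // is what makes the reward feel instant.
  if (typeof data.credits === 'number') updateCreditIndicator(data.credits);
  return data;
}
```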

To keep the system honest, we cap bonuses at +9 per day and attach a share_id to the log. That lets us analyze which captions or prompts create the biggest loop. Sometimes the best-performing CTA is as simple as "Show your Toxic Score and tag the next person".
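
Enforcing the cap server-side could look like the sketch below, assuming an Express app with JSON body parsing and a better-sqlite3 handle named `db`; the table layout and the `currentBalance` helper are illustrative.

```javascript
// POST /api/share - grant +3 per share, capped at +9 of bonus per UTC day.
const crypto = require('crypto');

app.post('/api/share', (req, res) => {
  const { client_id, platform } = req.body || {};
  if (!client_id || !platform) return res.status(400).json({ error: 'missing fields' });

  const day = new Date().toISOString().slice(0, 10);
  const bonusToday = db
    .prepare('SELECT COALESCE(SUM(amount), 0) AS total FROM share_bonuses WHERE client_id = ? AND day = ?')
    .get(client_id, day).total;

  const granted = Math.min(3, Math.max(0, 9 - bonusToday)); // never exceed the daily cap
  const shareId = crypto.randomUUID();                      // share_id for loop analysis

  db.prepare('INSERT INTO share_bonuses (share_id, client_id, platform, amount, day) VALUES (?, ?, ?, ?, ?)')
    .run(shareId, client_id, platform, granted, day);

  res.json({ share_id: shareId, granted, credits: currentBalance(client_id) });
});
```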

4. Protect the budget without punishing fans

Three lines of defense keep the lab from melting down:

  1. Global hourly limit: Redis tallies successful generations per IP per hour. When the count crosses our threshold, we refund the credit and ask that user to wait.
  2. Anti-bot challenge: After sustained spikes, we return a 429 that triggers a simple math quiz in the client. Passing the quiz sets a verification token so legitimate users keep playing.
  3. Screenshot delay: We add a 1.3-second animation before the result renders. It gives the LLM time to respond and reinforces the ritual of "charging" the card.

These controls are transparent in the UI. Visitors understand why a request was blocked, and we immediately refund the credit when a guardrail activates.
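
For the first guardrail, here is a sketch using node-redis style `incr`/`expire` calls; the key format, the threshold, and the `refundCredit` helper are assumptions.

```javascript
// Guardrail 1: hourly per-IP cap. Threshold and key format are illustrative.
const HOURLY_LIMIT = 30;

async function underHourlyLimit(redis, ip) {
  const hour = new Date().toISOString().slice(0, 13); // e.g. "2025-10-08T14"
  const key = `gen:${ip}:${hour}`;
  const count = await redis.incr(key);
  if (count === 1) await redis.expire(key, 3600); // let the window clean itself up
  return count <= HOURLY_LIMIT;
}

// Inside the generation route (sketch):
// if (!(await underHourlyLimit(redis, req.ip))) {
//   await refundCredit(client_id); // hypothetical helper: gives the credit back
//   return res.status(429).json({ error: 'Hourly limit reached - credit refunded' });
// }
```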

5. Prepare the system for paid packs

Even while we wait for approval from the next payment provider, we have already built the plumbing for `/api/credits/checkout` and `/api/credits/purchase`. The checkout route validates pack IDs, generates a hosted payment URL, and records the pending order. The webhook route verifies signatures, checks for duplicate order IDs, and increments the buyer's balance.
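
A sketch of the webhook half, assuming an Express route that receives the raw body, a generic HMAC signature header, and the same better-sqlite3 handle; every provider names its header and signs its payloads differently, so treat the details as placeholders.

```javascript
// POST /api/credits/purchase - webhook sketch. Header name, HMAC scheme,
// and table layout are assumptions; adapt them to your payment provider.
const crypto = require('crypto');

app.post('/api/credits/purchase', express.raw({ type: 'application/json' }), (req, res) => {
  const expected = crypto
    .createHmac('sha256', process.env.WEBHOOK_SECRET)
    .update(req.body)
    .digest('hex');
  const sigBuf = Buffer.from(req.get('x-webhook-signature') || '');
  const expBuf = Buffer.from(expected);
  if (sigBuf.length !== expBuf.length || !crypto.timingSafeEqual(sigBuf, expBuf)) {
    return res.status(401).json({ error: 'bad signature' });
  }

  const event = JSON.parse(req.body);
  // Ignore duplicate order IDs so a retried webhook can never double-credit.
  if (db.prepare('SELECT 1 FROM orders WHERE order_id = ?').get(event.order_id)) {
    return res.json({ ok: true, duplicate: true });
  }

  db.prepare('INSERT INTO orders (order_id, client_id, credits) VALUES (?, ?, ?)')
    .run(event.order_id, event.client_id, event.credits);
  db.prepare('UPDATE balances SET credits = credits + ? WHERE client_id = ?')
    .run(event.credits, event.client_id);

  res.json({ ok: true });
});
```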

Two practical tips:

  1. Treat the order ID as an idempotency key: because the webhook rejects duplicates before crediting, a retried callback can never double-credit a buyer.
  2. Record the pending order at checkout time, not at webhook time, so you can reconcile balances even if a callback arrives late or out of order.

6. Keep messaging consistent everywhere

Nothing kills trust faster than mismatched copy. The same credit rules appear on our Pricing page, inside each lab, and on the About page. When we adjusted the daily allowance from ten to five, we updated every mention across the site, the blog, and the README. SEO posts like this one double as documentation; if a journalist or creator needs a quote, they see the latest policy right here.

7. Audit with real user journeys

The last step is running scenario tests:

  1. A new visitor spends all five daily credits, shares once, and confirms the +3 refill appears instantly in the credit indicator.
  2. A power user shares three times, hits the +9 daily cap, and sees that a fourth share grants nothing.
  3. A scripted client crosses the hourly per-IP limit, receives the 429 math quiz, and gets the blocked credit refunded.
  4. A visitor with localStorage disabled falls back to IP tracking and still sees a consistent balance.

Running through journeys like these surfaces edge cases faster than synthetic load tests.
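
One of those journeys can also be pinned down as an automated check. The sketch below assumes Jest and supertest, plus the response shape from the share handler sketched earlier; the app path is hypothetical.

```javascript
// Scenario: share bonuses stop at +9 per UTC day.
const request = require('supertest');
const app = require('../app'); // hypothetical path to the Express app

test('fourth share of the day grants no extra credits', async () => {
  const client_id = 'journey-test-device';
  for (let i = 0; i < 3; i++) {
    const res = await request(app).post('/api/share').send({ client_id, platform: 'copy_link' });
    expect(res.body.granted).toBe(3); // first three shares each add +3
  }
  const fourth = await request(app).post('/api/share').send({ client_id, platform: 'copy_link' });
  expect(fourth.body.granted).toBe(0); // cap of +9 already reached
});
```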