What should mobile game teams track after launch? This guide covers the essential events, the KPIs that matter most, and how to use retention, ARPDAU, cohorts, and LiveOps analytics to improve game performance over time.

Once a mobile game launches, analytics gets harder, not easier. Teams are suddenly looking at retention, monetization, engagement, LiveOps, and acquisition all at once. The challenge is no longer whether data exists. It is knowing what deserves attention.

In the first episode of Ask an Analyst, the GameAnalytics series where we answer real studios' questions, Russell Owens argues that once a game is live, teams need to simplify their focus. As he puts it, “Once your game is live, the games team really should only be focused on two KPIs.” Those are retention and ARPDAU. Watch the full episode here:

That idea is useful precisely because it cuts through post-launch noise. A live game can produce endless dashboards, but only a few metrics consistently help teams make better product decisions.

What changes after launch

Before worldwide release, teams usually evaluate a game in stages. They start with technical quality, then move into retention, then monetization. After launch, all of those areas matter at the same time. That is why post-launch analytics needs prioritization. Without it, teams end up monitoring everything and acting on nothing. The key shift is from validation to operation. Pre-launch analytics asks whether the game is ready. Post-launch analytics asks how to improve a live product every day.

The minimum event setup every team needs

A post-launch analytics stack does not need to start with hundreds of events. It needs a small, reliable foundation. Russell Owens highlights a minimal event set that supports the business of running a mobile game:

  • Session begin, which powers retention and session cadence analysis
  • Level complete, which shows progression behavior
  • Tutorial complete, which helps evaluate first-time user experience
  • Purchase, which captures in-app monetization
  • Ad view, which captures ad monetization where relevant

He describes session begin as fundamental because it is what allows teams to calculate retention in the first place. From there, a team can expand event coverage based on genre and product needs. But this minimal structure is enough to answer the first big post-launch question: are players coming in, coming back, progressing, and spending?
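To make that concrete, here is a minimal sketch of how session begin events alone can power a retention calculation. The event rows and user IDs are hypothetical, and a real pipeline would read them from your analytics backend rather than a literal list:

```python
from datetime import date, timedelta

# Hypothetical raw event rows: (user_id, event_name, event_date).
events = [
    ("u1", "session_begin", date(2024, 5, 1)),
    ("u1", "session_begin", date(2024, 5, 2)),
    ("u2", "session_begin", date(2024, 5, 1)),
    ("u3", "session_begin", date(2024, 5, 1)),
    ("u3", "session_begin", date(2024, 5, 3)),
]

def day_n_retention(events, install_day, n):
    """Share of the install_day cohort that opened a session exactly n days later."""
    sessions = [(u, d) for u, e, d in events if e == "session_begin"]
    # Simplification: treat each user's first session date as their install date.
    first_seen = {}
    for u, d in sessions:
        if u not in first_seen or d < first_seen[u]:
            first_seen[u] = d
    cohort = {u for u, d in first_seen.items() if d == install_day}
    if not cohort:
        return 0.0
    target = install_day + timedelta(days=n)
    returned = {u for u, d in sessions if u in cohort and d == target}
    return len(returned) / len(cohort)
```

With the sample rows above, `day_n_retention(events, date(2024, 5, 1), 1)` returns 1/3: three users started on May 1 and only one came back the next day. Without session begin, none of this is computable, which is exactly why it is the foundation event.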

Why the FTUE matters more than most teams think

One of the strongest points in the discussion is the importance of the first-time user experience, or FTUE. If retention is weak, the first place to look is not necessarily your economy or your LiveOps cadence. It is what happens when a new user first enters the game.

Russell puts it simply: “Look at what happens when a new user first interacts with your app, your game. Where do they drop off?”

That is a much more useful question than staring at day 1 retention in isolation. If a player does not return, what happened on day 0? What was the last thing they did? Did they hit confusion, boredom, friction, or a broken state? For mobile game teams, FTUE analysis should not stop after soft launch. It should be revisited after every meaningful update. A game that gets better for long-term players can easily become harder for new ones.
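A simple way to answer "where do they drop off?" is a step-by-step funnel over the FTUE events. The step names and counts below are hypothetical; the shape of the analysis is what matters:

```python
# Hypothetical counts of new users reaching each FTUE step, in order.
funnel = [
    ("app_open", 1000),
    ("tutorial_start", 920),
    ("tutorial_complete", 610),
    ("level_1_complete", 540),
]

def step_dropoff(funnel):
    """Percent of users lost between each pair of consecutive steps."""
    return [
        (f"{a} -> {b}", round(100 * (1 - n2 / n1), 1))
        for (a, n1), (b, n2) in zip(funnel, funnel[1:])
    ]

# The step with the largest loss is the first place to investigate.
worst = max(step_dropoff(funnel), key=lambda pair: pair[1])
```

In this fabricated example, the tutorial itself loses 33.7% of new users, far more than any other step, so that is where the FTUE work should start. Absolute day 1 retention alone would never point you there.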

The two KPIs that matter most: retention and ARPDAU

Owens makes a strong case that post-launch teams should focus primarily on retention and ARPDAU. Retention matters because without it, nothing else works. His phrasing is direct: “You can only monetize a user when they play.”

ARPDAU matters because it answers the next question: are you monetizing the players who are already here? These two together create a useful operating model:

  • Retention tells you if the experience is strong enough to bring players back
  • ARPDAU tells you if the active player base is generating enough value each day

This is also why Owens argues that game teams should not fixate on DAU or topline revenue. Those are important business metrics, but they are often the outcome of many systems working together, including growth and UA. Game teams have more direct influence over retention and monetization quality.
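The ARPDAU half of that operating model is a straightforward ratio: daily revenue divided by distinct daily active users. A minimal sketch, with hypothetical session rows:

```python
from collections import defaultdict

# Hypothetical session rows: (day, user_id, revenue_usd). A no-spend session
# still produces a row, so every active user counts toward DAU.
rows = [
    ("2024-05-01", "u1", 0.00),
    ("2024-05-01", "u2", 1.99),
    ("2024-05-01", "u3", 0.00),
    ("2024-05-02", "u1", 4.99),
    ("2024-05-02", "u2", 0.00),
]

def arpdau(rows):
    """Average revenue per daily active user, computed per day."""
    revenue, users = defaultdict(float), defaultdict(set)
    for day, user, amount in rows:
        revenue[day] += amount
        users[day].add(user)
    return {day: round(revenue[day] / len(users[day]), 4) for day in revenue}
```

Note that the denominator is all active users, not just spenders, which is why ARPDAU measures monetization quality of the whole player base rather than just purchase size.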

Why retention is usually the bigger lever

A key theme in the discussion is that retention tends to matter more in the long run than short-term monetization gains. Owens explains that the longer players stay engaged, the more likely they are to keep playing. In his words, “Playing begets playing.”

That is why day 7 retention is such an important milestone. Players who return on day 7 are often materially different from players who churn early. They are more likely to stick around, spend, and respond to LiveOps content over time. This is also why improving retention often creates larger long-term gains than trying to optimize ARPDAU in the short term. A game can push monetization harder, but if that damages the player experience too much, the long-term value of the player base suffers.

Cohorts and segments are not the same thing

The conversation also makes a useful distinction between cohorts and segments.

  • A cohort usually refers to a group of players with a shared time-based or acquisition-based trait, such as players who installed on the same date or came from the same campaign. Cohorts are especially useful for user acquisition analysis and lifetime value modeling.
  • A segment, on the other hand, is usually behavior-based. It helps teams understand different types of players, such as players who spend quickly, players who churn early, or players who engage a lot but never monetize.

This distinction matters because post-launch product decisions often need both lenses. Cohorts help explain where players came from and what they are worth. Segments help explain how different player types behave inside the game.
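The two lenses can be built from the same player data. In this sketch the player records and thresholds are invented purely to show the difference in how each group is defined:

```python
# Hypothetical per-player summaries.
players = [
    {"id": "u1", "install_date": "2024-05-01", "sessions": 30, "spend": 12.99},
    {"id": "u2", "install_date": "2024-05-01", "sessions": 2,  "spend": 0.00},
    {"id": "u3", "install_date": "2024-05-02", "sessions": 25, "spend": 0.00},
]

# Cohort: a shared time-based trait (installed on the same day).
may_1_cohort = [p["id"] for p in players if p["install_date"] == "2024-05-01"]

# Segment: a shared behavioral trait (highly engaged but never monetized).
engaged_non_spenders = [p["id"] for p in players if p["sessions"] >= 20 and p["spend"] == 0]
```

The cohort groups u1 and u2 together despite completely different behavior, while the segment picks out u3 from a different install date entirely. That is the practical gap between the two concepts.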

Why many features fail to move the numbers

One of the most practical lines in the transcript is: “Most features fail.”

That does not mean teams should stop shipping features. It means they should be realistic about impact. Many new features and content drops feel important internally but do not meaningfully improve retention or monetization. Owens notes that it is often difficult to improve core KPIs just by adding a feature.

The right response is not cynicism. It is better measurement. If a team wants to know whether content or a feature changed the business, the best test is usually to compare players who saw it against those who did not and evaluate the downstream effect over time. And teams need to be prepared for a very common result: no meaningful difference.
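One standard way to run that exposed-versus-unexposed comparison is a two-proportion z-test on a downstream metric such as day 7 retention. The group sizes and counts here are hypothetical, chosen to illustrate the common "no meaningful difference" outcome:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two retention rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical D7 returners: 410 of 2,000 players who saw the feature
# vs 395 of 2,000 who did not.
z = two_proportion_z(410, 2000, 395, 2000)
significant = abs(z) > 1.96  # ~95% two-sided threshold
```

Here a 0.75-point lift in D7 retention produces a z of roughly 0.6, well inside normal noise. Shipping a dashboard screenshot of the raw lift without this check is how teams convince themselves that a feature "worked" when it did not.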

What mobile game teams should actually focus on day to day

Once a mobile game is healthy post-launch, the team’s attention usually shifts toward LiveOps. That means the daily questions become more operational:

  • When events should run
  • How difficult they should be
  • What rewards are compelling enough
  • Whether leaderboards, guilds, or competition are helping
  • How event cadence changes player return behavior

Live games often succeed or fail not because the core loop is broken, but because the long-term engagement structure is too weak, too repetitive, or too overwhelming. The job of analytics here is not just to report outcomes. It is to help the team decide what to test, what to repeat, and what to change.

How to interpret ARPDAU swings

Another practical issue raised in the discussion is ARPDAU volatility. Daily ARPDAU can move around for many reasons, especially when sample sizes are small. Because it is an average, some fluctuation is normal.

The important distinction is between expected noise and behavior-driven change. Owens recommends using standard statistical tools such as variance and confidence intervals to tell the difference. In other words, not every ARPDAU movement deserves a product reaction. Teams need to know when something is actually outside the expected range.
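A lightweight version of that check is to build an expected band from recent history and flag only the readings that fall outside it. The daily values below are invented, and the normal-distribution assumption behind the 1.96 multiplier is a simplification:

```python
import statistics

# Hypothetical trailing week of daily ARPDAU readings.
history = [0.121, 0.118, 0.125, 0.119, 0.122, 0.117, 0.124]

mean = statistics.mean(history)
sd = statistics.stdev(history)
lower, upper = mean - 1.96 * sd, mean + 1.96 * sd  # ~95% band, normal assumption

def outside_expected_range(today):
    """True when a new daily reading falls outside the historical band."""
    return not (lower <= today <= upper)
```

With this band, a day at 0.120 is noise and deserves no reaction, while a day at 0.140 is genuinely outside the expected range and worth investigating. The discipline this enforces is the point: most daily wiggles are not signals.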

Strong D1 but weak D7: what it usually means

One common pattern after launch is good day 1 retention but disappointing day 7 retention. Owens advises first checking whether day 7 is truly weak. As a rough directional benchmark, he mentions a 40/20/10 pattern for day 1, day 7, and day 30.

If day 7 is genuinely underperforming, the issue is usually not that players fail to start. It is that they do not have enough reason to return consistently. That is where return mechanics matter: push notifications, daily login rewards, habit loops, and lightweight recurring incentives. A strong LiveOps game needs reasons to come back before a player has fully formed a habit.
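Checking whether day 7 is "truly weak" can be as simple as comparing observed retention against that directional benchmark. The observed numbers here are hypothetical:

```python
# Rough directional benchmark from the discussion: D1/D7/D30 at 40/20/10 percent.
benchmark = {1: 0.40, 7: 0.20, 30: 0.10}

# Hypothetical observed retention for a recent cohort.
observed = {1: 0.42, 7: 0.13, 30: 0.06}

gap = {day: round(observed[day] - benchmark[day], 2) for day in benchmark}
weak_days = sorted(day for day, g in gap.items() if g < 0)
```

In this example D1 is slightly above benchmark while D7 and D30 lag, which matches the pattern described above: players start fine but lack reasons to return, so the fix lies in return mechanics rather than onboarding.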

Final takeaway

The clearest message from the conversation is that post-launch analytics should simplify decision-making, not complicate it. Track the minimum viable event set. Revisit the FTUE constantly. Use retention and ARPDAU as primary operating KPIs. Distinguish between cohorts and behavioral segments. Be realistic about feature impact. And remember that a live game has to serve both new users and long-term players at the same time.

Owens summarizes the priority well: “Really focus on retention and really just by making the best game experience possible.”

That is the heart of effective post-launch analytics. Not just measuring the game, but making it better.

FAQ

What should mobile games track after launch?

At minimum, teams should track session begin, level complete, tutorial complete, purchase, and ad view.

What are the most important KPIs after launch?

For game teams, retention and ARPDAU are the two most important KPIs to monitor regularly.

Why is retention so important in free-to-play games?

Because monetization depends on players continuing to play. If players do not return, long-term value drops.

What is the difference between a cohort and a segment?

A cohort is usually time-based or acquisition-based. A segment is usually behavior-based.

Why do some features show no impact in the data?

Because many features do not materially change the KPIs teams care about. Their effect may be smaller than expected or not statistically meaningful.