ROAS often looks strong at low spend and then drops when teams scale user acquisition. This guide explains why that happens, how to diagnose whether the issue is in the funnel, the product, the audience, or the tech stack, and what teams should do next.

Scaling paid growth is one of the clearest moments when theory meets reality. At low spend, user acquisition often looks clean. The economics work. The audience responds. ROAS looks healthy. Then spend increases, and suddenly performance drops. Teams that felt confident at one level of investment find themselves asking a very different question: what actually broke?

The answer is usually not a single metric.

As Sven Jürgens says in the second episode of Ask an Analyst, “One metric usually is not telling the whole truth.” That is especially true for ROAS. A collapse in return on ad spend can be caused by changes in audience quality, creative performance, onboarding, conversion, monetization, pricing, or even technical issues that have nothing to do with ads at all. Watch the full episode for the complete conversation.

That is why diagnosing ROAS decline requires a full-funnel view.

What ROAS collapse really means

ROAS is revenue divided by spend. On paper, that sounds straightforward. In practice, it compresses many different systems into one number.

Sven breaks that down simply: revenue is ultimately a function of quantity and price. If spend rises and revenue fails to keep up, then something inside the system stopped scaling the way the team expected.
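That decomposition can be made concrete with a small sketch. All numbers here are invented for illustration; the point is only to show how ROAS compresses when spend scales faster than revenue:

```python
# Illustrative only: decompose ROAS into the quantities underneath it.
# Every number below is made up for the example.

def roas(revenue: float, spend: float) -> float:
    """Return on ad spend: revenue divided by spend."""
    return revenue / spend

# Revenue as quantity x price: paying users times average revenue per payer.
paying_users = 400
avg_revenue_per_payer = 12.50
spend = 4_000.0

revenue = paying_users * avg_revenue_per_payer  # 5000.0
print(round(roas(revenue, spend), 2))  # 1.25

# If spend doubles but the broader audience converts at a lower rate,
# revenue fails to keep up and ROAS compresses:
spend_scaled = 8_000.0
paying_users_scaled = 600  # not 800: weaker audience quality at scale
revenue_scaled = paying_users_scaled * avg_revenue_per_payer  # 7500.0
print(round(roas(revenue_scaled, spend_scaled), 2))  # 0.94
```

The single blended number hides which factor moved: quantity, price, or spend efficiency. That is exactly why the next step is to unpack it.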

That failure could happen at the top of the funnel. Maybe CPIs rise. Maybe ads fatigue. Maybe frequency climbs. Maybe the campaign is reaching colder users. But it could also happen further down. Maybe onboarding is weaker for broader audiences. Maybe conversion falls. Maybe monetization drops. Maybe a pricing test or store experiment is underperforming.

That is why the first diagnostic step is not to stare at ROAS harder. It is to unpack what sits underneath it.

Start with the full funnel, not a single metric

The clearest recurring theme in the discussion is that teams need to stop treating ROAS as a self-contained answer.

Sven’s advice is direct: “Look at the full funnel.”

That means working from impression to install, install to activation, activation to retention, and retention to monetization. If you only look at ROAS, you may miss the part of the system that actually changed.

For example:

  • if ad frequency rises, creative performance may be weakening
  • if installs still look healthy but retention falls, the issue may be onboarding or player quality
  • if retention holds but monetization drops, the issue may be in pricing, paywalls, or product experience
  • if the drop happens suddenly, the problem may not be behavioral at all

A healthy diagnostic process follows the path the player follows. Anything less risks solving the wrong problem.
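One way to operationalize "follow the path the player follows" is to compare stage-to-stage conversion rates against a baseline period and flag the stage that moved most. The stage names, counts, and 20% flag threshold below are assumptions for illustration, not figures from the episode:

```python
# Hypothetical funnel counts: the goal is to locate WHERE the funnel
# changed, rather than staring at the blended ROAS number.
BASELINE = {"impression": 1_000_000, "install": 20_000, "activation": 12_000,
            "d7_retained": 4_000, "payer": 800}
CURRENT  = {"impression": 1_000_000, "install": 21_000, "activation": 8_400,
            "d7_retained": 2_500, "payer": 480}

STAGES = ["impression", "install", "activation", "d7_retained", "payer"]

def stage_rates(funnel: dict) -> dict:
    """Conversion rate from each stage to the next one."""
    return {f"{a}->{b}": funnel[b] / funnel[a]
            for a, b in zip(STAGES, STAGES[1:])}

base, cur = stage_rates(BASELINE), stage_rates(CURRENT)
for step in base:
    delta = cur[step] / base[step] - 1
    flag = "  <-- investigate here" if abs(delta) > 0.20 else ""
    print(f"{step}: {base[step]:.3f} -> {cur[step]:.3f} ({delta:+.0%}){flag}")
```

In this made-up example installs hold up but install-to-activation collapses, which points toward onboarding rather than media buying, matching the second bullet above.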

Why scale is never linear

One of the most important ideas in the conversation is that teams often assume paid growth should scale linearly.

At low spend, that can feel true. The first tests perform well, and the natural instinct is to believe that spending ten times more should roughly return ten times more.

Sven is blunt on this point: “It’s never the case.”

The reason is audience quality. When spend is low, acquisition systems can often find the most obvious high-intent users. These players already understand the genre, the category, or the need. They are the easiest audience to reach and the easiest to convert.

As spend rises, the acquisition system moves outward:

  • from hot users to warm users
  • from warm users to colder users
  • from highly educated users to less familiar ones

That shift changes everything. A game or app that works immediately for a core audience may require more education, more context, or more tailored messaging for broader groups.

In other words, scaling is not just about buying more reach. It is about earning response from less obvious users.

Audience dilution is often the real problem

A large share of post-scale ROAS decline comes down to audience dilution. Sven describes it as the process of moving away from “your very core audience.” Once that happens, conversion quality often changes. Retention may weaken. Monetization may soften. Higher spend begins buying users who need more help understanding the product.

This is why broader scale often demands changes outside media buying alone. It may require:

  • more educational creatives
  • clearer onboarding
  • better expectation-setting
  • different audience angles
  • more contextual messaging

A product that converts easily with expert users may not convert the same way with adjacent audiences. That does not mean growth is impossible. It means the product-market message needs to evolve with the audience.

If ROAS collapses fast, check tech first

Not every ROAS drop is a marketing problem.

One of the strongest practical points in the discussion is that speed matters. When ROAS falls very quickly, in hours, not days or weeks, Sven sees that as a warning sign that something technical may be broken.

Sven's view is clear: “The faster you see something is usually for me an alert that maybe even a tech issue is happening here.”

That could mean:

  • server instability
  • broken event tracking
  • API reporting issues
  • onboarding blockers
  • backend systems failing under scale
  • a deployment problem affecting conversion or play

By contrast, audience dilution and frequency fatigue tend to emerge more gradually. They usually show up over time as the campaign burns through higher-intent users and starts reaching colder ones. So one of the simplest diagnostic questions is also one of the most valuable: Did this drop happen in hours, or did it happen over days and weeks? That timing alone can narrow the search dramatically.
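The timing question above can be written down as a deliberately simple triage heuristic. The thresholds are illustrative assumptions, not rules from the episode; the direction of the logic (hours-scale means suspect tech first) is Sven's point:

```python
# A simple triage heuristic based on the timing signal discussed above.
# Threshold values are illustrative assumptions.

def triage_roas_drop(drop_window_hours: float) -> str:
    """Map how fast ROAS fell to the most likely class of cause."""
    if drop_window_hours <= 24:
        # Hours-scale collapse: suspect breakage before behavior.
        return ("check tech first: servers, event tracking, "
                "reporting APIs, recent deployments")
    if drop_window_hours <= 24 * 7:
        # Days-scale: could be either; check both in parallel.
        return "check creative fatigue and frequency alongside tech"
    # Weeks-scale erosion: usually the audience, not the pipes.
    return "suspect audience dilution, saturation, or weaker-fit users"

print(triage_roas_drop(6))        # hours: tech check
print(triage_roas_drop(24 * 21))  # weeks: dilution
```

A heuristic like this is not a diagnosis. It is a way to decide which team looks first.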

Early numbers are often inflated

Another major trap in scale diagnostics is overconfidence in early performance. Sven points out that early metrics often look better because the first audience is frequently made up of highly engaged community members: people from Discord, Reddit, dev diaries, or pre-existing fan groups who already understand and care about the product.

These users are not representative of the broader market. They are more knowledgeable, more invested, and often more willing to spend.

As he puts it, “Early numbers are always inflated, are always looking better.”

That matters because teams often anchor on those first strong signals and assume broader performance should look similar. When it does not, they interpret the drop as failure rather than normalization. Sven suggests that early retention and conversion figures may need to be discounted by something like 20–30%, depending on community size and context. The exact number is not universal, but the logic is sound: community-first users are not the same as scaled paid users.
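The discount logic is simple arithmetic. The 20-30% range is Sven's suggestion from the episode; the retention figure below is invented for the example:

```python
# Sketch of the 20-30% discount Sven suggests applying to early,
# community-heavy cohorts. The cohort retention number is invented.

early_d7_retention = 0.42   # measured on community-first users
discount_low, discount_high = 0.20, 0.30

expected_scaled_low = early_d7_retention * (1 - discount_high)   # pessimistic
expected_scaled_high = early_d7_retention * (1 - discount_low)   # optimistic

print(f"Plan for scaled D7 retention of roughly "
      f"{expected_scaled_low:.2f}-{expected_scaled_high:.2f}, "
      f"not {early_d7_retention:.2f}")
```

Planning against the discounted range, rather than the raw early number, reframes a later drop as normalization instead of failure.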

The Rubik’s cube approach to scaling audience

One of the more useful metaphors in the discussion is Sven’s “Rubik’s cube” framework.

The idea is simple: as you scale, the product may stay the same, but the angle through which you present it needs to change.

In his words: “It is still one product. It is still one onboarding. It is still one game. But the lenses you look through and the angles you have in your ad and communication are always a bit different.”

That means scaling often requires teams to change how they speak to the audience, not just how much they spend.

Examples might include:

  • shifting from feature-first messaging to educational messaging
  • moving from expert-user assumptions to beginner framing
  • speaking to adjacent motivations rather than the core one
  • adapting the same product for different geographies or cultures
  • finding emotional angles that matter to new audience segments

The point is not to invent a different product. It is to present the same product through different entry points that make sense to broader audiences.

Research is not optional

One of the most practical themes in the episode is that research does not become less important as companies scale. It becomes more important.

Sven repeatedly comes back to research as the starting point for diagnosing broken performance: why the store is not converting, why the ads are not converting, why the paywall is not converting, why players are not progressing.

That research can take many forms:

  • funnel analysis
  • audience research
  • creative analysis
  • review mining
  • competitor ad observation
  • community feedback
  • direct player interviews

He also makes an “old school” point that still holds: “Talk to people.”

Data can show that something is happening. It often cannot fully explain why people feel the way they do. Interviews, observation, and qualitative feedback are still essential when teams are trying to understand the motivations underneath player behavior.

Do not scale geographies by copy-pasting

A related mistake is assuming that what works in one market should work the same way elsewhere. Sven argues that teams often underestimate how much culture, behavior, and expectations differ between countries. A creative that performs in one region may underperform in another, even if translated correctly.

That is why he recommends a methodical, step-by-step approach to scale. Rather than rolling out everywhere at once and hoping the system generalizes, teams should expand carefully, learn from each market, and adapt. This is especially relevant for mid-sized studios. Large companies may have the resources to globalize faster, but most teams benefit more from disciplined iteration than from immediate worldwide expansion.

Channel decay is normal, not a surprise

Another crucial takeaway is that channel decay is not a sign that something has gone uniquely wrong. It is a normal part of scaling paid growth.

Sven calls it “the brutal answer”: every channel will drop in performance at some point.

That is because:

  • audiences saturate
  • frequency increases
  • high-intent users get exhausted
  • creatives fatigue
  • broader reach brings weaker fit

The mistake many teams make is abandoning channels too quickly. Sven argues that big platforms like Meta and TikTok can still contain the right audience, but teams need to evolve the message, not just chase a new channel.

In other words, channel decay should trigger iteration before abandonment.

Creative scale now depends on velocity

The conversation also touches on a major change in performance marketing: creative production has accelerated dramatically. With AI and easier production workflows, more teams can generate more creatives than ever before. That changes the economics of testing. What mattered before still matters — hooks, messaging, audience fit — but now velocity plays a bigger role.

Sven notes that strong creatives are more obvious when they work, but also that win rates are low. A good benchmark, in his experience, may be around 10% of launched creatives performing meaningfully well. That means teams need systems, not isolated hero assets. One polished trailer is not enough if it fails. Sustainable UA requires a repeatable way to generate, test, and refine creative ideas at scale.
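The ~10% win-rate benchmark has a direct back-of-envelope implication for batch size. Treating each creative launch as an independent trial is a simplifying assumption, not a claim about any ad platform, but it shows why one hero asset is a fragile bet:

```python
# Back-of-envelope implication of a ~10% creative win rate (the benchmark
# mentioned above), using a simple binomial model with independent trials.
from math import comb

win_rate = 0.10

def p_at_least_k_winners(n_creatives: int, k: int, p: float = win_rate) -> float:
    """P(at least k of n creatives perform well) under independent trials."""
    return sum(comb(n_creatives, i) * p**i * (1 - p)**(n_creatives - i)
               for i in range(k, n_creatives + 1))

# One "hero asset" has only a 10% chance of working; a batch of 20
# is very likely to contain at least one winner.
print(round(p_at_least_k_winners(1, 1), 2))   # 0.1
print(round(p_at_least_k_winners(20, 1), 2))  # 0.88
```

Real creatives are not independent trials, so the real numbers will differ, but the shape of the argument holds: low win rates make volume and velocity structural requirements, not nice-to-haves.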

The biggest mistake: treating ROAS as the whole answer

When asked what teams should stop doing, Sven’s answer comes back to the root of the problem: do not rely on one metric alone. ROAS is useful, but it is incomplete. It can tell you that performance changed. It cannot tell you exactly why. That is why one of the most important lines in the discussion is also one of the simplest: “ROAS is a great one. It’s a super insightful value. But it’s only telling you so much.”

A drop in ROAS could reflect:

  • bad users
  • weak onboarding
  • broken monetization
  • creative fatigue
  • frequency increase
  • store issues
  • tech failures
  • tracking issues
  • pricing changes
  • economic imbalance in the game

A team that only watches ROAS will always be late to understanding the real issue.

What teams should do when ROAS starts to fall

The most useful operating summary from the episode looks like this:

  1. Map the full funnel and identify where performance has changed.
  2. Distinguish between a fast drop and a gradual one.
  3. Pressure-test whether the issue is technical, product-related, audience-related, or channel-related.
  4. Research the player and audience deeply enough to understand what changed.
  5. Scale methodically rather than assuming linearity or copy-pasting success across markets.

The core principle is simple: diagnose before reacting. If you scale spend faster than you scale understanding, ROAS decline is almost inevitable.

Final takeaway

ROAS collapse is rarely a mystery. But it is often misdiagnosed. Teams tend to blame the number they can see first: CPI, spend, platform, or the campaign itself. In reality, the issue may be in the product, the player experience, the audience mix, the market, or the infrastructure supporting growth. The best response is not panic. It is disciplined analysis.

Sven’s recommendation is both simple and demanding: “Look at everything like your whole funnel.”

That is the right mindset for scaled growth. Because when performance changes, the answer is almost never hiding in one metric alone.

FAQ

Why does ROAS often collapse when scaling paid growth?

Because scale is rarely linear. As spend rises, campaigns often move beyond the highest-intent audience into broader and colder groups, which can weaken retention, conversion, and monetization.

What should I check first if ROAS drops quickly?

If the drop happens in hours, start by checking technical issues such as server problems, broken reporting, event tracking, or onboarding blockers.

What if ROAS declines slowly over time?

A slower decline usually points more toward audience dilution, creative fatigue, rising frequency, or weaker conversion from broader targeting.

Is ROAS enough to diagnose growth problems?

No. ROAS is useful, but it does not explain the whole system. Teams need to look across the full funnel, from ad performance to onboarding, retention, and monetization.

Why do early UA numbers often look better than later ones?

Because early users often come from highly engaged communities or core fans who already understand and like the product. They are usually not representative of the broader paid market.

What is a good way to scale after early ROAS success?

Scale step by step. Test new geographies carefully, adapt messaging for broader audiences, keep researching player motivations, and avoid assuming that what worked at low spend will work unchanged at higher spend.