We all believe we make rational decisions based on quantitative data. In reality, many biases shape our choices without our awareness. Want to know how data-driven you really are and find opportunities to improve? Here is a fun quiz we made just for you.
A/B testing has been an integral part of the marketer's toolbox for a good reason: it takes a great deal of the guesswork out of marketing. In online and mobile companies it has also become a popular tool for product managers. Every time a new version is released, why not A/B test it against the existing version to make sure nothing broke? In mobile app monetization, however, this tool is not readily available.
Why ad based app monetization is so hard to A/B test
The core requirement for A/B testing is the ability to split your users into two groups, give each group a different experience and measure the performance of each so you can compare them later. There are a number of tools that can facilitate the split for you, including Google Play's staged rollout. If you are measuring IAP monetization, it's easy enough to associate purchases with the users who made them and then sum the revenue in group A and group B. In ad monetization, however, it's impossible to associate ad revenue with individual users. Most ad partners simply don't report revenue at this level of granularity.
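For completeness, here is a minimal sketch of how such a split is often done in code (the salt and user IDs are made-up examples): hashing the user ID gives every user a stable group assignment without any server round trip.

```python
import hashlib

def ab_group(user_id: str, salt: str = "ads_test_1", b_share: float = 0.5) -> str:
    """Deterministically bucket a user into group 'A' or 'B'.
    The salt (a made-up experiment name here) keeps separate tests uncorrelated."""
    digest = hashlib.md5(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "B" if bucket < b_share else "A"

print(ab_group("user-123"))  # stable: the same user always lands in the same group
```

Because the assignment is a pure function of the user ID and the salt, the same user sees the same variant on every session, which is exactly what a fair A/B test needs.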
Method 1 – interval testing
One alternative that companies have been using is interval testing. In this method, the app publisher has one version of the app already published and rolls out a version with the new feature to all devices. To make sure all users receive the new version, publishers will normally use a force-update mechanism that gives the user no choice. The impact of the new feature is then measured by comparing results over two different time intervals. For example, week 1 might have run version 1 and week 2 version 2, so a publisher can compare the versions by comparing the results in the two date ranges.
- Very simple to implement – no engineering effort
- Highly inaccurate and subject to seasonality
- The force-update method has a negative impact on retention
Method 2 – using placements or different app keys
This is a pretty clever workaround for the problem. Most ad providers have a concept of placements. In some cases they are called zones or areas, but all three serve the same purpose: they let you identify different areas in your app where ads are shown, for reporting and optimization purposes. The way to use this for A/B testing is to create a zone A and a zone B, then report zone B for users who received the new feature and zone A for the control group. If you are already using the zones feature for its original purpose, you might already have zones 1 through 5, so you would create 1a, 1b, 2a, 2b and so on.
Of course, if you are using multiple ad networks you will need to repeat this setup for every ad network and, after the test period, aggregate the results to conclude your A/B test.
A variation of this method is to create a new app in your ad network's configuration screen. This gives you two app keys, so you can ship one app key to group A and the other to group B.
- More accurate compared to other methods
- The effort for implementing a single test is very high and requires engineering work
- Makes it hard to foster a culture of testing and being data driven
Method 3 – counting Impressions
This method requires some engineering effort to set up: every time an impression is served, the publisher reports an event to their own servers. In addition, the publisher sets up a daily routine that queries the reporting API of each ad network and extracts the eCPM per country. This information is then merged into the publisher's database so that, for every user, the impression count for every ad network is multiplied by that ad network's daily average eCPM in that country. The result is a (highly inaccurate) estimate of that user's ad revenue for that day. Once you have this system in place, you can implement A/B tests, split users into test groups and then compare the average revenue per user in each group.
- After the initial set up there is no engineering effort per test
- Setting this system up is complex and requires a big engineering effort
- Highly inaccurate: it uses average eCPM while eCPM variance is very high
- Can lead to wrong decisions
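The estimation step described in Method 3 can be sketched in a few lines. The network names, eCPMs and impression counts below are made up for illustration:

```python
# Sketch of Method 3: value each user's impressions at the network's
# daily average eCPM per country (a rough estimate, as noted above).
daily_ecpm = {                     # pulled daily from each ad network's reporting API
    ("network_x", "US"): 12.0,     # average eCPM, USD per 1,000 impressions
    ("network_y", "US"): 9.5,
}
user_impressions = [               # impression events the app reported to your servers
    {"user": "u1", "network": "network_x", "country": "US", "impressions": 6},
    {"user": "u1", "network": "network_y", "country": "US", "impressions": 3},
]

revenue_per_user = {}
for row in user_impressions:
    ecpm = daily_ecpm[(row["network"], row["country"])]
    value = row["impressions"] * ecpm / 1000
    revenue_per_user[row["user"]] = revenue_per_user.get(row["user"], 0.0) + value

print(revenue_per_user)  # every impression is valued at the average, hence the inaccuracy
```

The inaccuracy is baked into the last line: a $100 retargeted impression and a $1 remnant impression both get credited at the daily average.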
Method 4 – leveraging true eCPM
This method leverages multiple data sources to triangulate the eCPM of every single impression. It requires significant engineering effort or a third-party tool like SOOMLA TRACEBACK. Once the data is integrated into the company database, publishers can implement A/B tests and get the results directly in their own BI or through the third-party tool's dashboard. Implementing A/B tests becomes easy, and a testing and optimization culture can be established.
- The most accurate method
- Low effort for testing allows for establishing a testing culture
- Improvement in revenue can be in millions of dollars
- The 3rd party tool can be expensive but there is usually very quick ROI
In previous blog posts I shared 6 different LTV calculators and received a lot of feedback about the LTV models. It turns out game publishers found them super useful for calculating the LTV of their games. It was great to hear the positive feedback, which also led to a lot of conversations about how people calculate their LTV. Here are some of the learnings I can share.
A specific LTV model is always better than a generic one
Our LTV calculators can't be nearly as accurate as the ones you can build in-house. If you have the budget to hire a data scientist, or at least contract one to build a formula for you after you have gathered some data, you will end up with a more accurate model. The reason is simple: in predictive modeling, the more signals you have, the more accurate the model will be. All our calculators use retention and ARPDAU because they need to be widely applicable. However, there are many more signals you can feed a specific model: tutorial completion, level progress, soft-currency engagement, challenges completed and so on. Factoring in such signals would give you a better prediction model. Our generic calculators' main purpose is to get you started, give you a framework for thinking about LTV prediction and help you do some basic modeling if you are on a budget.
Simplified spreadsheet modeling
Our original spreadsheet model took in 31 data points. However, after talking with readers I learned that most of you only track 4 retention data points and 1 ARPDAU point. This is why I created a version that is simpler on the input side. Another piece of feedback I received is that you want more outputs: Day 60, Day 90, Day 180 and Day 365 LTV. Here is the new calculator based on all that feedback.
- Day1 retention
- Day7 retention
- Day14 retention
- Day30 retention
- Day60 LTV
- Day90 LTV
- Day180 LTV
- Day365 LTV
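For readers who prefer code over spreadsheets, here is a rough sketch of the same idea: fit a power-law curve to the 4 retention inputs and extrapolate the LTV outputs from ARPDAU. This is a simplifying assumption for illustration, not the exact formula behind the spreadsheet:

```python
import math

def fit_power_retention(points):
    """Least-squares fit of retention(d) = a * d**-b in log-log space.
    points: {day: retention}. A common simplification; in-house models may differ."""
    xs = [math.log(d) for d in points]
    ys = [math.log(r) for r in points.values()]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), -slope  # a, b

def ltv(arpdau, a, b, horizon):
    """LTV(horizon) ~= ARPDAU * (day 0 + sum of predicted retention over days 1..horizon)."""
    return arpdau * (1 + sum(a * d ** -b for d in range(1, horizon + 1)))

retention = {1: 0.40, 7: 0.20, 14: 0.15, 30: 0.10}  # the 4 spreadsheet inputs
a, b = fit_power_retention(retention)
for day in (60, 90, 180, 365):                      # the 4 spreadsheet outputs
    print(f"Day {day} LTV: ${ltv(0.05, a, b, day):.2f}")  # 0.05 = assumed ARPDAU
```

Swap in your own retention and ARPDAU numbers; the power-law shape is the main assumption to validate against your data.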
This spreadsheet is the same one from the retention modeling we presented in this post but with a few tweaks.
If you want to measure the ads LTV in addition to IAP LTV you should check out SOOMLA Traceback – Ad LTV as a Service.
I recently came across a fantastic post by Jeff Gurian. For those of you who don't know Jeff, he is the Director of Marketing at Kongregate. In his post he brings up a super important point: you can double your traffic by tracing the ad LTV, or "counting the ads" in the language of the article.
Doubling your traffic only takes a 25% increase in LTV
According to Kongregate’s experience with user acquisition, Jeff explains, the correlation between how much traffic you can get and the bids you place is not linear but rather a power function. “There is always a tipping point where your traffic will increase exponentially relative to the increase in your bid.” says Jeff.
The chart in the post does a good job in explaining this point:
In this example, acquiring traffic with bids of $12.5 as opposed to $10 will get you twice the amount of traffic. In other words, a bid increase of 25% translates to a volume increase of 100%.
Tracing Ad LTV allows more room in your CPI bids
Not all games have ads, but the ones that have added in-game advertising see between 10% and 80% of their revenue coming from ads. 25% is a typical scenario in many games and is also close to the ratio reported by public companies such as Glu and Zynga. The example given in the article (see image below) shows that tracing ad LTV can shift your ARPU/LTV analysis by 25%-30%. As we know, a higher LTV means we can afford to pay a higher CPI, which leads to twice as much traffic per the explanation above.
Let SOOMLA do the work and get you the accurate Ad LTV
Many companies skip ad LTV since the process for calculating it is complicated, time consuming and in many cases not accurate enough. But none of the analysis above matters if you are miscounting your ad LTV: counting impressions can lead to significant errors in LTV calculations, which means your ROI analysis can be off and end up losing money for the company.
Fortunately, SOOMLA has developed a solution that automates the ad LTV calculation with much greater accuracy, so you can enjoy the benefits of Traceback and double your traffic without worrying about accuracy or extra development effort.
To save valuable resources and ensure you are getting the Ad LTV correct for every cohort you need a specialized system like SOOMLA TRACEBACK. The platform traces the ad revenue and sends it to your attribution partner or in-house BI.
Recently I became aware of game publishers that implemented an in-house solution for ad LTV tracing but were making a huge mistake in how they think about ad revenue. We all know that an impression-based ad LTV calculation has 2 main factors: the number of impressions a user saw and the average eCPM reported by the ad network.
If this is how your company calculates Ad LTV you should read the following examples carefully.
Example 1 – The Rewards Collector
- User played during the first month and never came back after.
- Watched 50 rewarded video ad impressions from Vungle – didn’t click or install any ads.
- Average eCPM for this month from Vungle $15
| Ad LTV Based on Impressions | The True Ad LTV | Error |
| --- | --- | --- |
| $0.75 (50 × $15 / 1,000) | ~$0.00 (no clicks or installs) | Large overestimate |
This type of error can lead UA teams to false-positive ROI calculations. The UA team thinks the ad spend on this user is ROI positive while it's actually a losing buy.
Example 2 – The Ad Whale
- User played 5 days during 2 weeks
- Watched 10 interstitial ads from AppNext, clicked on 2 and installed a Match-3 game and a Strategy game
- Average eCPM reported by AppNext for those days – $5
- CPI for that Match-3 game – $2, CPI for the Strategy game – $5
| Ad LTV Based on Impressions | The True Ad LTV | Error |
| --- | --- | --- |
| $0.05 (10 × $5 / 1,000) | $1.95 | ~39x underestimate |
Here the ROI calculation could be a false negative. The UA team will stop buying these types of users since their reported ad LTV is $0.05 while it's actually $1.95, and the buy was actually a good one.
Example 3 – The Retargeted User
- User played 10 days during 1 month
- Watched 20 video ads through Inneractive
- Average CPM reported by Inneractive for those days – $5
- This user was a whale in Game of War and was part of a retargeting campaign, so the CPM bids specific to that user were high: $80 x 4 ads, $90 x 2 ads, $100 x 8 ads, $110 x 2 ads, $120 x 4 ads
| Ad LTV Based on Impressions | The True Ad LTV | Error |
| --- | --- | --- |
| $0.10 (20 × $5 / 1,000) | $2.00 | 20x underestimate |
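The gap in example 3 is easy to reproduce from the numbers listed above:

```python
# Example 3: average-eCPM estimate vs. the true per-impression value.
impressions = 20
avg_cpm = 5.0  # average CPM reported for those days, USD

estimate = impressions * avg_cpm / 1000  # the impression-count method

# Actual CPM bids paid for each of the 20 ads shown to the retargeted whale
actual_cpms = [80] * 4 + [90] * 2 + [100] * 8 + [110] * 2 + [120] * 4
true_ad_ltv = sum(actual_cpms) / 1000

print(estimate, true_ad_ltv)  # 0.1 vs 2.0: a 20x underestimate
```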
If your company needs to calculate Ad LTV you should try to avoid these costly mistakes. Check out SOOMLA Traceback – Ad LTV as a Service.
Targeting lookalikes of your best users has been the easiest and most effective way to spend mobile ad budgets since Facebook first introduced the feature in 2013. Google and Twitter now offer similar features, and advertisers use them with similar levels of excitement.
What happens if your app is monetizing with ads and not IAP?
Apps that monetize mostly with advertising have a much harder job when trying to acquire new users. With ads, it's really hard to figure out who the best users of your app are:
- The users with the most sessions?
- The users who watched the most ads?
- The users who performed social actions?
- Users who triggered some other in-app event?
Ideally you would want to build a group of the users who generated the most advertising revenue in your app and get more users like them.
What are Ad Whales and how to find them?
2% of your users install other apps after viewing ads in your app; these users contribute more than 90% of your ad revenue and can be referred to as "Ad Whales". This group highly resembles the users who make purchases in your app: a small group that contributes most of the revenue.
Understanding who your ad whales are could be very useful if you want to spend your advertising budget smartly. You could learn more about the demographics and interests of these users and find more users who share similar characteristics. Better yet – you can let the lookalikes algorithm do this job for you and simply sit back and see your user acquisition campaigns target only users who are similar to the Ad Whales you found.
Tracing your ad revenue is critical for discovering Ad Whales
Unlike in-app purchases, ad revenue events are not generated inside your app, so finding the Ad Whales is almost impossible unless you have an ad traceback system in place. Traceback is a technology that traces ad revenue back to the user level. Once you have such a system in place, it's easy to see which users contribute the most ad revenue.
SOOMLA TRACEBACK is a platform for tracing ad revenue. It allows you to get granular data about each and every user and identify the users who contribute the most ad revenue.
You might have heard some industry experts talk about LTV (life time value) and how important it is. Here are 5 things even some of the experts don’t know about LTV.
1 – Life time value (LTV) is not just for marketing campaigns
You might have heard that you need to know your lifetime value to do marketing. This is correct, but there are actually more reasons. The first reason for calculating LTV is related to the early design phase: before you even start making the game, you should analyze the potential LTV based on benchmarks from similar games. This is important for fundraising as well as for choosing the right games to build. The second reason is even more important. LTV is the one KPI that wraps up both ARPDAU and retention, and it is highly correlated with long-term success. By actively tracking LTV your team will be focused on the right thing when making decisions about the game and its monetization techniques.
2 – There is no real life time value – only predicted life time value
Knowing the real LTV requires waiting a very long time; technically you would have to wait a lifetime. You can assume some maximal lifetime; in games, 180 days and 365 days are common values. These time frames are just too long for making meaningful decisions about marketing, product or monetization. Let's say you built a new feature and want to know whether to keep it: waiting 180 days for a decision is just impractical. So whenever someone talks about lifetime value, they mean the predicted lifetime value. That's the only parameter you can actually work with. To predict yours, you can use one of these 6 LTV calculators.
3 – You can succeed with low LTV but not with declining LTV
There are successful games with LTVs as high as $20 or as low as $0.3. You can succeed with low lifetime value and many games have – this is especially true if you are able to constantly increase it. However, you can’t succeed if your LTV is declining – it means that something is fundamentally broken with your game.
4 – Most companies have both CPI > LTV and CPI < LTV
LTV has to be greater than CPI! There are a ton of articles explaining that if you get the basic formula right you are golden. In fact, there was even a conference with that name (http://ltvgtcpi.com). In real life, however, you can't be golden in all segments, so the trick is more about finding your golden segments and expanding on them. If your app uses ads, you will need to trace ad LTV per segment using a traceback platform.
5 – In successful games most of the life time value is created after day 30
If you build a lifetime value spreadsheet and play around with the numbers, you will soon see that the first 30 days typically contribute between 25% and 50% of the total lifetime value. Plugging in the well-known ratios of 40%, 20% and 10% for D1, D7 and D30 retention shows that the yield in days 31 to 180 is about twice that of your first 30 days. This means you should invest time in giving your most loyal users reasons to play for a really long time. King has mastered that art well: Candy Crush has 1,880 levels in the game, and I'm sure they are working on new ones as we speak.
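You can sanity-check this claim in a few lines of code, assuming a power-law retention curve (a common simplification, not the only possible model):

```python
import math

# Power-law retention r(d) = a * d**-b anchored at the 40% / 20% / 10%
# D1 / D7 / D30 ratios mentioned above.
a = 0.40                                   # day-1 retention
b = math.log(0.40 / 0.10) / math.log(30)   # solves 0.40 * 30**-b = 0.10

yield_first_30 = sum(a * d ** -b for d in range(1, 31))
yield_31_to_180 = sum(a * d ** -b for d in range(31, 181))
ratio = yield_31_to_180 / yield_first_30
print(ratio)  # roughly 2: days 31-180 yield about twice the first 30 days
```

The exact ratio depends on the curve you fit, but the qualitative point holds: most of the lifetime value sits beyond day 30.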
If your game uses ads and you want to track the LTV per cohort, segment and testing groups, you need a traceback platform. Check out SOOMLA Traceback – Ad LTV as a Service.
This post is about the mistakes mobile app publishers make when measuring their ad-based monetization. Whether your company uses general-purpose analytics, attribution, the mediation dashboard or in-house BI to track your advertising revenue, you are probably making at least one of these mistakes.
1 – Week by Week Testing instead of A/B testing
From what I have seen so far, this one is a fail for 100% of the mobile app publishers I have talked with. Let's say you want to test a new feature that increases the number of allowed rewarded videos from 3 to 5. There is a right way and a wrong way to do it. An A/B split is pretty easy to implement on Google Play thanks to the staged rollout feature, and on iOS it's not that hard either. However, when it comes to ad revenue, companies use week-by-week testing. In other words, they implement something and compare this week's ad revenue vs. last week's. Here are a few reasons why this is wrong:
- There could be campaign changes between week 1 and week 2. Campaigns go up and down on the ad-network side all the time; if week 2 was better due to a big campaign, you might attribute the lift to the changes you made. A/B tests eliminate that
- Your user behavior and usage volume might be impacted by real-world events like a holiday weekend or a big sporting event. With A/B tests these events impact both groups, so it's a fair test
- With week-by-week testing you have to go "all in", and you don't even know if the revenue change came from the group that received the change
- It’s almost impossible to reach statistical significance with week by week testing
The reason why companies don’t implement A/B testing for ad-revenue is that doing so without a specialized ad revenue tracking solution is very complex. However, optimizing with week by week testing is very limited.
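To illustrate what a proper A/B test buys you, here is a minimal significance check on per-user ad revenue. The revenue numbers are simulated and made up; the normal approximation is reasonable for the large user counts typical of ad-funded apps:

```python
import math
import random

def ab_revenue_significance(rev_a, rev_b):
    """Two-sample z-test on average revenue per user (normal approximation)."""
    n_a, n_b = len(rev_a), len(rev_b)
    mean_a, mean_b = sum(rev_a) / n_a, sum(rev_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in rev_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in rev_b) / (n_b - 1)
    z = (mean_b - mean_a) / math.sqrt(var_a / n_a + var_b / n_b)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p-value
    return mean_a, mean_b, z, p

# Simulated per-user daily ad revenue: group B gets the new feature
# and earns ~10% more on average (all numbers are hypothetical).
random.seed(7)
group_a = [random.expovariate(1 / 0.050) for _ in range(20000)]
group_b = [random.expovariate(1 / 0.055) for _ in range(20000)]
result = ab_revenue_significance(group_a, group_b)
print(result)
```

With tens of thousands of users per group, even a 10% lift becomes clearly significant, which is exactly the resolution week-by-week testing can't give you.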
2 – Assuming all users are worth the same
Most mobile app publishers assign a very specific value to each user when it comes to IAP revenue but fail to do the same for ad revenue. The typical approach is to assume all users are worth the same amount of revenue. This is in fact very far from reality. First of all, not all users even see ads when it comes to rewarded videos. And even within the group that does see ads, some users install a few apps and are worth more than $10, while others only watch the videos and end up generating almost no revenue.
3 – Not measuring your eCPM decay
"The 1st impression of a user is worth the same amount of money as the 10th impression" – FALSE. The performance of the 1st impression is higher, so the CPMs advertisers pay in RTB are higher, and the eCPMs you get from the rewarded video network are also higher for the first impression, for the very same reason. As the same user sees more and more impressions in the same day, they become blind to the ads and the CPM decays. Assuming all impressions are worth the same amount of money is a common mistake among mobile app companies.
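A quick illustration of why this matters, with made-up decay numbers:

```python
# Illustrative sketch: assume a user's eCPM decays geometrically with
# impression depth (both the $20 eCPM and the 0.7 factor are hypothetical).
first_ecpm = 20.0  # eCPM of the user's first daily impression, USD
decay = 0.7        # assumed per-impression decay factor

values = [first_ecpm * decay ** i / 1000 for i in range(10)]  # value of each impression
avg = sum(values) / len(values)
ratio = values[0] / avg
print(ratio)  # the 1st impression is worth about 3x the day's average impression
```

Valuing every impression at the daily average therefore understates early impressions and overstates late ones, which distorts any per-user analysis built on top.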
4 – Focusing on impressions rather than Opt-in ratio
Rewarded video has become one of the biggest sources of advertising revenue for mobile app companies. However, it's important to understand that this is an opt-in type of interaction. In some games only 10% of users choose to see the ads, while in others it can be as high as 70%. Since the 1st impression pays a lot more than subsequent impressions, focusing on increasing the number of impressions is a mistake. Companies should focus on increasing the opt-in ratio instead.
5 – Not tracking churn by campaign creative
The last mistake is related to the relationship between ads and churn. There are two types of ad interaction that can cause your users to churn:
- Ads that create a negative experience: deceptive ads or low-quality creatives
- Ads of competing apps might steer your users away from your app
Not tracking the impact of different ad creatives placed by the ad-networks in your app could be dangerous.
If you want to improve the way you are measuring your ad revenues and stop making these 5 mistakes – check out SOOMLA Traceback – Ad revenue tracking platform.
If you want to know your LTV and are using a free analytics platform, you might find our online LTV calculators interesting. You can also see our guides for calculating LTV with Flurry and with Google Analytics.
However, another approach is to upgrade to a paid analytics platform that offers LTV reporting out of the box. Unfortunately, I couldn't find this feature in any of the free analytics platforms, so I guess the only way is to pay the premium. Below you can find 7 tools that offer this option, along with the following details about each one:
- Depth of LTV reporting they offer:
- Historic LTV – a report that summarizes the amount of revenue per install to date. If you wait 180 days, you can read the Day 180 LTV directly from it
- LTV prediction report – an algorithmic calculation that predicts the LTV early in the user lifetime, based on a formula such as this one
- Guide for LTV prediction – Some providers offer a resource for using their reports to calculate LTV
- Platform and engine support – Mobile operating system as well as app building tools and game engines
- Popularity – based on number of apps that use the platform
- Price for 1M MAU, based on the pricing presented on the provider's website
| Vendor | LTV Reporting | Platforms and Engines | Popularity | Price (1M MAU) |
| --- | --- | --- | --- | --- |
|  | Historic LTV, Guides for Prediction | iOS, Android, Windows, Unity, PhoneGap | Mid | Undisclosed |
|  | Historic LTV | iOS, Android, Unity | High | $1,800 |
|  | Historic LTV, LTV Prediction Report | iOS, Android, Unity, GameMaker | Low | $15,000 |
|  | Historic LTV, LTV Forecast Report | iOS, Android, Windows, Unity, UE4, Adobe, PhoneGap | Low | $2,500 |
|  | Historic LTV | iOS, Android, Windows, Unity, PhoneGap | High | Undisclosed |
|  | Historic LTV, LTV Prediction Report | iOS, Android | Mid | $5,000 |
|  | Historic and Predictive LTV | iOS, Android, Unity | Low | Undisclosed |
An honorable mention goes to Upsight. The company offers a very flexible solution and is trusted by some of the industry leaders. Before they merged and rebranded, their Kontagent platform did have LTV prediction, and while the current platform doesn't support this feature, I'm sure it will be added back in the future.
If you want to also analyze and predict the LTV for your advertising revenue – now there is a solution. Check out SOOMLA Traceback – Ad LTV as a Service.
In-app ads are getting more and more popular these days. The increase in CPI levels, alongside the penetration of brands, has made it possible to build a successful ad-supported app company, and many are doing so. Given this trend, it's becoming increasingly important to understand who is advertising in your app, but that's not an easy task today. Most likely, an app developer will be using ad networks to place ads in their apps, and this setup doesn't allow them to get reports on the identity of the advertisers.
Verify that you are not helping your direct competitors
One reason to track the ads in your app and reveal the advertisers' identity is to make sure you are not helping your direct competitors. You might be able to check this manually by opening your app and using it yourself until ads are shown, but keep in mind that the ads you see in one country are different from the ads shown in another, and sometimes campaigns even change by time of day. To do this right you would need a 24/7 operation in 249 countries and territories, which is quite impossible to do manually. Luckily, there are several tools for this. On mobile web, Adclarity and GeoEdge provide this service, and for mobile apps you can check SOOMLA Traceback.
Monitor ad integrity
Another reason to know what ads are running is to enforce any policies you have in place regarding the ads being shown. In ad-supported apps, ads are part of the overall experience, and inappropriate ads can damage your brand, lead to bad reviews and hurt your retention. Here are some of the ads you want to weed out:
- Inappropriate ads such as ads with nudity
- Ads with deceiving UX and false promises
- Offensive ads
Understand what’s driving the eCPM you are getting
Knowing what campaigns are being run by the ad networks lets you improve your monetization strategy and up your ad-operations game. Let's say you are using Vungle and Unity Ads, Vungle's eCPMs have been higher in the last few days, and they sit at the top of the waterfall. Tomorrow their biggest campaign might end and the eCPMs would drop. If you wait for the ad mediation to pick up on this change, you might get 2 days of low eCPMs until the waterfall configuration is changed by the mediation auto-pilot. Knowing about a campaign that just ended allows you to respond more quickly.
Get ideas for direct deals you can make
You might have heard that direct deals with advertisers can bring higher monetization levels by cutting out the middleman. Knowing who is currently advertising in your app can point you to the best advertisers for your app and give you the information those advertisers would ask about.
If you are using ad networks to monetize your app you should check out SOOMLA TRACEBACK. In addition to advertising revenue attribution you can also get information about the campaigns running in your app.