Not too long ago we released a new feature that we are super excited about. We believe it can be valuable for any app publisher showing ads in their apps and can help them improve monetization and eCPM rates if used correctly.
What is Micro Benchmarking
We all like to benchmark ourselves. When it comes to apps, publishers usually ask themselves:
- Since the beginning of the quarter, how does my overall eCPM compare with similar apps?
- Since our last version was released, how do my app’s ad engagement/adoption rates compare with similar apps?
- For all users in Brazil, is my banner eCPM comparable with other apps?
The one thing common to all of these questions is that they are very high level. If you have been using AdIntel, you already know it’s the best source of data for them. That, however, is not the topic of this post, and it is also not what Micro Benchmarking is.
> Benchmarking with similar apps is nearly impossible and actually less effective than Micro-Benchmarking.
>
> — Yaniv Nizan, CEO at SOOMLA
Micro benchmarking is different in 2 main ways:
- It’s granular to a specific combination of country, platform, ad network and ad type
- It’s focused on the % change or diff from one day to the next
So while high-level benchmarks usually serve as a motivation tool or as leverage to get more resources, Micro Benchmarking has a different purpose. Let’s see what you can do with it.
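To make the definition concrete, here is a minimal sketch of the day-over-day diff that Micro Benchmarking tracks for one combination. The field names and numbers are made up for illustration, not SOOMLA’s API:

```python
# Hypothetical daily revenue per (date, country, platform, network, ad_type)
rows = {
    ("2023-05-01", "US", "iOS", "UnityAds", "Interstitial"): 500.0,
    ("2023-05-02", "US", "iOS", "UnityAds", "Interstitial"): 375.0,
}

def daily_change(rows, combo, prev_day, day):
    """Day-over-day % change for one combination -- the diff described above."""
    prev = rows[(prev_day, *combo)]
    curr = rows[(day, *combo)]
    return (curr - prev) / prev

combo = ("US", "iOS", "UnityAds", "Interstitial")
print(f"{daily_change(rows, combo, '2023-05-01', '2023-05-02'):+.1%}")  # -25.0%
```

The same diff can be computed for impressions or eCPM; the point is that the metric is a ratio of the same app against itself one day earlier.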
Networks Under Pressure
A while ago, the CMO of a big publisher told me they had tried to buy inventory on their own apps to figure out what percentage of the advertiser’s ad budget actually ends up in the publisher’s hands. The CMO wasn’t able to get that data. However, we have enough data to say that in over 90% of the cases where one game app advertises in another game app, if the advertiser pays a $20 eCPM, the publisher will get between $10 and $15 eCPM.
So if you are a publisher getting a $10 eCPM, imagine what an improvement to $15 would do for your business.
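The arithmetic behind those numbers is simple. A quick illustration, where the 50–75% rev-share range is an assumption inferred from the figures above:

```python
advertiser_ecpm = 20.0

# If the publisher keeps 50-75% of what the advertiser pays,
# the publisher-side eCPM lands between $10 and $15
for share in (0.50, 0.75):
    print(f"{share:.0%} rev-share -> publisher eCPM ${advertiser_ecpm * share:.2f}")

# Moving from a $10 to a $15 eCPM is a 50% revenue lift at constant impressions
lift = 15.0 / 10.0 - 1
print(f"revenue lift: {lift:+.0%}")  # +50%
```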
The first thing to understand is that it is possible to impact your eCPM by applying pressure on your monetization partners. Some publishers have tried this and had sporadic success, but consistent success is rare. Reports of successful cases have had this in common:
- The app/account was big enough for the network to care – a good indication of that is having an account manager
- The partner had something to gain/lose so the account manager could get internal leverage for special terms or dedicated optimization resources
- The app publisher had access to benchmarks and was able to convince the account manager that the benchmarks are applicable
The combination of these 3 creates a scenario in which the account manager has the leverage to help you.
So #1 should be true for any SOOMLA customer. #2 was done in the past when companies negotiated first-look privileges. One way to achieve it now is to set an arbitrary limit on how many network SDKs you can have and explain that you are going to remove some. #3 is the reason we created Micro Benchmarking.
When you see a drop in revenue, do this
Every now and then you will see a drop in your app’s ad revenue. This should trigger an investigation to isolate the problem to a single combination of country, platform, ad network and ad type.
For example, the drop might be isolated to US, Android, UnityAds, Interstitial.
Now that you have isolated the problem, the next step is to switch the Benchmarking feature on. You should compare the drop to the benchmark drop across these 3 metrics: revenue, impressions and eCPM.
Let’s call out the following two scenarios:
- Unique drop – situation where a drop is isolated to your app
- Common drop – situation where a drop happens to many other developers
Common drops are less interesting; unique drops are what we are after. In the 3 charts above you can see that the unique drop in ad revenue resulted from a unique drop in impressions. If you are using price floors in your waterfall and that configuration didn’t change, the drop in impressions is likely something on the network side. Since it’s a unique drop, it could be something in the configuration of your account: maybe the account manager changed a parameter controlling the rev-share / risk margins, or maybe a big advertiser that was targeting your app now has you blocked. Either way, you should hold the network accountable and apply pressure by:
- Making sure the network understands that you know the drop is unique to you
- Making sure the network has something to lose – even removal of the SDK
The expected result is that the network will change something back to improve your revenue again.
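The decision process above can be sketched as a simple rule. The -10% threshold and the sample data below are hypothetical, chosen only to illustrate the unique-vs-common distinction:

```python
def classify_drop(app_change, benchmark_changes, threshold=-0.10):
    """Classify your day-over-day change for one combination.

    app_change:        your app's change, e.g. -0.25 for a 25% drop
    benchmark_changes: same-day changes of many other apps
    threshold:         hypothetical cutoff for a 'meaningful' drop
    """
    benchmark_avg = sum(benchmark_changes) / len(benchmark_changes)
    if app_change > threshold:
        return "no drop"
    if benchmark_avg <= threshold:
        return "common drop"  # many apps dropped too -- likely market-wide
    return "unique drop"      # isolated to you -- take it to the network

print(classify_drop(-0.25, [-0.01, 0.02, -0.03]))   # unique drop
print(classify_drop(-0.25, [-0.20, -0.30, -0.25]))  # common drop
```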
Not only when you see a drop
What if you could improve revenue even when there is no drop? That would be awesome, right? It is actually not very different from the process described above – all you have to do is apply constant pressure. If you look at your app every day with “benchmarking mode” turned on and scan various combinations of country+platform+network+adtype, you will see that almost daily, even if there is no drop in your total revenue, one of these situations happens:
- The benchmark had an increase in one of these combinations while your app didn’t have an increase
- Your app had a unique drop in one combination while another combination increased in revenue, so you don’t see a drop in the overall revenue
Each of these situations can trigger the same process described above, and doing this repeatedly will result in small increments in eCPM that accumulate into big gains.
Applicability is key
For the network to take action, they need to believe the benchmark is applicable to the situation. This is why Micro-Benchmarking is much better than regular benchmarks. With regular benchmarks, the network can easily claim the benchmark is not applicable, with excuses that genres are different and that each app gets unique eCPMs based on various algorithms. With Micro Benchmarking it’s a different situation – here is why:
- Micro-Benchmarking focuses on the drop ratio (the diff), so it is normalized by nature. Algorithms don’t change overnight, so big drops are the result of human action.
- It’s true that eCPMs may differ between apps, but when talking about a drop in revenue you are comparing one day to another day of the same app – so if revenue or eCPM was high one day, the only explanation for it dropping the next day is human intervention.
This table may clarify the difference:

| BENCHMARKING w/ SIMILAR APPS | MICRO-BENCHMARKING |
| --- | --- |
| Focuses on ratio metrics: eCPM, ARPDAU | Focuses on the daily change/diff/drop in revenue, impressions or eCPM |
| Almost impossible to get | Easy with the SOOMLA Dashboard |
| Not very applicable, as the network can always claim that your definition of ‘similar’ doesn’t apply | Highly applicable, since algorithms don’t change eCPMs or fill rates sharply; drops or big changes are typically caused by human action |
| Not effective for increasing eCPM and revenue | Effective for increasing revenue and eCPM by keeping networks honest, as explained in this post |
Still not convinced? That’s ok – this topic is quite complex. Consider the following example:
- Your eCPM for US-RV-iOS is $8, and that is below benchmark. If you call the network and say “you should pay me more”, they will tell you that your app is unique, so the fact that other apps get $20 is irrelevant.
- Your app is a match-3 app and you work hard to get a benchmark of similar apps – you discover that other match-3 apps get $12 on average. You call the network again and tell them “you should really pay me more”, but they reply that the benchmark still doesn’t apply since the other apps are bigger/smaller, older/younger, prettier/uglier, or their users are wealthier, etc.
- You start using Micro-Benchmarking by SOOMLA. One day you realize that your eCPM dropped from $8 to $6 overnight – a 25% drop. You check with your teammates and make sure nothing changed on your side. You also check DAU and other metrics and see that the change in eCPM is not correlated with other big KPI changes on your end. You turn on the Micro-Benchmarking feature in the SOOMLA dashboard and realize that, out of all the apps SOOMLA monitors, the average change that day was -1% while you had -25% – putting you in the 98th percentile, with only 2% of the apps losing more eCPM overnight. This time you call the network and say “you changed something and destroyed my eCPM”. Your eCPM was already low, so why did it become lower?
- Your app didn’t become bigger/smaller
- Your app didn’t become older/younger (well older by 1 day)
- Your app didn’t become prettier/uglier
- Your users didn’t become poorer/wealthier
Now the network doesn’t have anything to say – this has to be something on their end.
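The percentile claim in the walkthrough can be computed like this. The distribution of changes below is made up for illustration:

```python
def severity_percentile(app_change, benchmark_changes):
    """Fraction of monitored apps whose change was better than yours.

    0.98 means only 2% of apps dropped more than you did overnight.
    """
    better = sum(1 for c in benchmark_changes if c > app_change)
    return better / len(benchmark_changes)

# Hypothetical distribution: most apps near -1%, two apps worse than -25%
changes = [-0.30, -0.28] + [-0.01] * 98
print(f"{severity_percentile(-0.25, changes):.0%}")  # 98%
```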
> The app that is most similar to your app is your app yesterday. It’s the same app! So when focusing on drops you are using the most applicable benchmark.
>
> — Yaniv Nizan, CEO at SOOMLA
No need to normalize twice
It might seem at first that benchmarking revenue and impression counts is less applicable. That, however, is not true. Since Micro-Benchmarking deals with drops, it is already normalized. Dividing revenue and impressions by DAU would be an alternative way to normalize, but it would be subject to the same applicability problems mentioned above regarding app genres. Drops are simply a better way to normalize than DAU.
Think about the following examples:
Example 1 – 25% drop in revenue
Your app had a 25% drop in revenue for a single country/network/platform/adtype combination. This could be the result of a drop in ARPDAU or a drop in DAU:
- Drop in DAU – you should focus on your app and not look for benchmarks
- Drop in ARPDAU – here a 25% drop in revenue translates to a 25% drop in ARPDAU, so there is no difference and you can use the revenue benchmark
Example 2 – no drop in benchmark
Your app had a 25% drop in revenue for a single country/network/platform/adtype combination, and your DAU didn’t change much. You compare to the Micro-Benchmark and realize the average drop was 0%, so the drop is isolated to you. In theory, this can be explained in 2 ways:
- Other apps didn’t have any drop in revenue or ARPDAU, and you should focus your energy on asking your monetization partners for explanations
- All the apps had a drop in ARPDAU, but at the same time all the other apps had a 25% increase in DAU, so the revenue benchmark shows as if nothing changed while only your app had a drop
The 2nd scenario is highly unlikely. For all apps to increase DAU by 25% you would need a Super Bowl-level event; those events are rare and normally make the news headlines. Having that happen while your app’s DAU didn’t move is even more unlikely. So we are left with scenario #1, which means the revenue benchmark still applies.
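The revenue = DAU × ARPDAU reasoning behind both examples can be checked numerically. A small sketch with made-up numbers:

```python
def attribute_revenue_change(dau_before, dau_after, rev_before, rev_after):
    """Split a revenue change into its DAU and ARPDAU components.

    Since revenue = DAU * ARPDAU, the relative changes satisfy
    (1 + d_rev) = (1 + d_dau) * (1 + d_arpdau).
    """
    d_rev = rev_after / rev_before - 1
    d_dau = dau_after / dau_before - 1
    # ARPDAU ratio written as a single fraction to keep the arithmetic exact
    d_arpdau = (rev_after * dau_before) / (rev_before * dau_after) - 1
    return d_rev, d_dau, d_arpdau

# DAU flat, revenue down 25% -> the entire drop sits in ARPDAU (Example 1)
print(attribute_revenue_change(10_000, 10_000, 400.0, 300.0))
# (-0.25, 0.0, -0.25)
```

When DAU is flat, the revenue drop and the ARPDAU drop are the same number, which is why the revenue benchmark can stand in for an ARPDAU benchmark here.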
> We advise customers to use ARPDAU and impressions/day for most use cases, but when it comes to Micro-Benchmarking the rules are different.
>
> — Yaniv Nizan, CEO at SOOMLA
Try it yourself
If you are already a SOOMLA customer, you can try this method right away. Feel free to reach out to your customer success manager to walk you through the process. If you are not a customer yet, click the banner below to get a demo or sign up for one on our website.
See it in video
If you don’t like reading, we also recorded a video about this. Feel free to lean back as you learn how to improve eCPM.