5 Ways to A/B Test Your Mobile Ads – Tried and True Methods

Having a great mobile app and incorporating ads into it is not the end of the story. Now that you have gone the whole nine yards, you need to stay on top of your ad performance by constantly analyzing, testing, and optimizing for the best results. And by results, we mean not only the numbers but also the overall user experience. One of the best methods for doing this is A/B testing – changing one thing in an ad and observing the results. Also called split testing, it is used to discover which elements and features work best for your target audience.

Since ad partners don’t usually report the ad revenue per individual user, A/B testing ads for app monetization requires that you divide users into two groups and provide a different experience for each group. Then you measure the performance of both groups so you can compare them afterward, using tools such as Google Play’s staged rollouts.
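
As a minimal sketch of such a split, the snippet below assigns each user to group A or B deterministically by hashing a stable user ID. The user ID and experiment name are hypothetical placeholders, and the same idea works regardless of which ad SDK you use.

```python
import hashlib

def assign_group(user_id: str, experiment: str = "ad_experience_test") -> str:
    """Deterministically assign a user to group "A" or "B".

    Hashing the user ID together with an experiment name keeps the
    assignment stable across sessions and independent of other experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Group "A" keeps the current ad experience; group "B" gets the change under test.
print(assign_group("user-12345"))
```

Using a hash rather than a random draw means a user sees the same variant in every session without you having to store the assignment anywhere.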

Let’s take a look at five ways in which you can conduct A/B testing of your mobile ads.

1. Interval testing

The most widely used method of A/B testing, interval testing refers to an app publisher having one version of the app already published and then rolling out a version with the new feature to all devices as a forced update. The results from two different time intervals are then compared. For instance, week 1 would run version 1 and week 2 version 2, so the publisher can compare the two versions using the results from the two date ranges.
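
A rough sketch of the comparison step, assuming you can export daily ad revenue and daily active users from your reporting; the figures below are illustrative placeholders, not real data.

```python
from datetime import date

# Illustrative daily rows: (day, ad_revenue_usd, daily_active_users).
daily_report = [
    (date(2019, 3, 4), 1210.0, 41000),   # week 1: version 1 live
    (date(2019, 3, 5), 1185.0, 40200),
    (date(2019, 3, 11), 1320.0, 40800),  # week 2: version 2 rolled out
    (date(2019, 3, 12), 1340.0, 41500),
]

def arpdau(rows, start, end):
    """Average ad revenue per daily active user over [start, end]."""
    selected = [(rev, dau) for day, rev, dau in rows if start <= day <= end]
    return sum(rev for rev, _ in selected) / sum(dau for _, dau in selected)

week1 = arpdau(daily_report, date(2019, 3, 4), date(2019, 3, 10))
week2 = arpdau(daily_report, date(2019, 3, 11), date(2019, 3, 17))
print(f"version 1 ARPDAU: {week1:.4f}  version 2 ARPDAU: {week2:.4f}")
```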

Strengths: This method is rather simple to implement.

Weaknesses: Forced updates can negatively affect user retention. The results are also subject to inaccuracy and to seasonality effects.

2. Placements (zones, areas)

Many ad providers support the concept of placements, zones, or areas, which lets you label the different spots in the app where ads are shown for monitoring and optimization purposes. This concept can be used for A/B testing by creating a Zone A and a Zone B: Zone B is observed for users who were exposed to the new feature, while Zone A is monitored for the control group. If you’re using multiple ad networks, repeat this process for each of them and aggregate the results after the test period to conclude the A/B test. Alternatively, you can create a new app in your ad network’s configuration screen, which gives you two app keys; implement one app key in group A and the other in group B.
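
Here is a minimal sketch of the two-zone setup, assuming you have created two placements (or two app keys) in your ad network’s dashboard; the key values and function names are placeholders.

```python
import hashlib

# Placeholder keys created in the ad network's configuration screen:
# one placement (or app key) for the control group, one for the variant.
ZONE_KEYS = {
    "A": "zone-key-control-0000",   # control: current ad experience
    "B": "zone-key-variant-0000",   # variant: new feature under test
}

def zone_key_for(user_id: str, experiment: str = "placement_test") -> str:
    """Pick the placement/app key to request ads with, based on the user's group."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    group = "A" if int(digest, 16) % 2 == 0 else "B"
    return ZONE_KEYS[group]

# The ad SDK is then initialized or queried with this key, so each zone's
# numbers in the ad network dashboard map cleanly to one test group.
print(zone_key_for("user-12345"))
```

Because each group reports under its own zone, you can read per-zone metrics straight from each network’s dashboard and aggregate them across networks at the end of the test period.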

Strengths: A bit more accurate than other methods.

Weaknesses: Requires more engineering effort and does not lend itself to establishing a data-driven culture of testing.


3. Counting impressions

Every time an impression is served, the publisher reports an event to its own servers. In addition, the publisher runs a daily routine that queries the reporting API of each ad network and extracts the average eCPM per country. This information is consolidated in the publisher’s database, and for every user the impression count for each ad network is multiplied by that network’s daily average eCPM (revenue per 1,000 impressions) in the user’s country and divided by 1,000. This yields an estimate of that user’s ad revenue for one day. Once this system is in place, it is possible to implement A/B testing by dividing the users into groups and calculating the average revenue per user in each group.
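
A simplified sketch of that calculation, assuming you log per-user impression counts on your own servers and pull daily average eCPMs from each network’s reporting API; the network names, countries, and figures are illustrative placeholders.

```python
# Illustrative daily average eCPM (USD per 1,000 impressions) per (network, country),
# as pulled from each ad network's reporting API.
daily_ecpm = {
    ("network_x", "US"): 9.50,
    ("network_y", "US"): 7.20,
}

# Illustrative per-user impression counts reported to the publisher's own servers.
user_impressions = {
    "user-12345": {("network_x", "US"): 14, ("network_y", "US"): 6},
    "user-67890": {("network_x", "US"): 3,  ("network_y", "US"): 11},
}

def estimated_daily_revenue(user_id: str) -> float:
    """Estimate one user's ad revenue for the day: impressions * average eCPM / 1,000."""
    return sum(
        count * daily_ecpm.get(key, 0.0) / 1000.0
        for key, count in user_impressions[user_id].items()
    )

# With a revenue estimate per user, average it within group A and group B to compare.
for uid in user_impressions:
    print(uid, round(estimated_daily_revenue(uid), 4))
```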

Strengths: No engineering expertise is needed for testing, at least not after the initial setup.

Weaknesses: Setting up the system initially does require considerable engineering effort. Moreover, the results are somewhat inaccurate, as this method uses the average eCPM while the actual eCPM variance is substantial.

4. Leveraging true eCPM

Speaking of eCPM, a more accurate method is to use multiple data sources to triangulate the eCPM of every single impression. As you might have guessed, this demands a mammoth engineering effort, for which you might want to employ a third-party tool such as SOOMLA. Once the data is integrated into the company database, publishers can carry out A/B testing and get the results directly in their own BI, or view them through the third-party tool’s dashboard. This simplifies A/B testing and allows a culture of testing and optimization to take hold.

Strengths: This is the most accurate way of A/B testing, and it can also result in millions of dollars in revenue improvement.

Weaknesses: Using a third-party tool can be expensive, but it is usually worth it, as the ROI becomes evident rather quickly.

5. Testing Creatives

Creative elements you can A/B test include images, colors, styling, value propositions, text, CTAs, logos, video length, landing pages, and so on. Experiment with these elements, ideally changing one at a time, to see which changes make your ads stand out from the rest of the interface and from other advertisements.

Strengths: Creatives can be easy to test.

Weaknesses: It can be easy to fall into the trap of treating multiple different creatives as a single element, which can produce inconclusive or misleading results.


Keep calm and continue A/B testing

A/B testing and optimization is a never-ending process. Even once you’ve improved your app, there will always be room for further improvement, as conditions change and technologies develop, especially in the field of ad-based app monetization. It is important to keep that in mind and make a consistent effort to stay up to date with trends, changing conditions, and the user experience by A/B testing everything.
