How does OpenWrap A/B testing work?

An A/B test is a split test, allowing you to compare two versions of something to see which one performs better. The OpenWrap A/B test feature allows you to test two profile configurations and then compare the results. You can use the test data to optimize the profile settings.  

Each bid request is randomly assigned to either the control group or the test group. Random assignment is essential for valid results because it minimizes sampling bias.

  • Group A = the control group. Results for this traffic are based on the profile version as-is.
  • Group B = the test group. Results for this traffic are based on the modified profile.
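
As a purely illustrative sketch (not OpenWrap's internal code; the function name, the 10% split, and the return labels are invented for this example), per-request random assignment works like this:

```javascript
// Illustrative only: independent random assignment of each bid request.
// assignGroup and the 10% test group size are assumptions for this example.
function assignGroup(testGroupPct) {
  // Each request lands in the test group with probability testGroupPct / 100.
  return Math.random() * 100 < testGroupPct ? "B (test)" : "A (control)";
}

assignGroup(10); // roughly 10% of calls return "B (test)"
```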

OpenWrap supports three types of A/B tests:

Test type                                | Web | AMP | In-app | OTT/CTV
-----------------------------------------|-----|-----|--------|--------
Auction timeout                          | ✔️  | ✔️  | ✔️     | ✔️
Bidder                                   | ✔️  | ✔️  | ✔️     | ✔️
Identity Providers                       | ✔️  |     |        |
Client-side vs Server-side (coming soon) | ✔️  |     |        |

Configuring the A/B tests

How large should the test group be?

Best practice is to set the test group percentage high enough to get at least 100,000 paid impressions in the test group. Fewer than 100,000 paid impressions can result in higher sampling error.
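
For example, a rough way to size the test group (all volumes here are hypothetical):

```javascript
// Hypothetical sizing: choose a test group percentage that yields at least
// 100,000 paid impressions over a one-week test window.
const weeklyPaidImpressions = 20_000_000; // assumed profile volume
const targetTestImpressions = 100_000;
const testGroupPct =
  Math.ceil((targetTestImpressions / weeklyPaidImpressions) * 100 * 10) / 10;
// => 0.5 (%), so a 0.5% test group is enough at this volume
```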

Auction Timeout

Measures the effect of bidder timeouts on monetization. This test allows you to set a different auction timeout on a specified percentage of traffic to see the effect on revenue and latency. 

  • Short timeouts result in faster page loads and slightly improve viewability.
  • Longer timeouts are generally better for monetization.

Configure the test:

  1. Create an OpenWrap profile as you normally would. 
  2. Enable A/B Testing, then select the Test Group Size and Auction Timeout.
  3. Enter a Test Auction Timeout that's different from the Control Auction Timeout.
  4. To view the results, see Access the results.

  • Shorter auction timeouts help pages render faster; however, some bidders may time out and be excluded from the auction.
  • Longer timeouts increase the time it takes for the page to completely render but may improve monetization.
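
To make the tradeoff concrete, here is an illustrative sketch of how a hard timeout excludes slow bidders (the auction shape, runAuction, requestBid, and all latencies are invented, not OpenWrap internals):

```javascript
// Illustrative only: any bid that hasn't arrived when the timer fires is
// excluded from the auction.
function runAuction(bidders, timeoutMs) {
  const timer = new Promise((resolve) =>
    setTimeout(() => resolve({ timedOut: true }), timeoutMs)
  );
  // Race each bidder against the shared timeout; late bids lose the race.
  return Promise.all(
    bidders.map((bidder) => Promise.race([bidder.requestBid(), timer]))
  );
}
```

With a 200 ms timeout, a bidder that typically responds in 350 ms is almost always dropped; a 500 ms timeout includes it, at the cost of slower rendering.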

Partners

This test allows you to measure the overall effect of adding or removing bidding partners. 

  • Adding a bidder will usually attribute some revenue to that bidder, but how much of it is true incremental revenue and how much is just a shift from other bidders?
  • The same question applies when removing a bidder (see the illustration below).
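
A hypothetical illustration of why the group-level comparison matters (all numbers invented):

```javascript
// Hypothetical numbers: the new bidder appears to "earn" $0.20 CPM in the
// test group, but overall revenue only rises from $1.00 to $1.06, so the
// true incremental lift is about 6%; the rest shifted from other bidders.
const controlCpm = 1.0; // control group: profile without the new bidder
const testCpm = 1.06;   // test group: profile with the new bidder added
const liftPct = ((testCpm - controlCpm) / controlCpm) * 100; // ≈ 6
```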

Configure the test:

  1. Create an OpenWrap profile as you normally would. 
  2. Enable A/B Testing, then select the Test Group Size and Partners.
  3. Select the Test Partners for which you want to compare performance against the Control Partners.
  4. Set the test partners' Traffic Allocation and/or Bid Adjustment.
  5. To view the results, see Access the results.

  • Bid adjustment is generally used to convert gross bids to net or to adjust for discrepancies (see the example after this list).

  • Traffic Allocation for test partners should generally be set to 100%, since the Test Group Size already controls how much traffic test partners see.

  • See OpenWrap Profile Management to learn more about traffic allocation and bid adjustment.
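
As a hypothetical example of the gross-to-net conversion mentioned above (the fee percentage is invented):

```javascript
// Hypothetical: a bid adjustment of 0.85 backs a 15% fee out of gross bids,
// so a $2.00 gross bid competes in the auction as $1.70 net.
const grossBidCpm = 2.0;
const bidAdjustment = 0.85;
const netBidCpm = grossBidCpm * bidAdjustment; // 1.70
```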

Identity Partners

Measures the effect of adding or removing Identity Providers on monetization. This test allows you to quantify:

  • How much incremental revenue is gained by adding a particular ID provider.
  • How much revenue is lost when removing an ID provider.

When looking at results, keep in mind that gains are typically much larger on cookieless traffic (for example, Safari and Firefox), while Chrome traffic shows a smaller effect. Because reporting doesn't currently break results out by browser, the blended numbers will understate the percentage gain on the cookieless share of traffic.

Configure the test:

  1. Create an OpenWrap profile as you normally would.
  2. Enable Identity Partners and select your Identity Partners.
  3. Enable A/B Testing, then select the Test Group Size and Identity Provider.
  4. You'll notice the Identity Providers you selected for this OpenWrap Profile have become the Control Providers.
  5. Select the Test Providers (at least one must be different from the Control Providers).
  6. To view the results, see Access the results.

Adding a provider to the test group shows the publisher the incremental monetization from that provider. Removing a provider shows how much monetization is lost without it.

Access the results

How long should I let the A/B test run?

Best practice is one week. This avoids biased results that might occur from day-of-week seasonality.

To access the results, go to the Profile Details page and click the Results link.

Custom A/B tests

You can use code on the page to run custom tests.

  • Set PWT.testGroupId to a number between 0 and 15
    • 0 is the control group
    • 1-15 are test groups
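
For example, a minimal sketch that buckets each page view before the OpenWrap tag loads (PWT.testGroupId is from the docs above; the 50/50 split and the load-order assumption are illustrative):

```javascript
// Minimal sketch: set the custom test bucket before the OpenWrap tag loads.
// The even 50/50 split between groups 0 and 1 is an assumption.
window.PWT = window.PWT || {};
PWT.testGroupId = Math.random() < 0.5 ? 0 : 1; // 0 = control, 1 = test
```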

Custom test results won't appear alongside the A/B test results in the profile version list, so you'll need to use Report Builder to view them.

Frequently asked questions

How long should an A/B test run?

One week. This avoids biased results that might occur from day-of-week seasonality.

What sampling percentage should I choose?

Best practice is to set the percentage high enough to get at least 100,000 paid impressions in the test group. Fewer than 100,000 paid impressions can result in higher sampling error.

Can I run more than one test at a time?

Not at this time. This feature currently supports one test per profile version.

Can I test multiple combinations?

No. This feature doesn't support multivariate testing.