How does OpenWrap A/B testing work?

An A/B test is a split test, allowing you to compare two versions of something to see which one performs better. The OpenWrap A/B test feature allows you to test two profile configurations and then compare the results. You can use the test data to optimize the profile settings.  

Each bid request is randomly assigned to the control group or test group. Random assignment is essential for valid results because it eliminates the possibility of sampling bias.

  • Group A = This is the control group. Results for this traffic are based on the profile version as is.
  • Group B = This is the test group. Results for this traffic are based on the modified profile. 
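The split amounts to an independent random draw per bid request. A minimal sketch of the idea (a hypothetical helper, not OpenWrap's actual code), assuming a configurable test-group percentage:

```javascript
// Hypothetical sketch of the random split; not OpenWrap's actual code.
// Each bid request is bucketed independently of any user or page attribute,
// which is what keeps the two groups free of sampling bias.
function assignGroup(testGroupSizePct) {
  return Math.random() * 100 < testGroupSizePct ? 'B' : 'A'; // B = test, A = control
}

// With a 10% test group, roughly one request in ten lands in group B.
var counts = { A: 0, B: 0 };
for (var i = 0; i < 100000; i++) {
  counts[assignGroup(10)]++;
}
```

Because every request gets its own draw, the test group is a representative sample of overall traffic.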

Types of A/B testing

OpenWrap supports three types of A/B tests:

Test type            Web   AMP   In-app   OTT/CTV
Auction timeout      ✔️     ✔️     ✔️        ✔️
Identity Providers   ✔️     —     —        —

Client-side vs Server-side
(coming soon)

Auction Timeout

Measures the effect of bidder timeouts on monetization. This test allows you to set a different auction timeout on a specified percentage of traffic to see the effect on revenue and latency. 

  • Short timeouts result in faster page loads and slightly improve viewability.
  • Longer timeouts are better for monetization.
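The trade-off can be made concrete with a toy auction. The bidder names, latencies, and CPMs below are invented for illustration; they are not OpenWrap defaults:

```javascript
// Illustrative only: a shorter timeout drops bids that arrive late,
// trading potential revenue for a faster page.
function bidsWithinTimeout(bids, timeoutMs) {
  return bids.filter(function (bid) { return bid.latencyMs <= timeoutMs; });
}

var bids = [
  { bidder: 'fastBidder', latencyMs: 150, cpm: 1.20 }, // hypothetical bidder
  { bidder: 'slowBidder', latencyMs: 900, cpm: 2.50 }, // hypothetical bidder
];

var shortAuction = bidsWithinTimeout(bids, 300);  // faster page, misses the $2.50 bid
var longAuction  = bidsWithinTimeout(bids, 1000); // captures both bids, adds latency
```

An auction-timeout A/B test measures which side of this trade-off nets out better on your actual traffic.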


Bidders

Measures the effect of adding or removing bidders on monetization. This test allows you to quantify the overall effect of adding or removing bidding partners. 

  • Adding a bidder will usually show some revenue on that bidder, but how much of it is true incremental revenue and how much is just a shift from other bidders?
  • The same is true for removing a bidder.
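Comparing per-request revenue across the two groups answers that question. A sketch of the arithmetic, with invented numbers: if the test group (which includes the extra bidder) earns only slightly more per thousand requests than the control group, most of the new bidder's reported revenue was shifted from other bidders rather than incremental.

```javascript
// Hypothetical numbers; the point is comparing group-level RPMs,
// not the revenue attributed to the new bidder on its own line item.
function rpm(revenue, requests) {
  return 1000 * revenue / requests; // revenue per 1,000 requests
}

var controlRpm = rpm(900, 1000000);  // control group: without the new bidder
var testRpm    = rpm(945, 1000000);  // test group: with the new bidder
var liftPct    = 100 * (testRpm - controlRpm) / controlRpm;
// Even if the new bidder reports a larger revenue figure of its own,
// the true incremental lift here is only 5%; the rest was demand
// shifted away from existing bidders.
```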

Identity Providers

Measures the effect of adding or removing Identity Providers on monetization. This test allows you to quantify:

  • How much incremental revenue is obtained from adding a particular ID provider.
  • How much revenue is lost when removing an ID provider.

When looking at results, keep in mind that gains are much larger for cookieless traffic (Safari and Firefox) than for Chrome, so the overall lift understates the percentage gain on the cookieless share of traffic. Reporting doesn’t currently break results out by browser.

Set up an A/B test

Only one A/B test can be performed at a time. If you want to perform more than one test, create a new profile version for each one. 

  1. Create an OpenWrap profile as you normally would. 
  2. Enable A/B Testing. You can edit the profile version to disable the test at any time. 
  3. Select the Test Group Size and Test Type.
  4. Enter the test group criteria. The control and test configurations must differ; otherwise, the system will not let you save the profile.
  5. Access the results in the Profile Details page. 

    Sample results page:

Custom A/B tests

You can use code on the page to run custom tests.

  • Set PWT.testGroupId to a number between 0 and 15
    • 0 is the control group
    • 1-15 are test groups
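For example, a sketch of an on-page snippet that routes roughly 10% of page views into test group 1 (run it before the OpenWrap tag loads; the `window` shim is only there so the snippet also runs outside a browser):

```javascript
// Route ~10% of page views into test group 1; everything else stays in control (0).
var w = typeof window !== 'undefined' ? window : globalThis; // browser, or shim for testing
w.PWT = w.PWT || {};                                         // OpenWrap namespace
w.PWT.testGroupId = Math.random() < 0.10 ? 1 : 0;
```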

The results of a custom test won’t show up with the A/B test results in the profile version list, so you'll need to use Report Builder to get them.

Frequently asked questions

How long should an A/B test run?

One week. This avoids biased results that can occur from day-of-the-week seasonality.

What sampling percentage should I choose?

Best practice is to set the percentage high enough to get at least 100,000 paid impressions in the test group. Fewer than 100,000 paid impressions can result in higher sampling error.
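As a back-of-the-envelope check (a hypothetical helper, not part of OpenWrap), you can work backwards from your weekly volume to the smallest percentage that clears the threshold:

```javascript
// Smallest test-group percentage expected to yield at least `target`
// paid impressions over the test window (defaults to 100,000).
function minTestGroupPct(weeklyPaidImpressions, target) {
  target = target || 100000;
  return Math.min(100, Math.ceil(100 * target / weeklyPaidImpressions));
}

// e.g. a profile serving 5,000,000 paid impressions per week
// needs only a 2% test group to clear 100,000.
var pct = minTestGroupPct(5000000);
```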

Can I run more than one test at a time?

Not at this time. This feature currently supports one test per profile version.

Can I test multiple combinations?

No. This feature doesn't support multi-variate testing.