A/B Testing

How to set up and analyze A/B tests

The IzeAds A/B Testing feature lets you compare different versions of offer pages to discover which one converts better. Traffic is automatically split between variants, and results are checked for statistical significance using the chi-squared test.

Creating an A/B test

1. Go to the A/B Tests page

In the side menu, click "A/B Tests". You will see the list of existing tests and their statuses.

2. Create a new test

Click "New Test". Set a descriptive name for the test, such as "Original LP vs New LP".

3. Select the tracker

Choose the tracker that will be used for the test. Traffic from this tracker will be split between the variants.

4. Define the variants

Add at least two variants. For each one, set a name (e.g., "Control", "Variant B"), the page URL, and the share of traffic it should receive.

5. Configure the traffic split

Distribute traffic between the variants. For a two-variant test, the default is 50/50. The total must equal 100%.

6. Start the test

Activate the test. From this point on, visitors will be randomly distributed among the variants according to the configured split.
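The random distribution described in this step can be sketched as a weighted draw. The helper below is only an illustration of the idea (IzeAds performs this server-side); the `assign_variant` function and the variant names are assumptions, not part of the product's API.

```python
import random

def assign_variant(variants):
    """Pick a variant via weighted random assignment.

    `variants` maps variant name -> traffic percentage.
    Hypothetical helper for illustration only.
    """
    if sum(variants.values()) != 100:
        raise ValueError("Traffic split must total 100%")
    names = list(variants)
    weights = [variants[name] for name in names]
    # random.choices performs a single weighted draw
    return random.choices(names, weights=weights, k=1)[0]

# A 50/50 two-variant test: each visitor lands on one of the two pages
variant = assign_variant({"Control": 50, "Variant B": 50})
```

Over many visitors, the observed share of each variant converges to its configured percentage, which is why the split must total exactly 100%.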

Understanding the results

IzeAds automatically calculates statistical significance using the chi-squared test. On the results page, you will find:

  • Visitors per variant: How many users saw each version
  • Conversions: Number of sales or leads for each variant
  • Conversion rate: Percentage of visitors who converted
  • Statistical significance: Indicates whether the difference between variants is statistically significant (p-value below 0.05) or could be attributed to chance
  • Winning variant: When significance is reached, IzeAds indicates which variant performed best
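The significance check above can be reproduced by hand. The sketch below computes the chi-squared statistic for a 2x2 table of converted vs. non-converted visitors; the visitor and conversion numbers are made up, and this is the standard textbook test, not IzeAds's internal implementation.

```python
def chi_squared_2x2(conv_a, visitors_a, conv_b, visitors_b):
    """Chi-squared statistic for a 2x2 contingency table:
    rows = variants, columns = converted / not converted."""
    table = [
        [conv_a, visitors_a - conv_a],
        [conv_b, visitors_b - conv_b],
    ]
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Made-up example: 120/2000 (6%) vs 160/2000 (8%) conversions
stat = chi_squared_2x2(120, 2000, 160, 2000)
# With 1 degree of freedom, stat > 3.841 corresponds to p < 0.05
significant = stat > 3.841  # stat is about 6.14 here, so significant
```

In practice you would read the p-value directly from the IzeAds results page; this only shows what the significance indicator is computing.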

Sample size

Do not make decisions based on premature results. The chi-squared test requires a minimum volume of visitors and conversions to be reliable. Wait until the statistical significance indicator reaches at least 95% confidence (p-value below 0.05) before declaring a winner. Tests ended too early can lead to wrong conclusions.
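To estimate how long a test needs to run, you can approximate the required visitors per variant with the standard two-proportion sample-size formula. This is a generic statistical approximation, not something IzeAds documents; the function name and default z-values (95% confidence, 80% power) are assumptions.

```python
from math import sqrt, ceil

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a change
    from conversion rate p1 to p2 (two-sided alpha = 0.05, 80% power).
    Standard two-proportion formula; illustrative only."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a lift from 5% to 6% needs several thousand visitors
# per variant; larger lifts need far fewer
n_small_lift = sample_size_per_variant(0.05, 0.06)
n_large_lift = sample_size_per_variant(0.05, 0.10)
```

Notice how the required sample grows quickly as the expected lift shrinks; this is why small differences take so long to confirm.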

Best practices

Test one variable at a time (title, image, price, CTA). If you change too many elements simultaneously, you won't know which change caused the difference in results. Keep the test running for at least 7 days to capture behavior variations throughout the week.