CommunityMod_WM
Community Manager

This article provides recommendations to help Enhanced Content subscribers be successful in their A/B testing endeavors, including guidelines for how to prepare to build an experiment even before entering the Syndigo platform and how to improve the likelihood that experiments will result in statistically significant findings.

The best practices covered in this article may be summarized as follows:

  • Preparation involves identifying the goals of the program, developing a detailed plan to achieve the goals, and compiling the assets required.
  • Select mid-tier or second-tier performing products to test, focusing on those with minimal external factors influencing shopper behavior.
  • Determine the configuration of the experiment based on product traffic data – Generally, at least 3,000 unique, net-new shoppers must be qualified into the A/B test to reach statistical significance.

 

Preparation is Key

These are the Syndigo-recommended preparation steps for companies preparing to build an Enhanced Content A/B test:

 

Step 1:

Identify the goal(s) the organization is seeking to achieve through Enhanced Content A/B testing and the stakeholders involved. Confirm that the current release of Syndigo contains the features and data needed to help achieve these goals.

  1. The “Add a widget” test type enables you to attribute value to a specific creative asset.
  2. The full custom test type allows you to choose between two very different versions of content if you don’t want value attribution.

Every A/B test should be designed to give the experiment the best chance of success. An experiment can be considered successful in either of the following cases, depending on your goals:

  • If your goal is to attribute value to specific creative assets and/or content layouts, then your A/B test should result in the algorithm calculating a high confidence level and providing you with a declaration that one content version performs better than the other.
  • If your goal is to uncover whether there is any notable difference in performance between different assets and/or layouts, then you will look for either a declaration that one content version is better or that the two versions do not produce different results.

When the experiment is statistically significant, either outcome reveals a meaningful story about the performance and influence of enhanced content!

 

Step 2:

Develop A/B testing plans as far in advance as possible.

An A/B testing plan should contain the following information:

  1. The names of the individuals or teams within the organization who will be involved in any aspect of the A/B test or its results, with a special focus on the users who will build the experiments in Syndigo and the organization's decision makers
  2. The clear and concise question that is going to be answered by the experiment
  3. Background information that helps provide context to the goal and question
  4. The experiment hypothesis – “We believe that conversion rates will improve if we add a Feature Set widget with the compelling images and marketing copy from our most recent campaign.”
  5. Product selection – Reference the following section, What makes a product “ideal” for A/B Testing?
  6. Specification of the experiment variable – the single factor that will differ between Content A and Content B that will help prove the hypothesis – “A Feature Set widget added to Content B that includes four images and corresponding marketing copy.”
  7. The test settings:
    • What Enhanced Content will serve as the Content A (and note whether that Enhanced Content needs to be created)
    • What experiment type enables testing the hypothesis specified
    • Which retailer websites and product page URLs the experiment should be published to
    • When the experiment should begin and end
    • Whether the winning content version should be published immediately at the conclusion of the test, or whether the system should revert to Content A on all sites
  8. The actions that the organization will take based on the potential results –
    • If A is better than B
    • If B is better than A
    • If there is no difference between A and B
    • If the experiment is not statistically significant
  9. A communications plan for the organization stakeholders

 

Step 3:

Compile the assets required to execute the plan. An A/B testing plan should include the following deliverables:

  • The assets and layouts for Content A, and
  • The assets and layouts for the variable in Content B.

 

What Makes a Product “Ideal” for A/B Testing?

Enhanced content performance is most efficiently measured across two cohorts of visitors:

  1. Those who are in the exploratory phase of their shopping journey, uncertain of whether a specific product meets their needs
  2. Shoppers who are ready to purchase, but are in the process of deciding between competing brands

When the primary type of shopper driven to visit your product pages falls into one of the two above cohorts, the likelihood of being able to effectively measure the impact of enhanced content greatly increases.

There is a third cohort of shoppers that may be observed visiting your product pages on retailer websites: Those who are visiting the product page solely with the intent to purchase, having made the decision to add to cart due to influences outside of the rich media and information provided (often in advance of their visit). Effective marketing campaigns, sales, promotions, historical loyalty and other external factors contribute to the percentage of visitors who fall into this third cohort.

Ready-to-buy visitors muddle the measurements of enhanced content’s impact on conversion rate. While this type of shopper contributes greatly to a product’s overall success, they also reduce the statistical significance of enhanced content experiments. For this very reason, lower conversion rate lift and incremental revenue KPIs are observed with products backed by viral social media marketing campaigns as well as those that already possess a significant market share.

As a result, Syndigo recommends selecting a product for A/B testing that may be considered a mid-tier or second-tier performer: A product for which there is some opportunity to acquire a greater share of the market, but for which sales have been relatively stable for some length of time.

Avoid:

  • New product launches, as they are often accompanied by a general sense of excitement and numerous omnichannel marketing campaigns
  • Products that are projected to “take off” in sales due to external influences during the time period that the A/B test will remain active
  • Products that are already established as the leaders in their categories, with minimal competition from other brands

 

Improve the Outcomes of Your Experiment Through Its Configuration

An A/B test may be considered unsuccessful when the algorithm calculates low confidence that there is a winner. This outcome indicates that not enough data was collected during the experiment.
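Syndigo does not publish its confidence algorithm, but the relationship between data volume and confidence can be illustrated with a standard two-proportion z-test. In the sketch below, the same one-point conversion-rate lift that is inconclusive with 500 shoppers per version becomes a high-confidence result with 3,000 per version. All figures, and the test itself, are illustrative stand-ins, not Syndigo's actual calculation:

```python
import math

def confidence(conv_a, n_a, conv_b, n_b):
    """Two-sided confidence that versions A and B truly differ,
    via a pooled two-proportion z-test (an illustrative stand-in
    for Syndigo's own algorithm, which is not public)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    return math.erf(z / math.sqrt(2))  # P(|Z| < z) for a standard normal

# The same 4% -> 5% conversion lift at two different traffic volumes:
small = confidence(20, 500, 25, 500)      # 500 shoppers per version
large = confidence(120, 3000, 150, 3000)  # 3,000 shoppers per version
print(f"small sample: {small:.0%} confidence; large sample: {large:.0%}")
```

With an identical lift, only the larger sample clears a typical 90% confidence bar, which is the intuition behind the impression guidance in the tips below.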

How can you improve the likelihood that your experiment will collect enough data to declare statistical significance?

Tip #1: Configure the experiment to collect 3,000 or more impressions across all the product page URLs where the A/B test is published.

This is achieved by analyzing the data that is available today. Review the Enhanced Conversion – Products or Widget Insights reports in Report Center, filtering down to the specific product or a similar product for which there is currently Enhanced Content published. Make note of how many visits each product page URL collects over different timeframes – 30 days, 90 days, and beyond.

By setting a baseline such as “My product collects 3,000 impressions over 45 days across these 6 retailer websites”, you then have the guidance you need to configure the A/B test to collect the volume of insights required to declare an outcome with a high level of confidence.
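The baseline reduces to simple arithmetic: total the recent visits across the retailer URLs you plan to publish to, derive a daily rate, and divide the target impression count by it. A minimal sketch, assuming made-up URLs and traffic figures:

```python
import math

# Back-of-the-envelope duration check using hypothetical Report
# Center traffic (the URLs and visit counts below are made up).
TARGET_IMPRESSIONS = 3000

visits_last_30_days = {
    "retailer-a.example/product/123": 1100,
    "retailer-b.example/product/123": 640,
    "retailer-c.example/product/123": 410,
}

daily_rate = sum(visits_last_30_days.values()) / 30
days_needed = math.ceil(TARGET_IMPRESSIONS / daily_rate)
print(f"~{daily_rate:.0f} impressions/day across {len(visits_last_30_days)} "
      f"sites -> run the test at least {days_needed} days")
```

If the estimated duration is impractically long, widen the experiment to additional retailer sites or product page URLs before shortening the target.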

Tip #2: Design the experiment so that the difference between the content versions is substantial enough to be immediately observed by visitors. Utilize the most desirable real estate – above-the-fold or hero content and the first widget in the layout – and the most eye-catching, engaging creative assets, such as video, 360s, or interactive tours.

Zoom out of the full preview of both content versions until you can set them side by side on your screen: Is there an immediately recognizable difference between them, or does it take more than 10 seconds to notice? The most worthwhile use of A/B testing is to design the content versions so that visitors are as likely as possible to notice the specific element being tested.
