A/B & Multivariate Testing

A/B and multivariate testing let you experiment with search ranking, recommendations, facets, UI modules, and zero-results strategies, so you can validate changes and measure their impact.

Overview

A/B testing compares two versions (control vs. variant), while multivariate testing evaluates several variables at once to find the best-performing combination. Both help you validate changes before full deployment and measure their true impact.
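To make the distinction concrete, the sketch below (illustrative only; the variable names and values are hypothetical, not part of the product) enumerates the two arms of an A/B test versus the combinations a multivariate test would cover.

```python
from itertools import product

# A/B test: exactly two arms.
ab_variants = ["control", "new-ranking-v2"]

# Multivariate test: every combination of the chosen variable values.
# These variables and values are made-up examples.
variables = {
    "facet_layout": ["list", "grid"],
    "facet_count": [5, 10],
    "facet_order": ["alphabetical", "by_popularity"],
}

combinations = [
    dict(zip(variables.keys(), values))
    for values in product(*variables.values())
]

print(f"A/B arms: {len(ab_variants)}")                     # 2
print(f"Multivariate combinations: {len(combinations)}")   # 2 * 2 * 2 = 8
for combo in combinations:
    print(combo)
```

Because the number of combinations grows multiplicatively, multivariate tests need noticeably more traffic than a simple A/B test to reach significance.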

How to Create an A/B Test

Step 1: Navigate to Experimentation

  1. Go to Merchandising → Experimentation & Measurement
  2. Click Create New Test

Step 2: Configure Test Type

  1. Test Type: Select A/B Test or Multivariate Test
  2. Test Name: Enter a descriptive name (e.g., "Ranking Strategy v2", "New Facet Layout")

Step 3: Define Variants

For A/B tests:

  1. Control Variant:
     - Name: "Control" (or your baseline name)
     - Traffic Percentage: Set the percentage of traffic (e.g., 50%)
     - Configuration: Select your baseline configuration
  2. Variant:
     - Name: Enter a descriptive name for your test variant
     - Traffic Percentage: Set the remaining percentage (e.g., 50%)
     - Configuration: Select or create the configuration you want to test

For Multivariate tests:

  1. Variables: Select the variables you want to test (e.g., facet layout, facet count, facet order)
  2. Variable Values: Define the values for each variable
  3. Traffic Percentage: Set what percentage of traffic should see the test (a traffic-split sketch follows this list)
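The platform handles how users are split across variants; the sketch below is only an illustration of the common approach of deterministic hash-based bucketing, which keeps the same user in the same variant for the life of the test. The function and field names are hypothetical.

```python
import hashlib

# Hypothetical traffic split: variant name -> percentage of traffic.
TRAFFIC_SPLIT = {"control": 50, "new-facet-layout": 50}

def assign_variant(user_id: str, test_name: str) -> str:
    """Deterministically bucket a user into a variant.

    Hashing user_id + test_name gives the same user the same variant
    every time, while different tests get independent splits.
    """
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # 0-99

    threshold = 0
    for variant, percentage in TRAFFIC_SPLIT.items():
        threshold += percentage
        if bucket < threshold:
            return variant
    return "control"  # fallback if percentages sum to less than 100

print(assign_variant("user-123", "Ranking Strategy v2"))
```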

Step 4: Select What to Test

Choose what you're testing:

  - Ranking Strategy: Test different ranking configurations
  - Recommendations: Test different recommendation algorithms
  - Facets: Test facet layouts, counts, or ordering
  - UI Modules: Test interface changes
  - Zero-Results Strategies: Test different approaches to handling zero results

Step 5: Set Metrics

Select the metrics you want to track:

  - Conversion rate: Percentage of searches that result in purchases
  - Revenue per user: Average revenue generated per user
  - Click-through rate: Percentage of results that get clicked
  - Add-to-cart rate: Percentage of results added to cart
  - Time to purchase: Average time from search to purchase
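The platform computes these metrics for you; as a rough reference, the sketch below shows how they relate to raw event counts. The event fields, denominators, and numbers are made-up examples, and exact definitions (e.g., whether click-through rate is measured per search or per result shown) may differ.

```python
# Hypothetical aggregated event counts for one variant.
events = {
    "searches": 10_000,
    "clicks": 3_200,
    "add_to_carts": 900,
    "purchases": 450,
    "revenue": 22_500.00,  # total revenue attributed to this variant
    "users": 4_000,
}

conversion_rate = events["purchases"] / events["searches"]      # purchases per search
click_through_rate = events["clicks"] / events["searches"]      # clicks per search
add_to_cart_rate = events["add_to_carts"] / events["searches"]  # add-to-carts per search
revenue_per_user = events["revenue"] / events["users"]          # average revenue per user

print(f"Conversion rate:    {conversion_rate:.1%}")
print(f"Click-through rate: {click_through_rate:.1%}")
print(f"Add-to-cart rate:   {add_to_cart_rate:.1%}")
print(f"Revenue per user:   ${revenue_per_user:.2f}")
```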

Step 6: Configure Test Duration

  1. Duration: Set how long the test should run (e.g., 14 days)
  2. Statistical Significance: Set confidence level (typically 95%)
  3. Minimum Sample Size: Set the minimum number of users per variant required for valid results (see the sample-size sketch after this list)
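If you want a rough sense of how duration and minimum sample size relate, the sketch below estimates the users needed per variant to detect a given relative lift in conversion rate at 95% confidence and 80% power. It uses the standard two-proportion normal-approximation formula, not the platform's exact calculation, and all numbers are examples.

```python
import math

def sample_size_per_variant(baseline_rate: float, relative_lift: float) -> int:
    """Approximate users per variant to detect `relative_lift` in conversion
    rate at 95% confidence with 80% power (two-proportion z-test)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = 1.96  # two-sided, 95% confidence
    z_beta = 0.84   # 80% power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Example: 4% baseline conversion rate, detect a 10% relative lift.
n = sample_size_per_variant(0.04, 0.10)
print(f"~{n} users per variant")

# With roughly 2,500 eligible users per variant per day, that implies:
print(f"~{math.ceil(n / 2500)} days")
```

Working backwards from your traffic like this helps confirm that the duration you set in step 1 can actually reach the sample size set in step 3.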

Step 7: Set Guardrails (Optional)

Define metrics that should trigger alerts or stop the test:

  - Zero results rate: Alert if above threshold
  - Revenue per user: Alert if drops below threshold
  - Error rate: Auto-stop if errors exceed threshold
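Conceptually, a guardrail is just a threshold check on a live metric. The sketch below is a hypothetical illustration of that logic; the real thresholds, metric names, and alerting are configured in the UI as described above.

```python
# Hypothetical guardrail rules: metric, comparison, threshold, action.
GUARDRAILS = [
    {"metric": "zero_results_rate", "op": "above", "threshold": 0.15, "action": "alert"},
    {"metric": "revenue_per_user",  "op": "below", "threshold": 4.50, "action": "alert"},
    {"metric": "error_rate",        "op": "above", "threshold": 0.02, "action": "stop"},
]

def check_guardrails(live_metrics: dict) -> list[str]:
    """Return the actions ('alert' or 'stop') triggered by current metrics."""
    triggered = []
    for rule in GUARDRAILS:
        value = live_metrics.get(rule["metric"])
        if value is None:
            continue
        breached = (value > rule["threshold"]) if rule["op"] == "above" \
                   else (value < rule["threshold"])
        if breached:
            triggered.append(f"{rule['action']}: {rule['metric']} = {value}")
    return triggered

# Example check against made-up live metrics for the test variant.
print(check_guardrails({"zero_results_rate": 0.18,
                        "revenue_per_user": 5.10,
                        "error_rate": 0.01}))
```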

Step 8: Save and Activate

  1. Click Save Test
  2. Review the test configuration
  3. Click Activate to start the test

Monitoring Tests

Track test performance:

  1. Go to Experimentation → Active Tests
  2. View real-time metrics:
     - Conversion rates for each variant
     - Revenue per user
     - Statistical significance (see the sketch after this list)
     - Sample sizes
  3. Review guardrail alerts if any are triggered
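The dashboard reports statistical significance for you. If you want to understand what that figure means, the sketch below runs a standard two-proportion z-test on made-up conversion counts to get a p-value for the difference between control and variant; it is an illustration, not the platform's exact method.

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference in conversion rate between
    variant A and variant B (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Made-up numbers: 400/10,000 conversions on control vs. 460/10,000 on variant.
p = two_proportion_z_test(400, 10_000, 460, 10_000)
print(f"p-value: {p:.3f}")
print("significant at 95%" if p < 0.05 else "not yet significant at 95%")
```

A p-value below 0.05 corresponds to the 95% confidence level set in step 6; until the sample size from step 6 is reached, treat interim significance readings with caution.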

Best Practices

  1. Define clear hypotheses: Know what you're testing and why
  2. Set adequate sample sizes: Ensure you have enough traffic for statistical significance
  3. Run long enough: Don't stop tests too early—allow time for patterns to emerge
  4. Monitor guardrails: Watch for negative impacts on key metrics
  5. Test one change per A/B test: Avoid bundling multiple changes into a single variant; use a multivariate test when you need to compare combinations
  6. Document results: Keep records of test outcomes for future reference