
CTR Optimization

Introduction - Setting Up a CTR Project Step-by-Step


This guide walks you through the complete process of setting up, running, and analyzing a CTR project – from the very beginning through launch. It is based on the workflow shown in the demo video.


Purpose of a CTR Test

A CTR test measures which version of your product presentation (e.g. main image, title/claim, price signal) is clicked more often in a realistic Amazon-like environment – against identical competitors in all variants. This allows you to identify the winning version before investing budget into product, ads or listings.



Table of Contents

  • Prerequisites (Step 0)
  • 1. Create a New Project
  • 2. Set Up Scenarios & Variants
  • 3. Define the question you want to ask customers when they click on a specific product
  • 4. Form Questions
  • 5. Audience & Sample Size
  • 6. Display & Randomization
  • 7. Configure Questions
  • 8. Quality Checks (QA)
  • 9. Recruitment & Launch
  • 10. Analyze Results
  • 11. Decision & Rollout
  • Best-Practice Rules
  • Common Mistakes & Fixes
  • Pre-Launch Mini Checklist
  • Example Naming Convention
  • Sample Size Guidelines
  • Reporting Guidelines


Prerequisites (Step 0)

  • Access to your workspace/team and permission to create new projects
  • Assets for the variant(s) you want to test: product image(s), title/claim, optional price signal
  • Competitor list (ASIN/URL or clear keywords) – ideally max. 3–4 competitor products
  • Hypothesis, e.g. “Lifestyle image increases CTR vs. packshot”


💡 Pro tip: Test only one variable per project (image or price or claim) to ensure results are clear and interpretable.



1. Create a New Project

  • Navigate to Projects in the top menu.
  • Click + Add a new project
  • Select the project type CTR Conversion Test


Watch this video to get guided through the project creation:





2. Set Up Scenarios & Variants

  • Click Add a new product.
  • Choose between the different options for adding products. (Link to be added once the related article is published.)
  • Add your product for testing and up to 3 competitor products (4 products total).
  • Mark your product in the settings by designating it as "Your product for testing".

Figure 1: Adding a Test Product manually

  • Create at least one additional scenario (Scenario B) in which to test your hypothesis that the change generates a higher CTR. Optionally add more scenarios (C, D, …).

Figure 2: Adding a Variation


When you add a variation, all information is copied from the existing scenario to the new one. You can then change one element in this scenario to test whether the change has an impact on your CTR. For each scenario, configure your own product card: upload an image/mockup, or adjust the price signal/title/claim (change only the tested variable).

  • Optional: You can edit further settings when clicking on the small gear icon ⚙️. These settings are explained in detail with information buttons within the platform.


⚠️ Important: Between A and B, only your product's variable changes. Competitors stay the same. By default, each participant sees a random sort order to eliminate sorting effects and focus only on the differences between the scenarios.


Watch this video to get guided through the Scenario Setup:




3. Define the question you want to ask customers when they click on a specific product

The standardized question is shown in English. It is automatically translated into the language of the country you are testing in.

If you add your own question, please enter it in the correct language (the language of the country you are testing).


Figure 3: Define a Business Question



4. Form Questions



Project name: precise and recognizable (e.g. “Vitamin B12 – Main Image A vs. B – DE”).

Internal description: purpose, hypothesis, tested variable, date/owner.

Market / Marketplace: e.g. Amazon.de (choose correct country & language).

Category: match your product niche (e.g. Supplements, Beauty, Pet).


5. Audience & Sample Size

  • Participant filter: active Amazon shoppers (optional filters: age, gender, Prime, country).
  • Quotas (optional): balance by target profile or census.
  • Sample size: at least 200 participants per variant (rule of thumb).
  • A/B → 400 total
  • A/B/C → 600 total


Significance note: The smaller the expected effect (e.g. +3pp), the larger your sample should be.
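As a rough guide, the required sample size can be estimated with the standard normal-approximation formula for comparing two proportions. The sketch below is illustrative only: the function name, the baseline CTR, and the defaults (95% confidence, 80% power) are assumptions, not platform values.

```python
from math import ceil

# Rough sample-size estimate per variant for a two-proportion comparison
# (normal approximation). z_alpha = 1.96 -> 95% confidence (two-sided),
# z_power = 0.8416 -> 80% power. Function name is illustrative.
def n_per_variant(p_base: float, lift_pp: float,
                  z_alpha: float = 1.96, z_power: float = 0.8416) -> int:
    p_var = p_base + lift_pp / 100                    # expected CTR of the variant
    var = p_base * (1 - p_base) + p_var * (1 - p_var)
    n = (z_alpha + z_power) ** 2 * var / (p_var - p_base) ** 2
    return ceil(n)

# A +10pp lift from a 25% baseline needs far fewer participants than +3pp:
print(n_per_variant(0.25, 10))   # a few hundred per variant
print(n_per_variant(0.25, 3))    # several thousand per variant
```

This is why the ≥ 200 per variant rule of thumb only holds for fairly large expected effects.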



6. Display & Randomization

  • Display mode: Grid view like Amazon (keep default).
  • Position randomization: shuffle product & competitor cards randomly to avoid position bias.
  • Scenario assignment: randomize participants into A/B (50/50).
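The two randomization layers above can be sketched as follows (card names and scenario labels are invented for illustration; the platform handles this for you):

```python
import random

# Illustrative sketch of the two randomization layers: per-participant
# card shuffling and 50/50 scenario assignment. Names are made up.
cards = ["Your product", "Competitor 1", "Competitor 2", "Competitor 3"]

def serve_participant(participant_id: int) -> dict:
    scenario = random.choice(["A", "B"])  # 50/50 scenario assignment
    order = cards[:]
    random.shuffle(order)                 # random card order per participant
    return {"participant": participant_id, "scenario": scenario, "order": order}

print(serve_participant(1))
```

Shuffling per participant means no card systematically benefits from a top-left position, so CTR differences can be attributed to the tested variable.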



7. Configure Questions

Mandatory purchase question:

“When shopping on Amazon, which product would you purchase?” (single choice on product cards)


Recommended follow-ups:

“What made you click this product?” (multi-select + free text)

“What stopped you from clicking other products?” (multi-select + free text)


Optional: “How confident are you in your choice?” (scale)

These follow-ups reveal the drivers and barriers behind CTR results.


8. Quality Checks (QA)

  • Preview/Test run: open scenarios in preview (check desktop & mobile).
  • Content check: correct images, correct prices/claims, correct competitors.
  • Technical check: randomization enabled, cards clickable, questions/options correct.
  • Attention checks (optional): simple validation question to detect inattention.



9. Recruitment & Launch

  • Start sampling (activate recruiting).
  • Monitor progress and data quality (drop-out rates, time-on-task).
  • Pause or adjust if needed (e.g. rebalance quotas).

👉 Typically, n=400 (A/B) is reached quickly, depending on your filters.


10. Analyze Results

  • CTR per variant (share of clicks on your product card): compare A vs. B.
  • Significance: check confidence (e.g. 95%) and p-value for A vs. B.
  • Drivers/Barriers: review follow-up results (top reasons, verbatim quotes).
  • Benchmarking (if available): compare to category/market.
  • Exports: download charts/CSV/PPTX for stakeholders.
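For a quick sanity check of the A-vs-B comparison outside the platform, a two-proportion z-test can be computed with the Python standard library. The click counts below are invented example numbers, and the platform reports significance for you; this is only a sketch of the underlying check.

```python
from math import sqrt, erf

def ctr_significance(clicks_a: int, n_a: int, clicks_b: int, n_b: int) -> float:
    """Two-sided p-value of a two-proportion z-test (normal approximation)."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)          # pooled click rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via the error function; two-sided p-value
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 50/200 clicks (25% CTR) for A vs. 70/200 clicks (35% CTR) for B:
p = ctr_significance(50, 200, 70, 200)
print(f"p = {p:.4f}")  # below 0.05 -> B's lift is significant at 95% confidence
```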


11. Decision & Rollout

  • Roll out the winning variant on PDP/Ads.
  • Checklist for implementation: replace main image, update title/claim, refresh ad creatives.
  • Optional retest: after live rollout, test again to validate stability.



Best-Practice Rules

  • Test one variable per project (image or price or claim).
  • Keep competitors constant; only your card changes.
  • Always enable randomization (card position + scenario assignment).
  • Use sufficient N (≥ 200 per variant) for reliable results.
  • Always include the qualitative “why” (follow-ups).
  • Document hypothesis, setup, results, decision.



Common Mistakes & Fixes

  • Unclear effects: too many variables changed → redo test with isolated variable.
  • No significant difference: N too small → increase sample or recheck effect size.
  • Position bias: card order not randomized → enable randomization.
  • Competitors differ between scenarios: must be fixed → otherwise invalid comparison.



Pre-Launch Mini Checklist

[ ] Hypothesis & goal defined

[ ] Correct market/category selected

[ ] Variants differ by only one variable

[ ] Competitors identical & fixed

[ ] Randomization enabled

[ ] Follow-ups added

[ ] QA preview successful (desktop + mobile)



Example Naming Convention

[Brand] – [Product] – [Variable] – [Variant(s)] – [Market] – [YYYY-MM]

Example: Thorne – B12 – Main Image – A vs B – DE – 2025-08
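If you create many projects, a small helper can keep names consistent with this convention (a hypothetical helper, not part of the platform):

```python
from datetime import date

# Hypothetical helper that assembles a project name following the
# [Brand] – [Product] – [Variable] – [Variant(s)] – [Market] – [YYYY-MM] convention.
def project_name(brand: str, product: str, variable: str,
                 variants: str, market: str, when: date) -> str:
    return f"{brand} – {product} – {variable} – {variants} – {market} – {when:%Y-%m}"

print(project_name("Thorne", "B12", "Main Image", "A vs B", "DE", date(2025, 8, 1)))
```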



Sample Size Guidelines

  • A/B: ≥ 200 per variant (400 total)
  • A/B/C: ≥ 200 per variant (600 total)
  • For small effects (< 5pp): plan larger N



Reporting Guidelines

  • Key result: “B beats A by +Xpp CTR (p<0.05)”
  • Drivers/Barriers: top 3–5 reasons + 2–3 verbatim quotes
  • Recommendation: what to roll out, by when, and owner
  • Risks/Next tests: what follow-up validation is planned




Updated on: 14/10/2025
