Understanding the new analytics dashboard in ABConvert is key to making confident decisions from your A/B test results. This guide walks you through each section of the dashboard and explains what each metric and chart means.
The new dashboard has two tabs at the top:
Overview — everything you need to evaluate your test: key metrics, performance charts, conversion funnel, revenue breakdown, and a segmentation table.
Test settings — your test configuration (audience rules, products, traffic allocation, force-assign setup) and the full experiment history timeline.
At the very top of the page, you'll see your experiment name — which you can edit inline — along with the test status and the dates it was created and started.
Overview tab
Basic Information
Displays your test dates, duration, total visitors, selected primary metric, and the configured traffic split percentage.
Two filters are available: Date and Compare.
Date — select the time range you want to analyze; the last-updated time and a reload button appear alongside it.
Compare — select which variant group to compare against the control. All analytics in ABConvert are pairwise: control group vs. one test group at a time. There is no combined "all groups" view, and no direct comparison between two test groups, due to how the analytics are computed. If your test has 3 groups (Control, Variant A, Variant B), the Compare dropdown will have 2 options. Selecting one updates the entire page.
Key Metrics
The Key Metrics section gives you an at-a-glance verdict on your test. It's the first place to look when checking whether you have a winner.
Leading declaration
When there's a clear leader, ABConvert shows a trophy icon and a line <Current selected variant group>'s <Primary Metric> is <uplift%> higher/lower than Control
For example, "Variant B (+15%)'s Revenue per Visitor is +13.3% higher than Control"
This means the system has determined that this variant performs best. The uplift% metric shown (Revenue per Visitor, Conversion Rate, etc.) is based on your test goal.
We also display the Prob. Beat Control%, which is the probability that the variant is genuinely better than the control — not just by random chance. This is the same figure as Win Chance in the chart, shown here as a single number.
When there's not yet a clear leader, you will see your Key Metrics display like this:
In this case, we recommend letting the test continue to run and collect data until a clear winner emerges.
Metric definitions
Est. Monthly Revenue Uplift
An estimate of how much additional monthly revenue you could expect if you roll out this variant to 100% of your traffic:
Est. Monthly Revenue Uplift = (variant revenue per visitor − control revenue per visitor) × daily visitors × 30 days
This is an extrapolation based on test traffic — useful for understanding the business impact, not a guarantee.
Revenue Uplift %
The percentage difference in revenue between the variant and the control, measured per visitor:
Uplift % = (variant revenue per visitor − control revenue per visitor) / control revenue per visitor × 100%
Example: 12.4% means visitors in the variant group generated 12.4% more revenue on average.
Est. Monthly Profit Uplift
The estimated additional monthly profit from rolling out the variant:
Est. Monthly Profit Uplift = (variant profit per visitor − control profit per visitor) × daily visitors × 30 days
Profit Uplift %
The same calculation as Revenue Uplift, but based on profit per visitor rather than revenue. This is typically the more important number — a variant could increase revenue but reduce margin (e.g. if it drives higher discounts or lower-margin products). If you've set up cost data, always check this alongside Revenue Uplift.
Profit Uplift % = (variant profit per visitor − control profit per visitor) / control profit per visitor × 100%
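The uplift formulas above can be sketched in a few lines of code. This is an illustration only — the function names and sample figures are hypothetical, not ABConvert's internals:

```python
def uplift_pct(variant_per_visitor: float, control_per_visitor: float) -> float:
    """Percentage difference per visitor (works for revenue or profit)."""
    return (variant_per_visitor - control_per_visitor) / control_per_visitor * 100

def est_monthly_uplift(variant_per_visitor: float, control_per_visitor: float,
                       daily_visitors: int, days: int = 30) -> float:
    """Extrapolated additional monthly revenue/profit at a 100% rollout."""
    return (variant_per_visitor - control_per_visitor) * daily_visitors * days

# Hypothetical example: control RPV $2.50, variant RPV $2.81, 1,000 visitors/day
print(round(uplift_pct(2.81, 2.50), 1))             # 12.4 (%)
print(round(est_monthly_uplift(2.81, 2.50, 1000)))  # 9300 ($ per month)
```

Note how the monthly figure scales directly with the per-visitor difference — which is why it's an estimate of business impact, not a guarantee.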
Performance overview
The Performance section shows how your test groups are trending over time. There are three sub-tabs: Performance, Win chance, Significance. You can switch the metric across all three tabs:
• CVR (Conversion Rate) — % of visitors who completed an order
• RPV (Revenue per Visitor) — average revenue per visitor
• PPV (Profit per Visitor) — average profit per visitor
• AOV (Average Order Value) — average order value
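The four switchable metrics are simple ratios of the same underlying totals. As a sketch (the totals below are hypothetical, not pulled from any API):

```python
def metrics(visitors: int, orders: int, revenue: float, profit: float) -> dict:
    """Compute the four chart metrics from aggregate totals."""
    return {
        "CVR": orders / visitors,   # conversion rate
        "RPV": revenue / visitors,  # revenue per visitor
        "PPV": profit / visitors,   # profit per visitor
        "AOV": revenue / orders,    # average order value
    }

# Hypothetical group totals
m = metrics(visitors=10_000, orders=400, revenue=25_000.0, profit=10_000.0)
print(m)  # CVR 0.04, RPV 2.5, PPV 1.0, AOV 62.5
```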
Performance
Line chart of the selected metric over time for control vs. variant.
Win chance
A winning probability, newly added to our analytics: how likely the variant is to be outperforming the control, based on current data. Win chance runs a simulation on your current results and updates over time as more visitors participate in the test. It's especially useful for smaller merchants who can't realistically reach the sample size required for statistical significance; you can track the win chance trend to build confidence in a direction.
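ABConvert's exact simulation isn't documented here, but a common way to compute this kind of winning probability is a Bayesian Monte Carlo estimate: sample plausible conversion rates for each group from its posterior and count how often the variant comes out ahead. A minimal sketch, assuming Beta posteriors over conversion rates:

```python
import random

def win_chance(control_conv: int, control_n: int,
               variant_conv: int, variant_n: int,
               sims: int = 50_000) -> float:
    """Monte Carlo estimate of P(variant CVR > control CVR), drawing from
    Beta(conversions + 1, non-conversions + 1) posteriors for each group."""
    random.seed(42)  # fixed seed so the illustration is reproducible
    wins = 0
    for _ in range(sims):
        c = random.betavariate(control_conv + 1, control_n - control_conv + 1)
        v = random.betavariate(variant_conv + 1, variant_n - variant_conv + 1)
        if v > c:
            wins += 1
    return wins / sims

# Hypothetical example: 40/1000 control vs. 55/1000 variant conversions
print(win_chance(40, 1000, 55, 1000))  # roughly 0.94
```

Because it's simulation-based rather than threshold-based, this number is informative even at sample sizes where a significance test would still be inconclusive — which is exactly why it helps lower-traffic stores.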
Significance
Statistical significance is the traditional method for evaluating test results. The goal is to determine whether the difference in performance between your control and variant is real — or just the result of random chance.
How it works
The test runs a statistical hypothesis: we start by assuming control and variant perform the same. If the data shows a large enough difference, we can reject that assumption and conclude there's a real effect. This is expressed as a p-value.
Default settings
ABConvert's new analytics uses fixed statistical settings:
• Hypothesis type: two-sided — tests whether the variant is different in either direction (better or worse), which is the standard for most A/B tests
• Confidence level: 95% — means you need a p-value below 0.05 for a result to be considered statistically significant
Understanding p-value and confidence level
The p-value is calculated from a Z-score — which measures how different the two conversion rates are compared to what's expected by chance. A lower p-value means the result is less likely due to random variation.
Confidence level determines how strict you are when evaluating the result. The default threshold is 95% confidence (p-value < 0.05): at this level, there is a less than 5% probability of obtaining the observed results, or more extreme results, given that the null hypothesis is true.
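The Z-score and p-value described above correspond to a standard two-proportion z-test. As an illustrative sketch (not ABConvert's implementation), using only the standard library:

```python
import math

def two_sided_p_value(c_conv: int, c_n: int, v_conv: int, v_n: int):
    """Two-proportion z-test on conversion rates.
    Returns (z, p); p < 0.05 is significant at the 95% confidence level."""
    p_control = c_conv / c_n
    p_variant = v_conv / v_n
    pooled = (c_conv + v_conv) / (c_n + v_n)  # null: both rates are equal
    se = math.sqrt(pooled * (1 - pooled) * (1 / c_n + 1 / v_n))
    z = (p_variant - p_control) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p

# Hypothetical example: 400/10,000 control vs. 480/10,000 variant
z, p = two_sided_p_value(400, 10_000, 480, 10_000)
print(z, p)  # z ≈ 2.76, p ≈ 0.006 → significant at 95%
```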
Result table
Uplift
The percentage difference in the metric you selected between the variant and the control:
Uplift = (variant rate − control rate) / control rate × 100%
Green = performing better than control. Red = performing worse.
Probability to Beat Control
The same as Confidence, framed as a probability question: "What is the probability that this variant outperforms the control?" When this number is 95%+, you can reliably conclude the variant is outperforming.
Probability to Be Best
When testing multiple variants simultaneously, this answers: "Of all variants including control, what is the probability this one performs best?" A variant can beat the control but still not be the overall winner if another variant has even stronger results.
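The distinction between "beats control" and "is best overall" falls out naturally if you estimate both by simulation: draw a plausible rate for every group at once and record which group wins each draw. A hedged sketch (group names and figures are hypothetical):

```python
import random

def prob_to_be_best(groups: dict, sims: int = 50_000) -> dict:
    """Monte Carlo estimate of P(group has the highest CVR) for each group.
    `groups` maps a name to (conversions, visitors)."""
    random.seed(7)  # fixed seed for a reproducible illustration
    best_counts = {name: 0 for name in groups}
    for _ in range(sims):
        draws = {name: random.betavariate(c + 1, n - c + 1)
                 for name, (c, n) in groups.items()}
        best_counts[max(draws, key=draws.get)] += 1
    return {name: count / sims for name, count in best_counts.items()}

probs = prob_to_be_best({"Control":   (40, 1000),
                         "Variant A": (48, 1000),
                         "Variant B": (60, 1000)})
print(probs)  # probabilities across all groups sum to 1
```

In this sketch, Variant A can beat Control in many draws yet rarely be the single best group, because Variant B wins those draws — mirroring the situation described above.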
Confidence interval plot
Shows the confidence interval for each group's result (depending on which metric you chose) — the range where the true rate most likely falls. A narrower bar indicates more certainty, typically from a larger sample. When a variant's interval doesn't overlap with the control's interval, it's a strong visual signal that the difference is real.
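For a conversion rate, one simple way to compute such an interval is the normal-approximation (Wald) interval — shown here as an illustration, not necessarily the method ABConvert uses:

```python
import math

def conversion_ci(conversions: int, visitors: int, z: float = 1.96):
    """95% normal-approximation confidence interval for a conversion rate."""
    rate = conversions / visitors
    margin = z * math.sqrt(rate * (1 - rate) / visitors)
    return rate - margin, rate + margin

# Hypothetical example: 40 conversions from 1,000 visitors
lo, hi = conversion_ci(40, 1000)
print(lo, hi)  # roughly 0.028 to 0.052
```

Quadrupling the sample halves the margin, which is why the bars narrow as the test accumulates traffic.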
How to interpret results
For reliable insights, ensure your test has accumulated enough data — typically at least 10,000 visitors and 200 orders. At the 95% confidence level:
• p < 0.05 → statistically significant (real effect likely)
• p ≥ 0.05 → not yet significant (could be random variation)
If your store has lower traffic, Win Chance (above) may be more practical for decision-making while significance is still building.
Conversion
The conversion funnel shows how your customers (by session) move through the purchase journey, broken down by group (control vs. variant). It helps you understand where in the funnel the differences between your groups are showing up.
The funnel tracks three stages for all sessions:
• Add to cart: sessions that added a product to the cart
• Reached checkout: sessions that proceeded to Shopify's standard checkout page
• Completed checkout: sessions that completed a purchase via Shopify's standard checkout and reached the thank-you page
Note: We track events via Shopify, so external checkout pages or express methods (Shop Pay, Google Pay, PayPal) that bypass Shopify's standard checkout page and thank-you page will not trigger tracking events, and will not be counted in the conversion funnel.
Each stage shows the count and the drop-off rate to the next stage. Comparing control vs. variant across the funnel can reveal whether a price change is affecting cart behavior, checkout hesitation, or final conversion — not just the overall CVR number.
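The stage-to-stage rates described above reduce to three simple ratios. A sketch with hypothetical session counts:

```python
def funnel_rates(sessions: int, add_to_cart: int,
                 reached_checkout: int, completed: int) -> dict:
    """Stage-to-stage progression rates through the three funnel stages."""
    return {
        "add_to_cart_rate": add_to_cart / sessions,          # browse → cart
        "checkout_rate": reached_checkout / add_to_cart,     # cart → checkout
        "completion_rate": completed / reached_checkout,     # checkout → order
    }

# Hypothetical group: 10,000 sessions, 1,200 carts, 600 checkouts, 300 orders
print(funnel_rates(10_000, 1_200, 600, 300))  # 0.12, 0.5, 0.5
```

Comparing these three rates between control and variant localizes the effect: a price change that only moves `add_to_cart_rate` is shaping browsing behavior, while one that moves `completion_rate` is shaping the final purchase decision.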
Revenue
The Revenue section shows pairwise revenue metrics comparing your control and variant groups side by side, each with a bar chart for quick visual comparison.
Revenue per Visitor (RPV)
Average revenue per visitor. This is the single most useful revenue metric for A/B tests: it captures both conversion rate and order value in one number, so you can see the actual revenue impact of your variant without separating the two.
Profit per Visitor (PPV)
Average profit per visitor, calculated using your COGS settings on the ABConvert settings page. Useful if you're testing a price change and want to understand the margin impact, not just top-line revenue.
Average Order Value (AOV)
Average value of all orders placed. Helpful for understanding whether the variant is changing how much customers spend per transaction, independent of how many visitors convert.
Breakdown table
The breakdown table lets you segment your results by different dimensions, so you can see whether the test performs differently across customer groups.
Use the tabs at the top of the table to switch dimensions:
• All: overall results without segmentation
• Country: results grouped by visitor country
• Device: desktop vs. mobile
• Visitor type: new visitors vs. returning visitors
• Traffic source: organic, direct, paid, referral, etc.
The columns in the table are also configurable: choose from visitors, sessions, add to cart, reached checkout, completed checkout, orders, CVR (sessions), CVR (visitors), AOV, RPV, and PPV. Select only what's relevant to your current analysis.
Test settings tab
The Test settings tab shows the full configuration of your experiment alongside the results. It's useful for reviewing exactly what was set up without leaving the analytics page.
Test configuration
The left side displays your test's configuration; different test types provide different configuration options.
The right side displays your test details and audience settings, including:
Traffic health: whether your experiment script and WebPixel tracking are running healthily; contact Support if it shows unhealthy
Traffic allocation: the percentage of total store visitors who were entered into the experiment
Audience targeting: any audience filter applied (e.g. only new visitors, only visitors from specific countries, specific UTM parameters, custom JS conditions)
Force assign rules: any force assign rules applied
Experiment history
A timeline of everything that happened to this experiment: when it was created, started, paused, edited, and ended. Each event is timestamped. This gives you a clear audit trail — especially helpful if the test was modified mid-run and you want to understand whether changes affected the results.