Reading analytics is a core part of running A/B tests. In ABConvert, we have a dedicated analytics dashboard that reports data to our users in real time.
In this article, we dive into how our analytics works and how to interpret your data.
Overview
In this article, we will explain three major parts of our analytics:
Dashboard components
Data collection process & calculation of metrics
Interpreting data
Dashboard components
There are three components in the analytics dashboard:
Summary section - reports summary statistics of your test
Statistical tool - examines the statistical significance of your data
Breakdown table - displays your data at a granular level
Summary section
In this section, we report summary statistics of your test.
In the first card, users can see the number of visitors and the traffic trend in a graph. This allows them to monitor traffic.
In the second card, we report data in the form of a conversion funnel. Users can see at which point their customers drop off and observe the change in rate for different variants.
In the third card, users can view average revenue and profit metrics. By looking at these metrics, users can tell which variant is better for increasing profit or AOV.
Statistical tool
In order to make a fair decision based on the A/B test result, we provide our users with a statistical analysis tool to examine whether the result is significant.
In brief, in each A/B test we want to know whether version A is better than version B. But an observed difference might simply be the result of luck; therefore, we need to run a statistical hypothesis test.
The hypothesis is:
Version A is equal to version B.
Ideally, we want to reject this hypothesis, i.e., conclude that version A is not equal to version B (either better or worse), so we can make a decision based on that result.
Hence, we provide a statistical significance tool for our users to check the statistical results.
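To make this concrete, here is a minimal sketch of the kind of test involved, assuming the objective is a conversion rate and using a standard pooled two-proportion z-test. The numbers and the exact test are illustrative of the general technique, not necessarily the exact procedure ABConvert runs.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical example numbers, not real test data.
visitors_a, conversions_a = 5000, 250   # version A
visitors_b, conversions_b = 5000, 300   # version B

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Pooled two-proportion z-test of H0: rate_A == rate_B.
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (rate_b - rate_a) / se

p_two_sided = 2 * norm.sf(abs(z))   # H1: the rates differ (either direction)
p_one_sided = norm.sf(z)            # H1: version B converts better than A

print(f"rate A={rate_a:.3f}, rate B={rate_b:.3f}, z={z:.2f}")
print(f"two-sided p={p_two_sided:.4f}, one-sided p={p_one_sided:.4f}")
```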
In the Statistical Tool section, there are several parts:
Hypothesis settings
Result table
Hypothesis settings
In this part, users can set up how the hypothesis test is run.
By selecting a one-sided or two-sided hypothesis, we are actually saying this:
We want to test whether version A is equal to version B.
If the hypothesis is rejected, version A is worse than version B (one-sided)
If the hypothesis is rejected, version A is not equal to version B (two-sided)
Since the hypothesis test ultimately gives a yes-or-no answer, choosing a one-sided or two-sided test depends on the user's objective.
As for the confidence level, setting it essentially decides how difficult it is to reject the hypothesis.
If users choose a lower confidence level (e.g., 90%), there is a higher chance that we conclude versions A and B are different when in reality they are not.
The higher the level, the larger the difference we need to see in a test before making a decision.
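As a minimal sketch of how these two settings feed into the decision: the confidence level sets the significance threshold, and the sidedness determines which p-value is compared against it. The threshold convention below is the standard one, not necessarily ABConvert's exact implementation.

```python
def is_significant(p_value: float, confidence_level: float = 0.95) -> bool:
    """Standard decision rule: reject the hypothesis when the p-value falls
    below alpha = 1 - confidence level. A one-sided test uses a one-tailed
    p-value, a two-sided test a two-tailed one (see the test sketch above)."""
    alpha = 1.0 - confidence_level
    return p_value < alpha

# Hypothetical example: the same p-value can pass at 90% but fail at 95%.
print(is_significant(0.07, confidence_level=0.90))  # True
print(is_significant(0.07, confidence_level=0.95))  # False
```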
Result Table
In this part, we can see the results generated by the hypothesis test.
It reports the following metrics:
Lift
Confidence
P-value
Conclusion
Lift
This metric tells you by what percentage version B increases your objective rate compared to version A. Ideally, you want to see a positive lift.
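For illustration with hypothetical rates, lift is the relative change of version B's rate over version A's:

```python
# Hypothetical rates; lift is the relative improvement of B over A.
rate_a, rate_b = 0.050, 0.060
lift = (rate_b - rate_a) / rate_a
print(f"Lift: {lift:+.1%}")   # +20.0%
```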
Confidence
This metric indicates how confident you can be in the result. If the confidence is higher than the confidence level you set, the result is trustworthy, and a decision can be made based on the conclusion.
P-value
This metric is generated from the t-test of the hypothesis. The lower the value, the more significant the result. Read more here.
Conclusion
We draw a conclusion for each objective based on these metrics.
The underlying logic is that when we observe a positive lift and the result is significant (at the chosen confidence level), we conclude it's significantly better.
When the lift is negative and the result is significant, we conclude it's significantly worse.
Otherwise, it's either not significantly better or not significantly worse.
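Putting the pieces together, this conclusion logic can be sketched roughly as follows. It is an illustration of the stated rules, not ABConvert's actual code.

```python
def conclusion(lift: float, p_value: float, confidence_level: float = 0.95) -> str:
    """Map lift and significance to a verdict, following the logic above."""
    significant = p_value < (1.0 - confidence_level)
    if significant and lift > 0:
        return "significantly better"
    if significant and lift < 0:
        return "significantly worse"
    return "not significantly better" if lift > 0 else "not significantly worse"

print(conclusion(lift=0.20, p_value=0.01))   # significantly better
print(conclusion(lift=-0.05, p_value=0.40))  # not significantly worse
```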
Breakdown Table
In this section, you can see the raw data at different granularities.
There are three levels of granularity:
Test group level
Product level
Variant level
Users can switch between levels to look at the data from different angles and understand the test result.
They can also choose which columns to display, as in any dashboard, and hide information they don't need.
We also let users add costs to their products if they are not set in the Shopify backend. Once we have the product costs, we can calculate profit.
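As a rough sketch of how a product cost turns into profit (the field names below are illustrative, not ABConvert's data model):

```python
# Illustrative only: once a unit cost is known, profit is revenue minus
# cost times quantity, summed over the order's line items.
line_items = [
    {"price": 40.0, "unit_cost": 22.0, "quantity": 2},
    {"price": 25.0, "unit_cost": 15.0, "quantity": 1},
]

revenue = sum(item["price"] * item["quantity"] for item in line_items)
cost = sum(item["unit_cost"] * item["quantity"] for item in line_items)
profit = revenue - cost
print(f"revenue={revenue:.2f}, cost={cost:.2f}, profit={profit:.2f}")
```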
Data collection process & calculation of metrics
There are two methods of data collection in our analytics:
Storewide data
Product view-based data
These two methods are designed to support different types of stores since each store has a different way to attract traffic to different pages.
Storewide data
In this method, we focus on the visitor. Each visitor is defined by a 30-minute session: for example, if a customer comes back to your website several times within 30 minutes, it is counted as one visitor.
Each visitor can contribute at most one event per event type. These are the event data we collect:
Add to cart
Checkout
Order
So in this method, one visitor can contribute at most one add to cart, one checkout, and one order, even if they add to cart several times and order more than one product.
Therefore, the conversion rates are calculated as follows (see the sketch below):
Add to cart rate: # of add to carts / # of visitors
Checkout rate: # of checkouts / # of visitors
Order rate: # of orders / # of visitors
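Here is a minimal sketch of this counting logic with made-up events. The event format is an assumption for illustration, not ABConvert's actual tracking schema; the point is that repeat events from the same visitor are de-duplicated and each rate divides distinct event counts by distinct visitors.

```python
# Illustrative only: storewide counting with made-up events.
# Each visitor counts at most once per event type, no matter how often they act.
events = [
    {"visitor": "v1", "type": "add_to_cart"},
    {"visitor": "v1", "type": "add_to_cart"},   # duplicate, ignored
    {"visitor": "v1", "type": "checkout"},
    {"visitor": "v1", "type": "order"},
    {"visitor": "v2", "type": "add_to_cart"},
]
visitors = 100  # distinct visitors in the 30-minute-session sense

distinct = {t: set() for t in ("add_to_cart", "checkout", "order")}
for e in events:
    distinct[e["type"]].add(e["visitor"])

for event_type, who in distinct.items():
    print(f"{event_type} rate: {len(who) / visitors:.2%}")
```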
If a store has more traffic to the home page and most people add to cart on the home page, then this method is more suitable, since we can observe the visitor behaviour regardless of page type.
Also, for users who tend to add more products to a test, this metric might be similar to their own Shopify analytics.
Product view-based data
In this method, we focus on the product view. A product view is counted each time a customer opens a product page in the same browser session: if they open a browser window and visit the same product page several times, it counts as one product view.
Each product view can contribute at most one event of each type for that product. These are the event data we collect:
Add to cart
Checkout
Order
In this method, one visitor can view several products and add several products to the cart; this is counted as multiple product views and multiple add to carts.
Therefore, the conversion rates are calculated as follows (see the sketch below):
Add to cart rate: # of add to carts for the product / # of product views
Checkout rate: # of checkouts for the product / # of product views
Order rate: # of orders for the product / # of product views
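By contrast, here is a minimal sketch of product-view based counting with made-up data (again, the field names are illustrative): events are de-duplicated per product view rather than per visitor, and rates are computed per product.

```python
# Illustrative only: product-view based counting with made-up data.
# Each (product view, event type) pair counts at most once, per product.
views = [
    {"view_id": "pv1", "product": "shirt"},
    {"view_id": "pv2", "product": "shirt"},
    {"view_id": "pv3", "product": "mug"},
]
events = [
    {"view_id": "pv1", "product": "shirt", "type": "add_to_cart"},
    {"view_id": "pv1", "product": "shirt", "type": "add_to_cart"},  # duplicate
    {"view_id": "pv1", "product": "shirt", "type": "order"},
    {"view_id": "pv3", "product": "mug", "type": "add_to_cart"},
]

views_per_product = {}
for v in views:
    views_per_product[v["product"]] = views_per_product.get(v["product"], 0) + 1

distinct_events = {(e["view_id"], e["product"], e["type"]) for e in events}
for product, n_views in views_per_product.items():
    for event_type in ("add_to_cart", "checkout", "order"):
        n = sum(1 for (_, p, t) in distinct_events if p == product and t == event_type)
        print(f"{product} {event_type} rate: {n}/{n_views} = {n / n_views:.0%}")
```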
This method is suitable for users who want to observe product-level changes between versions and those who mainly direct traffic to product pages.
Interpreting data
After a test, we want to make a decision based on the data.
In this section, we focus on how to interpret data for price testing, but the logic can be applied to all tests.
There are two steps to interpreting the data:
Step 1
If I lower the price, I want to see the conversion rate significantly increase.
If I increase the price, I don't want to see the conversion rate significantly decrease.
Step 2
In either case, we want to see profit increase.
Ideally, when lowering the price, we want to see the conversion rate significantly increase and profit per view increase.
When increasing the price, we want to see the conversion rate not significantly decrease and profit per view increase.
If the conversion rate hasn't changed significantly, we check whether profit per view changes significantly.
If not, then we stick with the original price.
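The two steps can be sketched as a simple decision rule. This is an illustration of the reasoning above, reusing the conclusion categories from the statistical tool; it is not ABConvert's actual logic, and the function name and inputs are hypothetical.

```python
def price_test_decision(price_change: str, cr_conclusion: str,
                        profit_per_view_up: bool) -> str:
    """Sketch of the two-step interpretation for a price test.
    price_change: "lower" or "higher"; cr_conclusion: the conversion-rate
    verdict from the statistical tool (e.g. "significantly better")."""
    if price_change == "lower":
        # Step 1: a lower price should significantly lift the conversion rate.
        if cr_conclusion == "significantly better" and profit_per_view_up:
            return "adopt the lower price"
    else:
        # Step 1: a higher price should not significantly hurt the conversion rate.
        if cr_conclusion != "significantly worse" and profit_per_view_up:
            return "adopt the higher price"
    # Step 2 not satisfied: fall back to the original price.
    return "keep the original price"

print(price_test_decision("lower", "significantly better", profit_per_view_up=True))
print(price_test_decision("higher", "not significantly worse", profit_per_view_up=False))
```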
Conclusion
In this article, we walk you through how ABConvert's analytics works and offer some guidance on making decisions.
If you have any questions, feel free to email [email protected]. We hope you enjoy this article.